
IBM: Big Data Architect


Seeking qualified candidates who will help shape the design and implementation of Big Data projects as part of multi-disciplinary technical teams.



Company: IBM
Location: Any City, USA
Web: www.ibm.com
Position: Big Data Architect

Contact:
Apply online.

IBM employees may earn a referral bonus for a successful hire.

Is data your jam? Do you think BIG data is better data? Do you think data lakes are more fun than real lakes? Do you enjoy solving business problems with data and communicating your recommendations to others? Do you get excited about learning new tools in the Hadoop ecosystem? Do you like to travel? Are you self-motivated, and do you like working on teams? This role might be for you!

We are seeking qualified candidates who will help shape the design and implementation of Big Data projects as part of multi-disciplinary technical teams.

In this role you will:

  • Be accountable for creating end-to-end solution design and development approach in a Hadoop/Spark environment
  • Be accountable for integration solution design and development, integrating Hadoop/Spark environments with analytic platforms (e.g., SAS, SPSS) and with Enterprise Information Management (EIM) and Data Warehouse (DW) platforms
  • Design, test, and continuously improve performance of Hadoop/Spark based solutions
  • Expertly utilize distributed/parallel processing for information management solution design and development
  • Perform hands-on development, coaching, and leadership through all project phases
  • Provide advisory help in selecting products and components as part of sales solutioning
  • Create new methods for Big Data and lead teams that are developing accelerators

You will be successful in this role if you enjoy problem solving and utilizing consulting skills. Team leadership experience is preferred.

In this dynamic role, you will have the opportunity to interact directly with clients. As such, travel to client sites may be required, up to 4 days per week. Preferred candidates will reside within a 50-mile radius of a major airport.

Stay connected by subscribing to the IBMjobs blog (blog.ibm.jobs) for career insights, news, and the latest job opportunities, and check out the links below for additional information on Big Data happenings at IBM.

http://www-935.ibm.com/services/us/business-consulting/tech-data/
http://www-935.ibm.com/services/us/gbs/thoughtleadership/2014analytics/
http://www-935.ibm.com/services/us/gbs/thoughtleadership/ibv-big-data-at-work.html

To be an official applicant to IBM, you must submit a resume and online application. Submitted resumes remain active for six months.

To all Recruitment Agencies: IBM accepts resumes only from agencies on our Approved Agency List. Please do not forward resumes to our applicant tracking system or to IBM employees, and do not send them to any IBM company location. IBM is not responsible for any fees related to unsolicited resumes.

Required Technical and Professional Expertise:

  • At least 2 years of experience in the Hadoop platform (such as Cloudera, Hortonworks, MapR, and/or IBM BigInsights).
  • At least 5 years of experience in data architecture.
  • At least 1 year of experience in a distributed cluster environment.
  • At least 2 years of experience using open source tools such as Java-based Hadoop 2.0 technologies (Hadoop, Sqoop, Hive, Storm, YARN, etc.), Python, and/or Bash/csh scripting.
  • At least 2 years of experience designing large data warehouses, with working knowledge of design techniques such as star and snowflake schemas.
  • At least 2 years of experience in various information modeling techniques.
  • At least 2 years of experience in a consulting environment.
  • At least 2 years of experience in the following components of the Hadoop ecosystem: Hive, HBase, Spark, Storm, YARN, Flume, and/or Oozie.

Preferred Technical and Professional Experience:

  • At least 5 years of experience in the Hadoop platform (such as Cloudera, Hortonworks, MapR, and/or IBM BigInsights).
  • At least 7 years of experience in data architecture.
  • At least 3 years of experience in a distributed cluster environment.
  • At least 5 years of experience using open source tools such as Java-based Hadoop 2.0 technologies (Hadoop, Sqoop, Hive, Storm, YARN, etc.), Python, and/or Bash/csh scripting.
  • At least 5 years of experience designing large data warehouses, with working knowledge of design techniques such as star and snowflake schemas.
  • At least 5 years of experience in various information modeling techniques.
  • At least 5 years of experience in a consulting environment.
  • At least 2 years of experience in hands-on ETL script development and batch processing.
  • At least 5 years of experience in the following components of the Hadoop ecosystem: Hive, HBase, Spark, Storm, YARN, Flume, and/or Oozie.

Preferred Education:
Bachelor's Degree

EO Statement:
IBM is committed to creating a diverse environment and is proud to be an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability, age, or veteran status. IBM is also committed to compliance with all fair employment practices regarding citizenship and immigration status.

