Ten-X: Sr Data Engineer [San Mateo, CA]
Seeking a top-tier data engineer to work with our data engineering team on high-impact projects that improve data availability and quality and provide reliable access to data for the rest of the business.
Location: San Mateo, CA
Position: Senior Data Engineer
Ten-X Commercial is the CRE marketplace that acts as a force multiplier for sellers, buyers, and brokers. Ten-X precision-matches assets, accelerates close rates, and streamlines the entire transaction process, with more than $55 billion in sales and counting. Leveraging desktop and mobile technology, Ten-X lets people safely and easily complete real estate transactions entirely online. We bring quality assets to market and attract prospective investors from around the world. By virtue of our best-in-class marketing and scalable technology platform, buyers and sellers are able to transact efficiently.
Ten-X empowers consumers, investors and real estate professionals with unprecedented levels of flexibility, control and simplicity – and the convenience of transacting properties whenever and wherever they want. As real estate continues to move online, Ten-X is uniquely positioned at the forefront of this dramatic industry evolution.
Data, and our ability to leverage it, is championed as a key competitive advantage from our CEO on down. As our Senior Data Engineer, you will work with the data engineering team on high-impact projects with board-level visibility that improve data availability and quality and provide reliable access to data for the rest of the business.
Responsibilities:
- Design, architect, and support new and existing data and ETL pipelines, and recommend improvements and modifications.
- Create optimal data pipeline architecture and systems.
- Ingest data into our data lake and provide frameworks and services for operating on that data.
- Work with data scientists to dedupe and fuzzy-match data.
- Analyze, debug, and correct issues with data pipelines.
- Identify, design, and implement internal process improvements, automating manual processes and optimizing data delivery.
- Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of sources using SQL, Spark, and AWS technologies.
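To give candidates a flavor of the dedupe and fuzzy-matching work listed above, here is a minimal sketch using Python's standard-library difflib. The record format, similarity threshold, and function names are illustrative assumptions, not a description of Ten-X's actual pipeline:

```python
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Similarity ratio in [0, 1] between two normalized strings."""
    return SequenceMatcher(None, a.lower().strip(), b.lower().strip()).ratio()

def dedupe(records: list[str], threshold: float = 0.85) -> list[str]:
    """Keep the first record in each cluster of near-duplicates."""
    kept: list[str] = []
    for rec in records:
        # Drop the record if it is close enough to one we already kept.
        if not any(similarity(rec, k) >= threshold for k in kept):
            kept.append(rec)
    return kept

# Hypothetical property listings with a near-duplicate pair.
listings = [
    "100 Main St, San Mateo CA",
    "100 Main Street, San Mateo, CA",
    "42 Oak Ave, Burlingame CA",
]
print(dedupe(listings))
```

Note that this pairwise comparison is quadratic in the number of records; production dedupe jobs typically cut the candidate space first with blocking keys (e.g., grouping by ZIP code) before fuzzy-scoring within each block.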
Requirements:
- Undergraduate degree (ideally a master's) in a relevant quantitative subject (math, statistics, computer science, engineering, economics, etc.)
- 8+ years' experience in data engineering, data warehousing, and business intelligence, including 5+ years in a modern data stack environment (specifically the Hadoop stack) and 3+ years' Python experience related to data engineering
- Experience with iterative Agile methodologies and supporting tools such as JIRA, Confluence, and Git
- Experience in any of the following is a plus:
  - Clickstream data
  - Machine learning
  - Streaming data
  - Elasticsearch
  - Containers (Docker)
  - Fuzzy matching / NLP
- Ability to understand business problems and translate them into data engineering requirements
- Understanding of and familiarity with:
  - Hadoop and its related stack (Pig, Hive, HBase, etc.)
  - SQL and SQL databases
- Strong oral and written communication skills, with the ability to convey complex technical knowledge in meaningful terms
- Ability to work in a fast-paced environment and fluidly adapt to changing priorities
- Must be passionate about getting to the root cause of issues and driving to the underlying whys
- Proven ability to obtain buy-in from and partner with the data science team, including demonstrated ability to partner with functional leaders toward common goals
- Well-developed analytical and interpersonal skills with ability to draw conclusions and communicate/present them confidently and effectively to broad audiences, including senior leadership
- High energy and passion for solving business needs through data
- Organized, structured thinker with the ability to handle multiple assignments, remain calm under pressure, and digest information from multiple disparate sources
- Continuous improvement mindset
- Not afraid to challenge conventional thinking or analyses
- Experience migrating from on-premises infrastructure to the cloud, preferably AWS