Kinetica: Software Engineer (Python) [Arlington, VA]

Work closely with the Product Owner to build the product in Python, integrating TensorFlow, Kubernetes, and our GPU-powered database through their Python bindings, and deliver it as a REST API.

At: Kinetica
Location: Arlington, VA


Company Description
When extreme data requires companies to act with unprecedented agility, Kinetica powers business in motion. Kinetica is the instant insight engine for the Extreme Data Economy. Across healthcare, energy, telecommunications, retail, and financial services, enterprises utilizing new technologies like connected devices, wearables, mobility, robotics, and more can leverage Kinetica for machine learning, deep learning, and advanced location-based analytics that are powering new services. Kinetica's accelerated parallel computing brings thousands of GPU cores to address the unpredictability and complexity that result from extreme data.

For more information and trial downloads, visit our website, or follow us on LinkedIn and Twitter @KineticaHQ.

Job Description

We are seeking a Python Software Engineer to join our accomplished team and help build out a new product line for our company.

Our team of engineers is building out a scalable, distributed machine learning and data science platform with tight integrations and pipelines to a distributed, sharded, GPU-powered database. This means the product must be developed on Linux, run inside containers (Docker, in our case), work in a container-orchestrated environment (Kubernetes, in our case), operate in a scalable resource-managed system (GPUs via Kubernetes), and interact with complex analytical systems (TensorFlow, etc.). Sounds interesting, right?

In this role, you will work alongside the ML Product Owner to build the product in Python, integrating TensorFlow, Kubernetes, and our GPU-powered database through their Python bindings, and deliver it as a REST API. This is a product engineering role: we are building a generic solution that works across many industries, use cases, clients, and data varieties, volumes, and velocities, rather than building for a specific use case or client. This offers a huge opportunity to make a mark on a fast-growing industry.
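To give a flavor of the work, here is a minimal sketch of the kind of REST layer described above: a thin Python HTTP service that would front the ML pipeline (TensorFlow jobs, Kubernetes scheduling, database queries). It uses only the standard library; the endpoint name (`/status`), the handler class, and the `serve` helper are illustrative assumptions, not the actual product API.

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer


class PipelineAPI(BaseHTTPRequestHandler):
    """Toy REST handler; the hypothetical /status route stands in for real
    endpoints that would dispatch work to TensorFlow, Kubernetes, or the DB."""

    def do_GET(self):
        if self.path == "/status":
            body = json.dumps({"service": "ml-pipeline", "healthy": True}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):
        # Silence per-request logging for this example.
        pass


def serve(port=0):
    """Start the toy API on a background thread; port 0 picks a free port."""
    server = HTTPServer(("127.0.0.1", port), PipelineAPI)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server
```

In the real product, each route would call into the Python bindings for the underlying systems (e.g., the Kubernetes Python client to launch GPU-backed jobs) rather than returning canned JSON.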

Job Responsibilities

  • Integrate a variety of components into an overall smooth-functioning product
  • Drive work to completion -- not just exploring options, but getting features over the finish line
  • Research products and keep abreast of marketplace offerings and possibilities
  • Work with commercial and open source packages to find stacks to achieve required product features
  • Work with a close-knit team to design and develop a release-quality commercial product
  • Work with our broader engineering group to ensure products fit into the company's product lineup
  • Work iteratively to hone proof-of-concepts for new product features and steadily merge development into the overall product
  • Stay attuned to the marketplace and spot opportunities to expand functionality as new technical capabilities arise
  • Stay attuned to customer usage and actively improve the product experience to meet both current needs and future needs customers may not yet realize they have

Basic Qualifications

  • Technical bachelor's degree
  • Proficiency with Python development in a Linux environment
  • Familiarity with SQL and databases
  • Familiarity with containerized Python applications (Docker specifically)
  • Familiarity with container orchestration (Kubernetes specifically), ideally via the Python bindings
  • Exposure to at least one open-source machine learning package (scikit-learn, TensorFlow, Caffe2, Torch, etc.)

Preferred Qualifications

  • Experience (3+ years) at a high-tech startup or technology/data science consultancy working with data science and DevOps tools; academic experience is a valid substitute (e.g., Ph.D., fellowship)
  • Strong communication skills as demonstrated by personal projects, technical blog postings, volunteer activities, etc.
  • Passion for Machine Learning and Data Science
  • Excitement about a small company with close team interactions and a fast-moving culture
  • Experience working with highly complex technical ecosystems (resource managers, containers, automated testing -- e.g., Mesos, Kubernetes, Docker)
  • Understanding of the data science ecosystem -- commercial and open source
  • Experience working with Python libraries and computational systems (e.g., NumPy, Pandas, Spark, etc.)
  • An active participant in the technology community (e.g., hackathons, Stack Overflow, Kaggle, open-source contributions/projects)

How To Apply

Apply online here

Applicants are encouraged to share online project/code portfolios and any demonstrations of community participation (e.g., public GitHub profiles, public Stack Overflow profiles, public Kaggle profiles, technical blog postings, etc.).

Additional Information
All your information will be kept confidential according to EEO guidelines.