Specialist Solutions Architect - Data Intelligence Platform

Full Time

FEQ225R36

As a Specialist Solutions Architect (SSA) - Data Intelligence Platform, you will guide partners in building big data solutions on Databricks that span a wide variety of use cases. This is a partner-facing role that requires hands-on production experience with SQL and Apache Spark™, along with expertise in other data technologies; you will work with and support our field Solution Architects and partner teams. SSAs help partners design and successfully implement essential workloads while aligning their technical roadmap to expand usage of the Databricks Data Intelligence Platform. As a go-to expert reporting to Field Engineering leadership, you will continue to strengthen your technical skills through mentorship, learning, and internal training programs, and establish yourself in an area of specialty - whether that be data governance, data science, machine learning, streaming, performance tuning, industry expertise, or more.

The impact you will have:

  • Provide technical leadership to guide strategic partners to successful implementations on big data projects, ranging from architectural design to data engineering to model deployment
  • Architect production-level data pipelines, including end-to-end pipeline load performance testing and optimization
  • Become a technical expert in an area such as data lake technology, big data streaming, or big data ingestion and workflows
  • Assist Solution Architects with aspects of the technical sale as they work alongside partners, including customizing proof-of-concept content, estimating workload sizing, and designing custom architectures
  • Provide tutorials and training to improve partner community adoption (including hackathons and conference presentations)
  • Contribute to the Databricks Community

What we look for:

  • 7+ years of experience in a technical role with expertise in at least one of the following:
    • Software Engineering/Data Engineering: data ingestion, streaming technologies such as Spark Streaming and Kafka, performance tuning, troubleshooting, and debugging Spark or other big data solutions.
    • Data Applications Engineering: building use cases that use data, such as risk modeling, fraud detection, and partner lifetime value.
    • Data Science or Machine Learning Ops: designing and building production infrastructure, model management, and deployment of advanced analytics that drive measurable business value (i.e., getting models running in production).
  • Ability to work both collaboratively and independently to achieve outcomes supporting go-to-market priorities, with the interpersonal savvy to influence both partners and internal stakeholders without direct authority
  • Deep Specialty Expertise in at least one of the following areas:
    • Expertise in data governance systems and solutions that may span technologies such as Unity Catalog, Alation, Collibra, Purview, etc.
    • Experience with high-performance, production data processing systems (batch and streaming) on distributed infrastructure.
    • Experience building large-scale real-time stream processing systems; expertise in high-volume, high-velocity data ingestion, change data capture, data replication, and data integration technologies.
    • Experience migrating and modernizing Hadoop jobs to public cloud data lake platforms, including data lake modeling and cost optimization.
    • Expertise in cloud data formats like Delta and declarative ETL frameworks like DLT.
    • Expertise in building GenAI solutions such as RAG, fine-tuning, or pre-training for custom model creation.
  • Bachelor's degree in Computer Science, Information Systems, Engineering, or equivalent practical work experience.
  • Experience maintaining and extending production data systems to evolve with complex needs.
  • Production programming experience in SQL and Python, Scala, or Java.
  • Experience with the AWS, Azure, or GCP clouds.
  • 4+ years of professional experience with big data technologies (e.g., Spark, Hadoop, Kafka) and architectures
  • 4+ years of experience with a system integration partner, or customer-facing experience in a pre-sales or post-sales role (e.g., as a consultant working for a partner)
  • Ability to meet expectations for technical training and role-specific outcomes within 6 months of hire
  • This role can be remote, but we prefer that you be located in the job listing area and be able to travel up to 30% when needed.


Pay Range Transparency

Databricks is committed to fair and equitable compensation practices. The pay range(s) for this role is listed below and represents the base salary range for non-commissionable roles or on-target earnings for commissionable roles. Actual compensation packages are based on several factors that are unique to each candidate, including but not limited to job-related skills, depth of experience, relevant certifications and training, and specific work location. Based on the factors above, Databricks utilizes the full width of the range. The total compensation package for this position may also include eligibility for annual performance bonus, equity, and the benefits listed above. For more information regarding which range your location is in, visit our page here.


Local Pay Range: $169,000 – $299,000 USD

About Databricks

Databricks is the data and AI company. More than 10,000 organizations worldwide — including Comcast, Condé Nast, Grammarly, and over 50% of the Fortune 500 — rely on the Databricks Data Intelligence Platform to unify and democratize data, analytics and AI. Databricks is headquartered in San Francisco, with offices around the globe and was founded by the original creators of Lakehouse, Apache Spark™, Delta Lake and MLflow. To learn more, follow Databricks on Twitter, LinkedIn and Facebook.

Our Commitment to Diversity and Inclusion

At Databricks, we are committed to fostering a diverse and inclusive culture where everyone can excel. We take great care to ensure that our hiring practices are inclusive and meet equal employment opportunity standards. Individuals looking for employment at Databricks are considered without regard to age, color, disability, ethnicity, family or marital status, gender identity or expression, language, national origin, physical and mental ability, political affiliation, race, religion, sexual orientation, socio-economic status, veteran status, and other protected characteristics.

Compliance

If access to export-controlled technology or source code is required for performance of job duties, it is within Employer's discretion whether to apply for a U.S. government license for such positions, and Employer may decline to proceed with an applicant on this basis alone.