Sr. Technical Solutions Engineer

Full Time
Amsterdam, Netherlands

Mission

As a Spark Technical Solutions Engineer, you will provide deep technical and consulting solutions for challenging Spark/ML/AI/Delta/Streaming/Lakehouse issues reported by our customers, and resolve challenges involving the Databricks unified analytics platform using your strong technical and customer communication skills. You will assist our customers in their Databricks journey and provide the guidance, knowledge, and expertise they need to realize value and achieve their strategic objectives using our products.

Outcomes

  • Perform initial analysis and troubleshooting of customer-reported Spark job slowness using Spark UI metrics, DAGs, and event logs.
  • Troubleshoot, resolve, and perform deep code-level analysis of Spark to address customer issues related to Spark core internals, Spark SQL, Structured Streaming, Delta, Lakehouse, and other Databricks Runtime features.
  • Assist customers in setting up reproducible Spark problems, and provide solutions in the areas of Spark SQL, Delta, memory management, performance tuning, streaming, data science, and data integration.
  • Participate in the Designated Solutions Engineer program and drive the day-to-day Spark and cloud issues of one or two strategic customers.
  • Plan and coordinate with Account Executives, Customer Success Engineers, and Resident Solution Architects on customer issues and best-practices guidance.
  • Participate in screen-sharing meetings and Slack channel conversations with internal stakeholders and customers, helping drive major Spark issues as an individual contributor.
  • Build an internal wiki and knowledge base with technical documentation and manuals for the support team and customers, and participate in the creation and maintenance of company documentation and knowledge base articles.
  • Coordinate with Engineering and Backline Support teams to assist in identifying and reporting product defects.
  • Participate in the weekend and weekday on-call rotation, run escalations during Databricks Runtime outages and incidents, multitask and plan day-to-day activities, and provide an escalated level of support for critical customer operational issues.
  • Provide best-practices guidance on Spark runtime performance and on the use of Spark core libraries and APIs for custom solutions built by Databricks customers.
  • Be a true proponent of customer advocacy.
  • Contribute to the development of tooling and automation initiatives.
  • Provide front-line support for third-party integrations with the Databricks environment.
  • Review Engineering JIRA tickets and proactively inform the support leadership team so they can follow up on action items.
  • Manage assigned Spark cases on a daily basis and adhere to committed SLAs.
  • Exceed the expectations of the support organization's KPIs.
  • Strengthen your AWS/Azure and Databricks platform expertise through continuous learning and internal training programs.

Competencies

  • Minimum 6 years of experience designing, building, testing, and maintaining Python/Java/Scala-based applications in project delivery and consulting environments.
  • 3 years of hands-on experience developing production-scale industry use cases with two or more of the following: big data, Hadoop, Spark, machine learning, artificial intelligence, streaming, Kafka, data science, or Elasticsearch. Spark experience is mandatory.
  • Hands-on experience in performance tuning and troubleshooting of Hive and Spark-based applications at production scale.
  • Proven real-world experience with JVM and memory management techniques such as garbage collection and heap/thread dump analysis is preferred.
  • Hands-on experience with SQL-based databases and data warehousing/ETL technologies such as Informatica, DataStage, Oracle, Teradata, SQL Server, or MySQL, as well as SCD-type use cases, is preferred.
  • Hands-on experience with AWS, Azure, or GCP is preferred.
  • Excellent written and oral communication skills.
  • Linux/Unix administration skills are a plus.
  • Working knowledge of data lakes, preferably including SCD-type use cases at production scale.
  • Demonstrated analytical and problem-solving skills, particularly those that apply to a “Distributed Big Data Computing” environment.

Benefits

  • Medical insurance reimbursement or collective healthcare scheme
  • Accident and income protection insurance
  • Pension Plan
  • Enhanced Parental Leaves
  • Equity awards
  • Fitness reimbursement
  • Annual career development fund
  • Home office & work headphones reimbursement
  • Business travel accident insurance
  • Mental wellness resources
  • Employee referral bonus

About Databricks

Databricks is the data and AI company. More than 9,000 organizations worldwide — including Comcast, Condé Nast, and over 50% of the Fortune 500 — rely on the Databricks Lakehouse Platform to unify their data, analytics and AI. Databricks is headquartered in San Francisco, with offices around the globe. Founded by the original creators of Apache Spark™, Delta Lake and MLflow, Databricks is on a mission to help data teams solve the world’s toughest problems. To learn more, follow Databricks on Twitter, LinkedIn and Facebook.


Our Commitment to Diversity and Inclusion

At Databricks, we are committed to fostering a diverse and inclusive culture where everyone can excel. We take great care to ensure that our hiring practices are inclusive and meet equal employment opportunity standards. Individuals looking for employment at Databricks are considered without regard to age, color, disability, ethnicity, family or marital status, gender identity or expression, language, national origin, physical and mental ability, political affiliation, race, religion, sexual orientation, socio-economic status, veteran status, and other protected characteristics.


Compliance

If access to export-controlled technology or source code is required for performance of job duties, it is within Employer's discretion whether to apply for a U.S. government license for such positions, and Employer may decline to proceed with an applicant on this basis alone.