Assessment Item Writer - AI Engineer (Part-Time, Contract)

About the Role: 

We are looking for passionate individuals with substantial experience using AI alongside Data Engineering and Data Science to work with our Certification and Assessment team. DataCamp Certification allows individuals to validate their proficiency in specific data roles and connect with companies that are hiring data professionals.

 

As an item writer for DataCamp, you will be responsible for writing individual questions for assessments used across the DataCamp platform. You will work with our Certification Manager to write and review sets of questions to a pre-defined and validated specification. You will act as a technical expert for item writing, working with DataCamp assessment experts to ensure items are valid, reliable and unbiased while also being technically correct. 

 

Responsibilities

  • Create assessment items according to the test specifications
  • Ensure items are written at the appropriate level of difficulty, match the focus of the assessment, and include all relevant supporting materials (e.g. grading information, data source information)
  • Provide peer reviews for other item writers to ensure the technical correctness of all items developed
  • Implement feedback provided by other item writers and/or DataCamp assessment experts
  • Ensure that all assigned tasks are completed within the agreed timelines
  • Attend meetings as necessary for training or alignment on requirements

 

The ideal candidate:

  • Has 2+ years of experience extensively using AI for data engineering and/or software development, including one or both of these skillsets:
    • Skillset 1: AI for data engineering
      • Develop, train, test, and evaluate deep learning models (RNN, CNN, transformer, transfer learning) for text and image deep learning tasks
      • Monitor and analyze the performance of machine learning models in production environments
      • Manage AI data, including data import, data quality assessment, identifying AI/ML-specific solutions for data processing and storage, and AI data governance and privacy
      • Perform exploratory data analysis
      • Apply MLOps/LLMOps best practices; build ETL/ELT pipelines and create user-facing systems and applications that utilize generative AI
      • Experience using Python, scikit-learn, PyTorch, PySpark, Gymnasium, Llama 3, Shell, FastAPI or similar tools.
    • Skillset 2: AI for software development
      • Understand and apply key concepts in LLMs, including advanced prompt engineering; identifying, managing, and applying models; and the key stages of LLMOps
      • Use software development and AI engineering principles to design, implement, and deploy complex AI and software systems
      • Describe best practices for mitigating privacy and safety issues when handling data and using generative AI models
      • Strong understanding of vector databases
      • Experience using Hugging Face, OpenAI API, Pinecone, Streamlit, LangChain, Graph RAG or similar tools
  • Is comfortable giving and receiving constructive feedback
  • Has a strong command of English and strong written and verbal communication skills
  • Is collaborative and motivated, with the ability to execute swiftly
  • Is organized, analytical and detail-oriented
  • Has experience in assessment item writing (a plus, but not required; appropriate training will be provided)

 

Time Commitment

At a minimum, we expect you to commit to writing 20 items per week and reviewing 20 items per week, paid per item written/reviewed. You will be required to complete initial onboarding and training in assessment item writing before you start, which will be paid at an hourly rate.

 

 

*Please note that this is a part-time, contract, remote position. Please be sure to check your "Spam" folder for any responses to your DataCamp application.