Staff Machine Learning Modeler – Model Risk Management

Company: Block
Location: Oakland, CA, USA
Salary: $194,500 – $343,100
Type: Full-Time
Degrees: Master's
Experience Level: Senior

Requirements

  • Advanced degree in Computer Science, Machine Learning, or a related quantitative field
  • 5+ years of experience in model validation or risk management, with a focus on machine learning models; or 3+ years of experience and a graduate degree
  • Strong software engineering practices and experience building maintainable, well-documented code
  • Strong understanding of tree-based models, particularly gradient boosted decision trees and XGBoost
  • Understanding of LLM capabilities, limitations, and validation requirements
  • Experience with or strong understanding of:
      ◦ Feature importance analysis for tree-based models
      ◦ Model interpretability techniques
      ◦ Validation of training data quality, especially in cases of automated labeling
      ◦ Performance metric selection and validation for different model types
      ◦ Model governance in financial services
  • Expertise in Python for building robust validation frameworks and automation tools
  • Advanced SQL skills for data analysis and validation automation
  • Experience with test automation and software testing frameworks
  • Strong quantitative skills with the ability to identify patterns in validation processes
  • Experience building modular, reusable code and tools
  • High ethical standards with a commitment to integrity and professionalism

Responsibilities

  • Build scalable validation frameworks for tree-based models, focusing on feature importance analysis, stability, and performance metrics
  • Develop validation approaches for hybrid systems where LLMs support the ML pipeline
  • Create governance frameworks for LLM applications, including:
      ◦ Reliability and consistency assessment methodologies
      ◦ Prompt engineering validation approaches
      ◦ Output quality control mechanisms
      ◦ Drift detection for both traditional ML and LLM components
  • Design testing frameworks that can adapt to different model types and use cases
  • Create tools that help generate clear validation reports
  • Set up systems to continuously monitor model performance
  • Build tools that make model validation faster and more consistent
  • Create validation components we can reuse across different projects
  • Develop automated approaches for common tasks such as:
      ◦ Checking model performance
      ◦ Running statistical tests
      ◦ Verifying data quality
      ◦ Testing model assumptions
      ◦ Tracking performance changes over time
  • Assess machine learning models using rigorous validation methodologies
  • Develop automated validation tools that can scale across different model types
  • Set up comprehensive model monitoring systems
  • Maintain clear validation documentation aligned with regulatory requirements
  • Partner with model development teams to understand validation needs
  • Build effective relationships while maintaining independent assessment standards

Preferred Qualifications

  • Experience developing internal tools or validation frameworks
  • Knowledge of software development best practices (version control, unit testing, CI/CD)
  • Experience validating models in regulated domains
  • Knowledge of model governance platforms and frameworks
  • Experience with model inventory management systems
  • Knowledge of emerging technology validation approaches
  • Experience with data visualization tools (e.g., Looker) for monitoring and reporting