DatologyAI's Posts (76)

Research Scientist, Post-Training

About the Company

Models are what they eat. But a large portion of training compute is wasted on data that are already learned, irrelevant, or even harmful, leading to worse models that cost more to train and deploy. At DatologyAI, we've built a state-of-the-art data curation suite to automatically curate and optimize petabytes of data to create the best possible training data for your models. Training on curated data can dramatically reduce training time and cost (7-40x faster training depending on the use case), dramatically increase model performance as if you had trained on >10x more raw data without increasing the cost of training, and allow smaller models with fewer than half the parameters to outperform larger models despite using far less compute at inference time, substantially reducing the cost of deployment. For more details, check out our recent blog posts sharing our high-level results for text models and image-text models. We have raised a total of $57.5M across two rounds, a Seed and a Series A. Our investors include Felicis Ventures, Radical Ventures, Amplify Partners, Microsoft, Amazon, and AI visionaries such as Geoff Hinton, Yann LeCun, Jeff Dean, and many others who deeply understand the importance and difficulty of identifying and optimizing the best possible training data for models. Our team has pioneered this frontier research area and has the deep expertise in both data research and data engineering necessary to solve this incredibly challenging problem and make data curation easy for anyone who wants to train their own model on their own data.

This role is based in Redwood City, CA. We are in office 4 days a week.

About the Role

We're looking for a Research Scientist to lead work on post-training data curation for foundation models. You'll design and implement algorithms to generate and improve instruction, preference, and other post-training datasets. You'll also help bridge the gap between pre-training and post-training by exploring how to jointly optimize data across stages. This role requires strong scientific judgment, fluency with the deep learning literature, and a drive to turn research ideas into real-world impact. You'll work autonomously, collaborate closely with engineers and product teams, and shape the future of data curation at DatologyAI.

What You'll Work On

Post-training data curation. You'll conduct research on how to algorithmically curate post-training data: for example, how to generate and refine preference and instruction-following data, how to curate capability- and domain-specific data, and how to make post-training more effective, controllable, and generalizable.

Unifying pre-training and post-training data curation. Pushing the bounds on model capabilities requires unifying post-training and pre-training data curation. You will pursue research on end-to-end data curation: how to curate pre-training data to improve the post-trainability of models and how to jointly optimize pre- and post-training data curation, all in service of maximizing the final performance of post-trained models.

Transform messy literature into practical improvements. The research literature is vast, rife with ambiguity, and constantly evolving. You will use your skills as a scientist to source, vet, implement, and improve promising ideas from the literature and of your own creation.

Conduct science driven by real-world needs. At DatologyAI, we understand that conference reviewers and academic benchmarks don't always incentivize the most impactful research. Your research will be guided by concrete customer needs and product improvements.

How You'll Work

Nobody knows how to do your work better than you. We believe that scientists do their best work when they have the autonomy to pursue problems in the manner they prefer, and we will ensure that you are equipped with the context and resources you need to succeed.

Science is more than just experiments. We expect our Research Scientists to collaborate closely with engineers, talk to customers, and shape the product vision.

About You

- 3+ years of deep learning research experience
- Experience post-training large vision, language, and multimodal models
- Post-training algorithm development, data curation, and/or synthetic data methods for:
  - Preference-based tuning (e.g., DPO, RLVR, RRHF); see the illustrative sketch at the end of this posting
  - Alternative supervision and self-supervision techniques, such as self-training and chain-of-thought distillation
  - SFT (e.g., instruction tuning and demonstration fine-tuning)
- Post-training tooling development and engineering experience
- Strong understanding of the fundamentals of deep learning
- Sufficient software engineering and deep learning framework skills (PyTorch, or a willingness to learn PyTorch) to conduct large-scale research experiments and build production prototypes
- A demonstrated track record of success in deep learning research, whether papers, tools, or other research artifacts

We would love it if candidates have:

- Experience with data management and distributed data processing solutions (e.g., Spark, Snowflake)
- Experience building and shipping ML products

Candidates do not need a PhD or extensive publications. Some of the best researchers we've worked with have no formal training in machine learning and obtained all of their experience by working in industry and building products. We believe that adaptability, combined with exceptional communication and collaboration skills, is the most important ingredient for successful research in a startup environment.

Compensation

At DatologyAI, we are dedicated to rewarding talent with a highly competitive salary and significant equity. The base salary for this position ranges from $180,000 to $260,000. The candidate's starting pay will be determined based on job-related skills, experience, qualifications, and interview performance.

We offer a comprehensive benefits package to support our employees' well-being and professional growth:

- 100% covered health benefits (medical, vision, and dental)
- 401(k) plan with a generous 4% company match
- Unlimited PTO policy
- Annual $2,000 wellness stipend
- Annual $1,000 learning and development stipend
- Daily lunches and snacks provided in our office
- Relocation assistance for employees moving to the Bay Area
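For candidates less familiar with the preference-tuning methods named in the requirements above, here is a minimal, hedged sketch of the DPO objective in PyTorch. All function and variable names are illustrative assumptions for this posting, not DatologyAI's implementation or pipeline.

```python
# A minimal sketch of the DPO objective referenced above (illustrative only).
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, beta=0.1):
    """Direct Preference Optimization loss over per-sequence log-probabilities.

    Each argument is a 1-D tensor of summed token log-probs for the chosen or
    rejected response under the trainable policy or the frozen reference model.
    """
    # Implicit rewards: log-ratio of policy to reference, scaled by beta.
    chosen_rewards = beta * (policy_chosen_logps - ref_chosen_logps)
    rejected_rewards = beta * (policy_rejected_logps - ref_rejected_logps)
    # Logistic (Bradley-Terry style) loss on the reward margin between the pair.
    return -F.logsigmoid(chosen_rewards - rejected_rewards).mean()

# Toy usage with random log-probabilities for a batch of 4 preference pairs.
loss = dpo_loss(torch.randn(4), torch.randn(4), torch.randn(4), torch.randn(4))
print(float(loss))
```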

Location: Redwood City

Salary range: $180,000 - $260,000

Software Engineer, Data Infrastructure

This role is based in Redwood City, CA. We are in office 4 days a week.

About the Role

We're looking for an experienced Data Platform Engineer to join our core DatologyAI team. As one of our early senior hires, you will partner closely with our founders on the direction of our product and drive business-critical technical decisions. You will lead the development of our core product and data platform. These are key components of our stack that allow us to process customer data and apply state-of-the-art research for identifying the most informative data points in large-scale datasets. You will have a broad impact on the technology, the product, and our company's culture.

We provide visa sponsorship for candidates selected for this role.

What You'll Work On

- Design, build, and maintain highly scalable data processing solutions, ensuring scalability, reliability, and security
- Architect, build, and deploy the back-end systems and services that power our data curation platform
- Partner with researchers and engineers to bring new features and research capabilities to our customers
- Ensure that our systems are reliable, secure, and worthy of our customers' trust

About You

- Meaningful experience leading and building production data systems to deliver on major product initiatives. You have built and managed highly scalable data processing solutions (e.g., Spark, Flink), data lakes or warehouses (e.g., Snowflake, Hive), and distributed storage systems (e.g., HDFS, S3); authored queries in SQL; used workflow management tools (e.g., Airflow, Dagster); and have experience maintaining the infrastructure that supports these (see the illustrative sketch at the end of this posting)
- Proficiency in at least one programming language commonly used within data engineering, such as Python, Scala, or Java
- Expertise with ETL schedulers such as Airflow, Dagster, or similar frameworks
- Experience maintaining a high quality bar for design, correctness, and testing
- Take pride in building and operating scalable, reliable, secure systems
- Have a humble attitude, an eagerness to help your colleagues, and a desire to do whatever it takes to make the team succeed
- Own problems end-to-end, and are willing to pick up whatever knowledge you're missing to get the job done
- Experience as the technical lead of a Data Engineering / Platform / Infrastructure team
- Experience building ML/DL systems and/or data infrastructure that feeds into training large ML models

Don't meet every single requirement? We still encourage you to apply. If you're excited about our mission and eager to learn, we want to hear from you!

Compensation

At DatologyAI, we are dedicated to rewarding talent with a highly competitive salary and significant equity. The base salary for this position ranges from $180,000 to $250,000. The candidate's starting pay will be determined based on job-related skills, experience, qualifications, and interview performance.

We offer a comprehensive benefits package to support our employees' well-being and professional growth:

- 100% covered health benefits (medical, vision, and dental)
- 401(k) plan with a generous 4% company match
- Unlimited PTO policy
- Annual $2,000 wellness stipend
- Annual $1,000 learning and development stipend
- Daily lunches and snacks provided in our office
- Relocation assistance for employees moving to the Bay Area
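As a hedged illustration of the kind of large-scale data processing named above, here is a toy exact-deduplication and length-filter pass in PySpark. The column names and storage paths are hypothetical assumptions, not DatologyAI's pipeline, and the heuristics shown are deliberately simplistic.

```python
# Illustrative sketch only: toy exact-dedup and length filtering with PySpark.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("toy-curation").getOrCreate()

# Hypothetical input location and schema (expects a string column named "text").
docs = spark.read.parquet("s3://example-bucket/raw-text/")

curated = (
    docs
    # Drop exact duplicates by hashing the raw text.
    .withColumn("text_hash", F.sha2(F.col("text"), 256))
    .dropDuplicates(["text_hash"])
    # Keep documents within a plausible length band (toy heuristic).
    .withColumn("n_chars", F.length("text"))
    .filter((F.col("n_chars") > 200) & (F.col("n_chars") < 100_000))
    .drop("text_hash", "n_chars")
)

# Hypothetical output location.
curated.write.mode("overwrite").parquet("s3://example-bucket/curated-text/")
```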

Location: Redwood City

Salary range: $180,000 - $250,000

Forward Deployed AI Engineer

This role is based in Redwood City, CA. We are in office 4 days a week.

About the Role

We are looking for a highly technical, customer-obsessed Forward Deployed AI Engineer (Post Sales) to guide customers through deploying, operating, and adopting DatologyAI's platform in complex on-prem or hybrid environments. You will become the trusted technical advisor for our most strategic customers, partnering closely with Sales, Research, and Engineering to drive successful deployments and long-term customer value. You'll bridge the gap between our core platform capabilities and the unique requirements of each customer's environment. This role is ideal for someone who thrives in ambiguity, enjoys solving challenging distributed systems problems, and wants to build both deep relationships and scalable solutions within a fast-moving startup.

What You'll Work On

- Lead customers through onboarding, deployment, and production rollout of DatologyAI's platform while serving as the technical owner for assigned accounts, driving architecture, execution, long-term adoption, and tailored technical success plans
- Partner cross-functionally with Sales, Engineering, and Research to translate use-case requirements into actionable technical strategies, support early trials, relay customer feedback, and help shape roadmap priorities
- Guide customers in designing scalable, secure workflows across compute, storage, networking, and distributed systems, providing ongoing reporting on deployment progress, workload health, usage metrics, and executive-level updates
- Adapt and optimize DatologyAI's platform across AWS, GCP, Azure, and on-prem Kubernetes environments, handling provider-specific APIs, storage systems, networking configurations, and compute orchestration, including tuning performance for network topology, storage tiering, and resource allocation in each environment

About You

- 5+ years of experience in technical roles involving solution architecture, customer engineering, consulting, or technical program delivery
- Strong background in distributed systems, data infrastructure, and/or on-prem or hybrid compute environments
- Experience working with ML/AI workflows and designing or deploying systems involving Kubernetes, networking, data pipelines, or large-scale backend infrastructure
- Proficiency in Python, SQL, or similar languages, with the ability to contribute to technical conversations and debug customer issues end-to-end
- Experience leading complex technical projects with multiple stakeholders, translating business needs into clear architecture and execution plans
- Deep hands-on experience with multiple cloud platforms (AWS, GCP, Azure), including their compute, storage, networking, and IAM services
- Proven track record of adapting complex distributed systems to run across different infrastructure environments
- Expertise in infrastructure-as-code and configuration management for multi-environment deployments
- Ability to travel to customer sites as needed to support critical deployments and customer engagements

Compensation

At DatologyAI, we are dedicated to rewarding talent with a highly competitive salary and significant equity. The salary for this position ranges from $230,000 to $300,000 OTE. The candidate's starting pay will be determined based on job-related skills, experience, qualifications, and interview performance.

We offer a comprehensive benefits package to support our employees' well-being and professional growth:

- 100% covered health benefits (medical, vision, and dental)
- 401(k) plan with a generous 4% company match
- Unlimited PTO policy
- Annual $2,000 wellness stipend
- Annual $1,000 learning and development stipend
- Daily lunches and snacks provided in our office
- Relocation assistance for employees moving to the Bay Area

Location: Redwood City

Salary range: $230,000 - $300,000 OTE

Product Marketing Manager

This role is based in Redwood City, CA. We are in office 4 days a week.

About the Role

We're seeking a Technical Product Marketing Manager who blends technical credibility, business acumen, and sharp storytelling to build and lead our marketing function from the ground up. You'll translate complex technical concepts into clear, compelling narratives, craft high-impact content, and drive our messaging across channels. This role demands strong communication skills, confidence engaging with customers and partners, and the ability to present to a wide range of stakeholders. As an early marketing hire, you'll shape our strategy, amplify our technical voice, and have an outsized influence on our brand and growth. This role is ideal for someone who thrives at the intersection of product, marketing, and growth, with a passion for turning cutting-edge technology into a compelling narrative.

What You'll Work On

- Build the marketing function from the ground up, defining strategy, processes, and programs to drive awareness and growth
- Own events and all social media channels to increase engagement and brand visibility
- Produce high-quality technical and product content (blogs, whitepapers, case studies, website copy)
- Develop and promote the company narrative, including data quality as a compute multiplier and the broader "data-first model optimization" message
- Partner cross-functionally to translate product updates and benchmarks into marketing initiatives, proof points, ROI models, and sales enablement materials
- Create competitive positioning frameworks and equip sales with pitch decks, one-pagers, demo narratives, and objection-handling resources

About You

- BS/MS in Computer Science, Engineering, or equivalent experience, with 5+ years in technical marketing
- Deep expertise in machine learning, translating complex concepts into clear messaging for technical and non-technical audiences
- Proven track record of creating content for ML/AI products and building marketing processes from scratch
- Excellent problem-solving, project management, communication, and cross-functional collaboration skills, with a bias for action across business and technical domains

Compensation

At DatologyAI, we are dedicated to rewarding talent with a highly competitive salary and significant equity. The salary for this position ranges from $190,000 to $230,000. The candidate's starting pay will be determined based on job-related skills, experience, qualifications, and interview performance.

We offer a comprehensive benefits package to support our employees' well-being and professional growth:

- 100% covered health benefits (medical, vision, and dental)
- 401(k) plan with a generous 4% company match
- Unlimited PTO policy
- Annual $2,000 wellness stipend
- Annual $1,000 learning and development stipend
- Daily lunches and snacks provided in our office
- Relocation assistance for employees moving to the Bay Area

Location: Redwood City

Salary range: $190,000 - $230,000

Software Engineer, Infrastructure

This role is based in Redwood City, CA. We are in office 4 days a week.

About the Role

We're looking for an experienced Infrastructure Engineer to join our core DatologyAI team. As one of our early senior hires, you will partner closely with our founders on the direction of our product and drive business-critical technical decisions. You will lead the development of our data infrastructure capabilities, including multi-cloud support and support for various deployment models, as well as training and inference infrastructure. You will have a broad impact on the technology, the product, and our company's culture.

What You'll Work On

- Design and build the development and production platforms that power our products, enabling reliability and security at scale
- Architect, build, and deploy our core infrastructure while supporting multiple cloud providers and various deployment models (see the sketch at the end of this posting)
- Accelerate company productivity by empowering your fellow engineers and teammates with excellent tooling and systems, providing a best-in-class experience
- Partner with researchers and engineers to bring new features and research capabilities to our customers

About You

- Meaningful experience spearheading and constructing large-scale infrastructure
- Proficiency in bash, Kubernetes, Python, and/or Terraform, or similar technologies
- Experience working with AWS, other cloud platforms such as Azure or GCP, and/or on-prem environments
- Expertise in debugging problems across the stack, such as networking issues, performance problems, hardware issues, or memory leaks
- Take pride in building and operating scalable, reliable, secure systems
- Have a humble attitude, an eagerness to help your colleagues, and a desire to do whatever it takes to make the team succeed
- Own problems end-to-end and are willing to pick up whatever knowledge you're missing to get the job done

We would love it if you had:

- Built out data infrastructure from, or nearly from, scratch at a fast-growing startup
- Experience building ML/DL infrastructure and/or data infrastructure that feeds into training large ML models

Don't meet every single requirement? We still encourage you to apply. If you're excited about our mission and eager to learn, we want to hear from you!

Compensation

At DatologyAI, we are dedicated to rewarding talent with a highly competitive salary and significant equity. The base salary for this position ranges from $180,000 to $250,000. The candidate's starting pay will be determined based on job-related skills, experience, qualifications, and interview performance.

We offer a comprehensive benefits package to support our employees' well-being and professional growth:

- 100% covered health benefits (medical, vision, and dental)
- 401(k) plan with a generous 4% company match
- Unlimited PTO policy
- Annual $2,000 wellness stipend
- Annual $1,000 learning and development stipend
- Daily lunches and snacks provided in our office
- Relocation assistance for employees moving to the Bay Area
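As a hedged illustration of one small slice of the multi-cloud support described above, here is a toy Python script that checks whether the same dataset prefix is reachable on different object stores via fsspec. The bucket and prefix names are hypothetical assumptions, and this is a sketch under those assumptions, not DatologyAI's infrastructure.

```python
# Toy reachability check across cloud object stores (illustrative only).
# The per-protocol drivers (s3fs, gcsfs, adlfs) must be installed, and the
# bucket/container names below are hypothetical placeholders.
import fsspec

DATASET_PREFIXES = {
    "aws": "s3://example-bucket/datasets/v1/",
    "gcp": "gs://example-bucket/datasets/v1/",
    "azure": "az://example-container/datasets/v1/",
}

def count_objects(url: str) -> int:
    """Return the number of entries under a prefix, raising if unreachable."""
    fs, path = fsspec.core.url_to_fs(url)
    return len(fs.ls(path))

if __name__ == "__main__":
    for cloud, url in DATASET_PREFIXES.items():
        try:
            print(f"{cloud}: {count_objects(url)} entries under {url}")
        except Exception as exc:  # missing credentials or drivers surface here
            print(f"{cloud}: unreachable ({exc})")
```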

Location: Redwood City

Salary range: $180,000 - $250,000
