Termina's goal is to accumulate first-party data on every venture-backed company in the world.
Private-market investment decisions happen in the dark: uninformed and, in many cases, made without objective systems of measure. We are optimistic that by assembling the world's largest dataset of venture-backed companies, investors and executives will dramatically improve their decision-making using Termina's Quantitative Diligence.
Termina was incubated as the initial diligence technology at Tribe Capital, a $1.7B venture fund, and is also backed by Sequoia's Scout program, X&, and Trenches Capital. We have largely bootstrapped through our early growth, and we are profitable.
We’re hiring a Senior/Staff Backend Engineer to build scalable systems that model product-market fit and power high-stakes decision-making for top venture investors. You’ll tackle the unique challenge of creating infrastructure to ingest all data from all startups. If you’re excited by high-impact, performance-driven engineering, join us to build the future of investing.
An Opportunity to Architect Large-Scale Data and Backend Infrastructure:
Engineer core pillars of our multi-interface data application from scratch.
Manage a sophisticated array of data APIs and infrastructure, with an unwavering passion for high-leverage deployment patterns and IaC.
Create new products leveraging AI and LLMs to unlock new interfaces with, and understanding of, the unique datasets that Termina is accumulating.
Conceive of and implement novel approaches to distributed systems management and high-scale data processing and retrieval.
About You:
An obsession with automation and getting into the weeds.
Bulletproof backend software engineering foundations, and a drive to perpetually learn new frameworks to unseat the old.
The ability to make incisive infrastructure and data-processing decisions that scale.
A passion for new AI technologies and startups.
5-7 years of experience, though we are flexible.
Our Stack Today:
Python for backend and data processing. Current key frameworks are FastAPI, pandas, Ray, Dagster, and SQLAlchemy.
A variety of LLMs in our development process and data processing logic.
GCP is our main base of operations, using GKE, BigQuery, Bigtable, artifacts, and so on.
A “no clicking” mentality with Terraform/Terragrunt, Helm, Skaffold, GitHub Actions, and Datadog for our CI/CD deployment, lifecycle management, and observability.
Location: Candidates must be located in the San Francisco Bay Area, the Province of Quebec, or the Greater Toronto Area, and be willing to travel monthly to the Bay Area for team offsites.