
Deep Lake is the Database for AI, empowering anyone to organize complex unstructured data and retrieve knowledge with AI.

Deep Lake is powered by a unique storage format optimized for deep learning and Large Language Model (LLM) based applications (http://github.com/activeloopai/deeplake; 8K+ stars). It simplifies the deployment of enterprise-grade LLM-based products by offering storage for all data types (embeddings, audio, text, videos, images, PDFs, annotations, etc.), querying and vector search, data streaming while training models at scale, data versioning and lineage for all workloads, and integrations with popular tools such as LangChain, LlamaIndex, Weights & Biases, and many more. Deep Lake works with data of any size, is serverless, and lets you store all of your data in one place. Deep Lake is used by Intel, Matterport, Flagship Pioneering, Hercules.ai, and Bayer Radiology.

Activeloop's founding team is from Princeton, Stanford, Google, and Tesla, and is backed by Y Combinator.

We're looking for an AI Search Engineer who possesses a deep understanding of large-scale information retrieval systems, deep learning, databases, and retrieval-augmented generation (RAG) architectures. The ideal candidate will have expertise in developing and optimizing search algorithms, implementing efficient indexing techniques, and leveraging RAG to enhance AI-powered search and question-answering systems.

What You Will Be Doing

As an AI Search Engineer, you will play a pivotal role in designing, developing, and deploying advanced search and retrieval systems that leverage RAG techniques to solve complex information access challenges. You will collaborate with software engineers, customers, and business stakeholders to develop AI search solutions that deliver significant value to the organization and our clients.

Key Responsibilities

RAG System Research and Implementation: Lead the design and implementation of advanced retrieval systems such as Deep Memory by Activeloop, delivering optimized RAG systems across the entire value chain, from embedding and model fine-tuning to retrieval optimization with custom algorithms, to improve knowledge retrieval accuracy.

Search Algorithm Optimization: Develop and refine search algorithms, including semantic search, hybrid search, and multi-modal search techniques, to improve retrieval performance and relevance ranking.
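One common way hybrid search combines a lexical ranking with a semantic (vector) ranking is reciprocal rank fusion (RRF). The sketch below is illustrative only; the function name and document ids are made up and not part of Deep Lake's API:

```python
def rrf_fuse(rankings, k=60):
    """Fuse ranked doc-id lists via Reciprocal Rank Fusion.

    Each document scores sum(1 / (k + rank)) over the lists it appears in,
    so items ranked highly by several retrievers rise to the top.
    """
    scores = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# Example: fuse a BM25-style lexical ranking with a vector-search ranking.
lexical = ["d1", "d2", "d3"]
semantic = ["d2", "d4", "d1"]
fused = rrf_fuse([lexical, semantic])
```

Rank fusion is attractive because it needs no score normalization across retrievers, only their rank order.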

Vector Database Integration: Implement and optimize vector storage and indexing solutions within Deep Lake, ensuring efficient similarity search capabilities for high-dimensional embeddings used in RAG systems.
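At its core, similarity search over embeddings is nearest-neighbor lookup under a metric such as cosine similarity. A brute-force sketch with toy vectors follows (production systems use approximate indexes and much higher-dimensional embeddings; nothing here reflects Deep Lake internals):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def top_k(query, index, k=2):
    """Return the ids of the k vectors in `index` most similar to `query`."""
    ranked = sorted(index, key=lambda doc_id: cosine(query, index[doc_id]),
                    reverse=True)
    return ranked[:k]

# Toy 2-d "embeddings"; real embeddings typically have hundreds of dimensions.
index = {"doc_a": [1.0, 0.0], "doc_b": [0.0, 1.0], "doc_c": [1.0, 1.0]}
nearest = top_k([1.0, 0.1], index, k=2)
```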

Query Understanding and Processing: Design and implement advanced query processing pipelines, including query expansion, intent recognition, and contextual interpretation to enhance search precision.
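As a toy illustration of query expansion, a pipeline can broaden the term set before retrieval. The synonym table below is hand-built and hypothetical; a real pipeline would typically derive expansions from an LLM, a thesaurus, or query logs:

```python
# Hypothetical synonym table for illustration only.
SYNONYMS = {"car": ["automobile", "vehicle"], "buy": ["purchase"]}

def expand_query(query):
    """Return the query terms plus any known synonyms, preserving order."""
    terms = query.lower().split()
    expanded = list(terms)
    for term in terms:
        expanded.extend(SYNONYMS.get(term, []))
    return expanded

expanded = expand_query("buy car")
```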

Information Retrieval Model Development: Create and fine-tune machine learning models specifically for information retrieval tasks, such as document ranking, query-document relevance scoring, and zero-shot retrieval.

Performance Evaluation and Metrics: Establish comprehensive evaluation frameworks for search and RAG systems, including relevance assessments, A/B testing, and user satisfaction metrics to continually improve system performance.
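Two standard offline metrics in such evaluation frameworks are recall@k and mean reciprocal rank (MRR). A minimal sketch, with made-up document ids:

```python
def recall_at_k(results, relevant, k):
    """Fraction of relevant documents that appear in the top-k results."""
    hits = sum(1 for doc_id in results[:k] if doc_id in relevant)
    return hits / len(relevant)

def mean_reciprocal_rank(runs):
    """Average 1/rank of the first relevant hit over (results, relevant) pairs."""
    total = 0.0
    for results, relevant in runs:
        for rank, doc_id in enumerate(results, start=1):
            if doc_id in relevant:
                total += 1.0 / rank
                break
    return total / len(runs)

r_at_2 = recall_at_k(["d3", "d1", "d2"], {"d1", "d2"}, k=2)
mrr = mean_reciprocal_rank([(["d3", "d1"], {"d1"}), (["d2"], {"d2"})])
```

Offline metrics like these complement, but do not replace, A/B tests and user satisfaction signals.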

Scalability and Efficiency: Optimize RAG and search systems for high throughput and low latency, ensuring they can handle large-scale datasets and real-time query processing demands.

Data Ingestion and Indexing: Develop efficient data ingestion pipelines and indexing strategies to support rapid updates and real-time search capabilities across diverse data types and sources.
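The core of an index that supports rapid updates can be as simple as a postings map that accepts new documents at any time. A stdlib-only sketch follows (real systems add tokenization, relevance scoring, and segment merging; the class name is illustrative):

```python
from collections import defaultdict

class InvertedIndex:
    """Minimal inverted index supporting incremental (real-time) updates."""

    def __init__(self):
        self.postings = defaultdict(set)  # term -> set of doc ids

    def add(self, doc_id, text):
        """Ingest one document; callable at any time, no full rebuild needed."""
        for term in text.lower().split():
            self.postings[term].add(doc_id)

    def search(self, term):
        """Return the ids of all documents containing `term`."""
        return self.postings.get(term.lower(), set())

idx = InvertedIndex()
idx.add("doc1", "vector search at scale")
idx.add("doc2", "hybrid search pipelines")
hits = idx.search("search")
```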