I'm currently working on a retrieval-augmented generation (RAG) system for the chemical domain. I built the full RAG pipeline, but the answers weren't satisfactory. While debugging each part, I found that the retrieved chunks are of poor quality, despite trying multiple retrieval methods. I suspect the embedding model is the problem: I'm currently using OpenAI's Ada, but it doesn't seem well suited to this domain. I'm considering fine-tuning my embedding model. Is this a good approach, and how would I go about it? Please share your thoughts and approaches.
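For concreteness, here is a minimal sketch of what I imagine the fine-tuning could look like, using sentence-transformers with MultipleNegativesRankingLoss on (question, relevant chunk) pairs mined from my own corpus. The base model, the example pairs, and the output path are placeholders I made up, not something I have validated:

```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

# Start from an open-source base encoder (placeholder choice).
model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

# Hypothetical training data: (question, relevant chemistry passage) pairs.
# MultipleNegativesRankingLoss uses in-batch negatives, so only positive
# pairs are needed here.
train_examples = [
    InputExample(texts=[
        "What is the boiling point of toluene?",
        "Toluene is an aromatic hydrocarbon with a boiling point of 110.6 °C...",
    ]),
    InputExample(texts=[
        "Which catalysts are used in Fischer-Tropsch synthesis?",
        "Fischer-Tropsch synthesis is typically catalyzed by cobalt or iron...",
    ]),
    # ... many more pairs in practice
]

train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=16)
train_loss = losses.MultipleNegativesRankingLoss(model)

# Fine-tune and save the domain-adapted encoder.
model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=1,
    warmup_steps=100,
)
model.save("chem-embedding-model")
```

I could then point my retriever at `chem-embedding-model` instead of Ada, but I'm not sure whether this is the right direction at all.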