Position Embeddings and Efficient Attention
Natural language stores a great portion of its information in the ordering of its constituents. Positional encodings are key to capturing this information when using the self-attention mechanism.
Evaluating LLM-Generated Text
As LLM capabilities improve, the sophistication of methods for evaluating generated text also needs to increase. We look at some of the most common approaches used thus far, both human-annotated and automated.
Prompt Engineering and Evaluation
Users of LLMs have found that providing examples and encouraging the LLM to explain its reasoning leads to more relevant (and often more accurate) outputs.