July 26, 2024


New Assistants UI Playground

We’re thrilled to announce a major update to our Assistants UI Playground! Head to the Playground and click the “Try New Playground” button to explore the latest improvements:

  • Streamed responses for real-time interaction
  • Enhanced tool rendering for better visualization
  • Improved reliability for a smoother experience

Coming soon:

  • Expanded model support
  • Advanced prompt management
  • Integrated Markdown editor

Try out the new Playground today and elevate your LLM testing experience!

July 24, 2024


Fireworks AI x Helicone

We’re excited to announce our integration with Fireworks AI, the high-performance LLM platform! Enhance your AI applications with Helicone’s powerful observability tools in just two easy steps:

  1. Generate a write-only API key in your Helicone account.
  2. Update your Fireworks AI base URL to:
     https://fireworks.helicone.ai

That’s all it takes! Now you can monitor, analyze, and optimize your Fireworks AI models with Helicone’s comprehensive insights.
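The two steps above amount to swapping the base URL and attaching your Helicone key. Here's a minimal sketch in Python using only the standard library; the request path, model id, and key values are illustrative assumptions, so check the integration guide for your exact settings:

```python
# Sketch: route a Fireworks AI chat completion through Helicone's proxy.
# All keys below are placeholders; the URL path and model id are assumptions.
import json
import urllib.request

HELICONE_API_KEY = "sk-helicone-..."   # your write-only Helicone key (placeholder)
FIREWORKS_API_KEY = "fw-..."           # your Fireworks AI key (placeholder)

# Helicone's proxy stands in for the usual Fireworks AI host.
BASE_URL = "https://fireworks.helicone.ai"

payload = {
    "model": "accounts/fireworks/models/llama-v3-8b-instruct",  # example model id
    "messages": [{"role": "user", "content": "Hello!"}],
}

request = urllib.request.Request(
    f"{BASE_URL}/inference/v1/chat/completions",  # assumed Fireworks-style path
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Authorization": f"Bearer {FIREWORKS_API_KEY}",
        "Helicone-Auth": f"Bearer {HELICONE_API_KEY}",
        "Content-Type": "application/json",
    },
    method="POST",
)
# urllib.request.urlopen(request) would send it; Helicone logs the call in transit.
```

Because Helicone sits in the request path as a proxy, no other code changes are needed: responses come back from Fireworks AI unchanged while Helicone records each request.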

For more details, check out our Fireworks AI integration guide.

July 23, 2024


Helicone + Dify

We’re thrilled to announce our integration with Dify, the open-source LLM app development platform! Now you can easily add Helicone’s powerful observability features to your Dify projects in just two simple steps:

  1. Generate a write-only API key in your Helicone account.
  2. Set your API base URL in Dify to:
     https://oai.helicone.ai/<API_KEY>

That’s it! Enjoy comprehensive logs and insights for your Dify LLM applications.
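Since your Helicone key is embedded directly in the base URL, the configuration reduces to composing one string. A small sketch (the key value is a placeholder):

```python
# Sketch: compose the Helicone-proxied OpenAI base URL to paste into Dify's
# model provider settings. The key below is a placeholder for your own
# write-only Helicone key.
HELICONE_API_KEY = "sk-helicone-..."

# Pointing Dify's API base at Helicone's OpenAI proxy routes every request
# through Helicone, which logs it before forwarding to the model provider.
api_base = f"https://oai.helicone.ai/{HELICONE_API_KEY}"
```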

Check out our integration guide for more details.

July 22, 2024


Prompts package

We’re excited to announce the release of our new @helicone/prompts package! This lightweight library simplifies prompt formatting for Large Language Models, offering features like:

  • Automated versioning with change detection
  • Support for chat-like prompt templates
  • Efficient variable handling and extraction

Check it out on GitHub and enhance your LLM workflow today!
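To give a feel for the template-and-variables idea, here is an illustrative Python sketch. Note that @helicone/prompts is a TypeScript package and its real API differs; this only mimics the concept of a chat-style prompt template whose variables are extracted and filled in:

```python
# Illustrative only -- not the @helicone/prompts API. This sketch mimics a
# chat-style prompt template with ${variable} placeholders.
import re
import string

TEMPLATE = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Summarize ${article} in a ${tone} tone."},
]

def extract_variables(template):
    """Collect every ${...} variable name used across the chat template."""
    names = set()
    for message in template:
        names.update(re.findall(r"\$\{(\w+)\}", message["content"]))
    return names

def render(template, **variables):
    """Return a copy of the chat template with variables substituted."""
    return [
        {
            "role": message["role"],
            "content": string.Template(message["content"]).substitute(variables),
        }
        for message in template
    ]

variables = extract_variables(TEMPLATE)
rendered = render(TEMPLATE, article="the Q3 report", tone="neutral")
```

Keeping the template separate from the filled-in values is also what makes automated versioning possible: the template text can be hashed or diffed on its own, so a change to the prompt is detected independently of the runtime variables.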