AI 101: Getting Started with AI APIs

Learn how to integrate Demeterics into your workflows with step-by-step guides and API examples.

Overview

AI 101 is a comprehensive, hands-on learning path that teaches you how to work with AI APIs safely and responsibly. It's perfect for developers making the leap from coding tutorials to real-world AI development.

This guide references the AI 101 GitHub repository, which contains 15 complete examples in 4 programming languages (Bash, Node.js, Python, Go), demonstrating everything from basic chat to vision, audio, and AI agents.

git clone https://github.com/patdeg/ai101.git
cd ai101

What makes AI 101 special: AI safety is built in from day one. Every example includes content moderation, prompt injection defense, and responsible AI practices.

Why Use Demeterics with AI 101?

All AI 101 examples are being updated to use Demeterics as the default observability platform. This means:

  • Track every AI interaction - See prompts, responses, tokens, and costs in your Demeterics dashboard
  • Monitor model performance - Compare latency, error rates, and costs across providers
  • Debug production issues - Full conversation history with replay capability
  • Control spending - Real-time cost tracking and alerts
  • Zero code changes - Just swap your API base URL to enable observability

Quick Setup for Demeterics

  1. Sign up at demeterics.com
  2. Get your API key from the dashboard
  3. Set environment variable: export DEMETERICS_API_KEY="dmt_your_key"
  4. Point APIs to https://api.demeterics.com/groq/v1/... (OpenAI-compatible endpoints)
  5. View analytics at demeterics.com/dashboard

Now every AI API call is automatically tracked with full observability!
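A minimal sketch of what step 4 looks like in Python, assuming your code previously pointed at Groq's OpenAI-compatible endpoint directly: the only changes are the base URL and the key you send (the model name is the same one used later in this guide).

import os
import requests

# Same OpenAI-compatible request you would send to the provider directly;
# only the base URL and API key change when routing through Demeterics.
BASE_URL = "https://api.demeterics.com/groq/v1"   # was: https://api.groq.com/openai/v1

response = requests.post(
    f"{BASE_URL}/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['DEMETERICS_API_KEY']}"},
    json={
        "model": "meta-llama/llama-4-scout-17b-16e-instruct",
        "messages": [{"role": "user", "content": "Hello!"}],
    },
)
print(response.json()["choices"][0]["message"]["content"])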

What You'll Learn

Core AI Capabilities

  • πŸ’¬ Chat & Reasoning - Question-answering, step-by-step thinking, prompt caching
  • πŸ‘οΈ Vision - Image analysis with multimodal AI models
  • πŸŽ™οΈ Audio - Speech-to-text transcription with Whisper
  • πŸ”Š Voice Synthesis - Text-to-speech with 11 different voices
  • πŸ” Web Search - AI-powered search and content extraction
  • πŸ› οΈ AI Agents - Building autonomous agents with function calling

Safety & Security (Built-in)

  • πŸ›‘οΈ Content Moderation - LlamaGuard for text and image safety
  • πŸ”’ Prompt Injection Defense - Detecting jailbreak attempts with Prompt Guard
  • βœ… Responsible AI Patterns - Safety checks throughout every workflow

Programming Fundamentals

  • 🌐 HTTP requests and API authentication
  • πŸ“‹ JSON parsing and data handling
  • πŸ” Environment variables and secrets management
  • 🎯 Error handling and production patterns
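These fundamentals appear in every example. A minimal sketch of the last two bullets, assuming the OpenAI-compatible chat endpoint used throughout this guide: read the key from the environment, fail fast if it's missing, and back off on 429 rate limits.

import os
import time
import requests

API_KEY = os.environ.get("DEMETERICS_API_KEY")
if not API_KEY:
    raise SystemExit("Set DEMETERICS_API_KEY before running the examples")

def chat(payload: dict, retries: int = 3) -> dict:
    for attempt in range(retries):
        resp = requests.post(
            "https://api.demeterics.com/groq/v1/chat/completions",
            headers={"Authorization": f"Bearer {API_KEY}"},
            json=payload,
            timeout=30,
        )
        if resp.status_code == 429:      # rate limited: back off and retry
            time.sleep(2 ** attempt)
            continue
        resp.raise_for_status()          # surface other 4xx/5xx errors early
        return resp.json()
    raise RuntimeError("Still rate limited after retries")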

The 15 Examples

Each example runs in Bash, Node.js, Python, and Go:

  1. Basic Chat - Single question to the AI model (multi-vendor support!)
  2. System + User Prompts - Controlling AI behavior with system instructions
  3. Prompt Templates - Dynamic prompt compilation with variables and conditionals
  4. Vision Analysis - Analyzing local images with multimodal models
  5. Safety Check (Text) - Content moderation with LlamaGuard
  6. Safety Check (Image) - Image content moderation with LlamaGuard Vision
  7. Prompt Guard - Detecting jailbreak attempts
  8. Whisper Audio - Transcribing audio to text with Whisper
  9. Tavily Search - Web search with AI-powered answers
  10. Tavily Extract - Extract clean content from web pages
  11. Tool Use - AI agents with function calling (Groq + Tavily)
  12. Web Search (Groq) - Built-in web search with groq/compound-mini
  13. Code Execution - Python code execution with openai/gpt-oss-20b
  14. Reasoning - Step-by-step thinking with openai/gpt-oss-20b + prompt caching
  15. Text-to-Speech - Voice synthesis with OpenAI gpt-4o-mini-tts

Multi-Vendor Support

Example 01 (Basic Chat) demonstrates how to work with 5 different AI providers:

Provider   | Strengths                         | Best Use Case
Groq       | Fastest inference, cost-effective | Real-time applications, high volume
OpenAI     | Most advanced models, GPT series  | Complex reasoning, function calling
Anthropic  | Claude models, nuanced responses  | Long context, careful analysis
SambaNova  | Open models (Llama), enterprise   | Privacy-conscious, on-premise
Demeterics | Universal observability proxy     | Production monitoring, analytics

Each provider has its own example file showing provider-specific authentication and API patterns.
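In practice, authentication is the main thing that differs between providers: Groq, OpenAI, and the Demeterics proxy take a Bearer token, while Anthropic expects an x-api-key header plus an anthropic-version header. A rough sketch of just the header shapes (see each provider's example file for complete requests):

import os

# Bearer-token style: Groq, OpenAI, and the Demeterics proxy
bearer_headers = {
    "Authorization": f"Bearer {os.environ['DEMETERICS_API_KEY']}",
    "Content-Type": "application/json",
}

# Anthropic style: dedicated key header plus a required API version header
anthropic_headers = {
    "x-api-key": os.environ["ANTHROPIC_API_KEY"],
    "anthropic-version": "2023-06-01",
    "content-type": "application/json",
}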

Getting Started

Prerequisites

You'll need API keys for the services you want to use. Here's the recommended setup:

Primary Provider (Choose One or More)

Demeterics Managed LLM Key (Default for Groq-backed examples 2-15):

  1. Go to demeterics.com
  2. Sign up for a free account (100 free credits)
  3. Open Managed LLM Keys β†’ Create Key
  4. Copy the key (looks like dmt_xxx...)
  5. Export as DEMETERICS_API_KEY
  6. View traces + analytics at demeterics.com/dashboard

Already have Groq/OpenAI/Anthropic keys? Store them inside Demeterics β†’ Settings β†’ API Keys so every proxied request carries your BYOK (bring-your-own-key) credentials plus observability.

OpenAI (used directly for text-to-speech + optional variants):

  1. Go to platform.openai.com
  2. Sign up for an account
  3. Create a new API key
  4. Save as OPENAI_API_KEY

Anthropic (Claude, optional direct calls):

  1. Go to console.anthropic.com
  2. Sign up for an account
  3. Create a new API key
  4. Save as ANTHROPIC_API_KEY

Why use Demeterics? Track every interaction, monitor costs in real-time, debug issues with full conversation history, and compare model performance across providers.

Additional Services (For specific examples)

Tavily (for examples 09-11):

  1. Go to tavily.com
  2. Sign up for a free account
  3. Get your API key from the dashboard
  4. Save as TAVILY_API_KEY

Set Up Your Environment

Option 1: Using .env file (recommended)

# Copy the example file
cp .env.example .env

# Edit .env and add your keys
DEMETERICS_API_KEY=dmt_your_managed_llm_key
TAVILY_API_KEY=tvly_your_key_here
OPENAI_API_KEY=sk_optional_for_tts

Option 2: Export to shell

# Add to your ~/.bashrc or ~/.zshrc
export DEMETERICS_API_KEY="dmt_your_managed_llm_key"

# Reload your shell
source ~/.bashrc

Important: Never commit your .env file or expose your API keys!
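If you go the .env route with the Python examples, one common approach is python-dotenv; this is a sketch of that pattern, not a repo requirement (check the python/ README for the repo's own setup):

import os

from dotenv import load_dotenv  # pip install python-dotenv (assumed helper)

load_dotenv()  # reads .env from the current directory into os.environ

api_key = os.environ.get("DEMETERICS_API_KEY")
if not api_key:
    raise SystemExit("DEMETERICS_API_KEY is missing -- check your .env file")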

Quick Start Guide

Clone the Repository

git clone https://github.com/patdeg/ai101.git
cd ai101

Pick Your Language

  • New to programming? Start with bash/ - no installation needed
  • Know JavaScript? Check out nodejs/
  • Python fan? Head to python/
  • Systems programmer? Try go/
  • Building IoT devices? Explore arduino/ for ESP32/ESP8266

Each folder has a README with setup instructions and detailed code explanations.

Run Your First Example

Bash:

cd bash
./01_basic_chat.sh

Node.js:

cd nodejs
npm install
node 01_basic_chat.js

Python:

cd python
pip install -r requirements.txt
python 01_basic_chat.py

Go:

cd go
go run 01_basic_chat.go

Using Demeterics with AI 101 Examples

All AI 101 examples support Demeterics observability. Here's how to enable it:

Example: Basic Chat with Demeterics (Python)

import os
import requests

# Use Demeterics proxy instead of direct provider
BASE_URL = "https://api.demeterics.com/groq/v1"

# Managed LLM Key authentication (Demeterics proxy handles vendor auth)
headers = {
    "Authorization": f"Bearer {os.environ['DEMETERICS_API_KEY']}",
    "Content-Type": "application/json"
}

payload = {
    "model": "meta-llama/llama-4-scout-17b-16e-instruct",
    "messages": [
        {"role": "user", "content": "What is the capital of France?"}
    ]
}

response = requests.post(
    f"{BASE_URL}/chat/completions",
    headers=headers,
    json=payload
)

# Stop early on HTTP errors (e.g., a 401 from a missing key or a 429 rate limit)
response.raise_for_status()

result = response.json()
print(result["choices"][0]["message"]["content"])

Now check your Demeterics dashboard to see:

  • Full conversation history
  • Token usage and costs
  • Response latency
  • Model performance metrics

What Gets Tracked?

When you use Demeterics with AI 101 examples, every interaction is logged with:

  • Request details - Model, prompt, parameters
  • Response data - Completion, tokens, finish reason
  • Performance metrics - Latency, throughput
  • Cost tracking - Input/output tokens, total cost
  • Error handling - Rate limits, failures, retries
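The token counts behind the cost figures come from the usage block that OpenAI-compatible chat responses already include, so you can inspect them locally as well. Continuing from the Python example above:

# result is the parsed JSON from the chat completion above
usage = result.get("usage", {})
print("prompt tokens:    ", usage.get("prompt_tokens"))
print("completion tokens:", usage.get("completion_tokens"))
print("total tokens:     ", usage.get("total_tokens"))
print("finish reason:    ", result["choices"][0].get("finish_reason"))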

Learning Path

Follow this progression for best results:

  1. Week 1: Basics

    • Example 1: Basic Chat
    • Example 2: System Prompts
    • Example 3: Prompt Templates
  2. Week 2: Multimodal & Safety

    • Example 4: Vision Analysis
    • Example 5: Safety Check (Text)
    • Example 6: Safety Check (Image)
    • Example 7: Prompt Guard
  3. Week 3: Audio & Search

    • Example 8: Whisper Audio
    • Example 9: Tavily Search
    • Example 10: Tavily Extract
  4. Week 4: Advanced Features

    • Example 11: Tool Use & AI Agents
    • Example 12: Web Search (Groq)
    • Example 13: Code Execution
    • Example 14: Reasoning
    • Example 15: Text-to-Speech
  5. Practice & Build

    • Complete exercises in exercises/ directory
    • Build your own project using the patterns you've learned
    • Monitor everything with Demeterics!

Hands-On Practice Exercises

Ready to experiment? The AI 101 repository includes progressive exercises:

  • Exercise 1: Basic Chat - Temperature, tokens, cost tracking
  • Exercise 2: System Prompt - Personas, JSON mode, constraints
  • Exercise 3: Vision - Resolution, OCR, multi-image analysis
  • Exercise 4: Safety Check (Text) - Content moderation, validators
  • Exercise 5: Safety Check (Image) - Vision moderation, context
  • Exercise 6: Prompt Guard - Jailbreak detection, security
  • Exercise 7: Whisper Audio - Quality tests, languages, noise
  • Exercise 8: Tavily Search - Web search, time filters, domain control
  • Exercise 9: Tavily Extract - Content extraction, article analysis
  • Exercise 10: Tool Use - AI agents, function calling, autonomous workflows
  • Exercise 11: Web Search (Groq) - Built-in search integration
  • Exercise 12: Code Execution - Python code execution
  • Exercise 13: Reasoning - Step-by-step thinking with caching
  • Exercise 14: Text-to-Speech - Voice synthesis with 11 voices

Each exercise includes:

  • Progressive difficulty levels
  • Real-world applications
  • Reflection questions
  • Cost calculations

Common Issues

"Unauthorized" error:

  • Check your Managed LLM Key is set: echo $DEMETERICS_API_KEY
  • Make sure you exported it in your current shell or .env file
  • In the Demeterics dashboard, confirm the key is still active (rotate if needed)

"Model not found" error:

  • Copy model names exactly (they're case-sensitive)
  • Check provider documentation for current models

Image too large:

  • Resize images before encoding
  • Use JPEG format for smaller file sizes
  • Base64 encoding increases size by ~33%
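A quick way to check that ~33% growth before you send an image, using only the standard library (the filename is a placeholder):

import base64
import os

path = "product.jpg"  # any local image

raw_bytes = os.path.getsize(path)
with open(path, "rb") as f:
    encoded_bytes = len(base64.b64encode(f.read()))

print(f"on disk: {raw_bytes:,} bytes")
print(f"base64:  {encoded_bytes:,} bytes (~{encoded_bytes / raw_bytes:.0%} of original)")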

Audio file issues:

  • Max audio file size: 25 MB
  • Supported formats: mp3, wav, m4a, flac, ogg, webm
  • Cost is based on audio duration, not file size
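A simple pre-flight check against the 25 MB cap before uploading (filename is a placeholder):

import os

MAX_BYTES = 25 * 1024 * 1024  # 25 MB upload limit for audio files

size = os.path.getsize("meeting.m4a")
if size > MAX_BYTES:
    raise SystemExit(f"File is {size / 1_048_576:.1f} MB -- compress or split it before transcribing")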

Cost Examples (Using Demeterics Credits)

Here's what you can do with Demeterics credits:

Task                          | Credits Used  | Example
Basic chat (1 question)       | ~0.1 credits  | "What is the capital of France?"
Vision analysis               | ~0.3 credits  | Describe a product image
Safety check (text)           | ~0.05 credits | Moderate user comment
Whisper transcription (1 min) | ~0.07 credits | Transcribe meeting audio
Tool use (AI agent)           | ~0.5 credits  | Search web + summarize results
Reasoning chain               | ~1.5 credits  | Solve math problem step-by-step

New user promotion: Start with 100 free credits (worth $1) to explore all examples!
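As a rough budgeting aid based on the approximate figures above, here is how far those 100 free credits stretch:

# Approximate credits per task, taken from the table above
credits_per_task = {
    "basic chat": 0.1,
    "vision analysis": 0.3,
    "text safety check": 0.05,
    "1-minute transcription": 0.07,
    "agent tool use": 0.5,
    "reasoning chain": 1.5,
}

free_credits = 100
for task, cost in credits_per_task.items():
    print(f"{task:<24} ~{free_credits / cost:,.0f} runs on the free credits")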

Resources

  • AI 101 Repository - github.com/patdeg/ai101
  • Demeterics Documentation - demeterics.com
  • Provider Documentation - see the provider sites listed under Prerequisites

Next Steps

After completing AI 101 with Demeterics observability, you can:

  1. Build Production Applications

    • Chatbots with conversation history
    • Image analysis tools
    • Voice transcription apps
    • AI-powered search engines
  2. Monitor & Optimize

    • Use Demeterics dashboard to identify performance bottlenecks
    • Compare costs across different models and providers
    • Set up alerts for errors or high usage
    • Export data for custom analytics
  3. Advanced Features

    • Implement streaming responses (see the sketch after this list)
    • Build multi-step AI agents
    • Combine multiple API calls (e.g., audio β†’ safety check β†’ AI response)
    • Add evaluations to measure output quality
  4. Scale with Confidence

    • Track costs as usage grows
    • Monitor SLA compliance with latency metrics
    • Debug issues with full conversation replay
    • Share analytics with your team
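For the streaming item above, a minimal sketch that assumes the proxy passes through the provider's OpenAI-compatible server-sent-events format (each line is "data: <JSON chunk>", ending with "data: [DONE]"):

import json
import os

import requests

response = requests.post(
    "https://api.demeterics.com/groq/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['DEMETERICS_API_KEY']}"},
    json={
        "model": "meta-llama/llama-4-scout-17b-16e-instruct",
        "messages": [{"role": "user", "content": "Tell me a short story."}],
        "stream": True,
    },
    stream=True,  # let requests yield the response incrementally
)

for line in response.iter_lines():
    if not line or not line.startswith(b"data: "):
        continue
    data = line[len(b"data: "):]
    if data == b"[DONE]":
        break
    chunk = json.loads(data)
    if not chunk.get("choices"):
        continue
    # Each chunk carries a small "delta" with the next piece of text
    print(chunk["choices"][0]["delta"].get("content", ""), end="", flush=True)
print()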

Why Learn with AI 101 + Demeterics?

πŸŽ“ Complete Learning Path

  • 15 examples covering all major AI capabilities
  • 4 programming languages (choose your preference)
  • Progressive difficulty with hands-on exercises

πŸ›‘οΈ Safety-First Mindset

  • Content moderation built into every workflow
  • Prompt injection defense examples
  • Responsible AI practices throughout

πŸ“Š Production-Ready Observability

  • Track every interaction with Demeterics
  • Monitor costs, latency, and errors in real-time
  • Debug issues with full conversation history
  • Compare model performance across providers

πŸ’‘ 80% Comments, 20% Code

  • Every file heavily commented to explain WHY, not just WHAT
  • Perfect for self-paced learning or teaching others
  • Understand concepts, not just copy-paste

πŸš€ Ready to Build?

  1. Clone github.com/patdeg/ai101
  2. Sign up at demeterics.com (100 free credits!)
  3. Start with Example 1 in your favorite language
  4. Watch your progress in the Demeterics dashboard

Let's build safe, observable AI applications together!