
AI Systems Guide

Understand the AI models and systems powering PromptReports. Learn how to select, configure, and optimize AI providers for your research and report generation needs.

About AI Systems

PromptReports leverages multiple state-of-the-art AI models through OpenRouter.ai, giving you access to a wide range of large language models (LLMs) for different use cases. Understanding these systems helps you make informed decisions about which models to use for your specific research and report generation needs.

Our platform abstracts the complexity of working with multiple AI providers, allowing you to easily switch between models, compare outputs, and optimize for quality, speed, or cost depending on your requirements.
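Because OpenRouter exposes an OpenAI-compatible API, switching providers usually comes down to changing a single model identifier. The sketch below is illustrative rather than PromptReports internals: it uses the `openai` Python SDK pointed at OpenRouter, and the `OPENROUTER_API_KEY` environment variable and model IDs are assumptions about your own setup.

```python
import os
from openai import OpenAI

# OpenRouter is OpenAI-compatible, so the standard SDK works with a base_url
# override. The environment variable name is an assumption for this sketch.
client = OpenAI(
    base_url="https://openrouter.ai/api/v1",
    api_key=os.environ["OPENROUTER_API_KEY"],
)

def ask(model: str, prompt: str) -> str:
    """Send one prompt to the given model and return the text reply."""
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Switching providers is just a different model identifier:
print(ask("openai/gpt-4", "Summarize the key risks in this market."))
print(ask("anthropic/claude-3-opus", "Summarize the key risks in this market."))
```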

- **Multi-Model Access:** Access dozens of leading AI models from a single platform without managing multiple API keys.
- **Unified Interface:** Consistent experience across all models with standardized prompting and output handling.
- **Easy Switching:** Switch between models with a single click to compare results and find the best fit.
- **Enterprise Security:** All API communications are encrypted and processed according to enterprise security standards.

Available Models

We provide access to a variety of AI models optimized for different tasks:

| Provider | Model Family | Strengths | Best For |
| --- | --- | --- | --- |
| OpenAI | GPT-4, GPT-4 Turbo | Reasoning, instruction following, versatility | Complex analysis, report generation |
| Anthropic | Claude 3 (Opus, Sonnet, Haiku) | Long context, nuanced writing, safety | Research synthesis, detailed reports |
| Google | Gemini Pro, Gemini Ultra | Multimodal, factual accuracy | Data analysis, fact-checking |
| Meta | Llama 3, Code Llama | Open-source, coding, customization | Technical reports, code generation |
| Mistral | Mistral Large, Medium, Small | Efficiency, multilingual | Cost-effective tasks, European focus |
| Cohere | Command R+ | RAG, enterprise search | Research with citations |

- **Flagship Models:** Top-tier models like GPT-4 and Claude 3 Opus for the highest-quality outputs.
- **Fast Models:** Quick-response models for real-time applications and rapid iteration.
- **Specialized Models:** Purpose-built models for specific tasks like coding or embedding.

Model Selection Guide

Choosing the right model depends on your specific use case. Consider these factors:

| Use Case | Recommended Models | Why |
| --- | --- | --- |
| In-depth research reports | Claude 3 Opus, GPT-4 | Best reasoning and synthesis capabilities |
| Quick analysis & summaries | Claude 3 Sonnet, GPT-4 Turbo | Good balance of quality and speed |
| High-volume processing | Claude 3 Haiku, Mistral Small | Cost-effective for batch operations |
| Technical/code content | GPT-4, Code Llama | Strong code understanding and generation |
| Multilingual content | GPT-4, Mistral Large | Excellent non-English language support |
| Real-time applications | Claude 3 Haiku, GPT-3.5 Turbo | Lowest-latency response times |
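One lightweight way to apply the table above in code is a lookup from use case to a default model. A minimal sketch, assuming OpenRouter-style `provider/model` identifiers (the keys and IDs here are illustrative):

```python
# Default model per use case, mirroring the selection table above.
# Adjust the identifiers to the models enabled on your account.
MODEL_BY_USE_CASE = {
    "in_depth_report": "anthropic/claude-3-opus",
    "quick_summary": "anthropic/claude-3-sonnet",
    "high_volume": "anthropic/claude-3-haiku",
    "technical_content": "openai/gpt-4",
    "multilingual": "mistralai/mistral-large",
    "real_time": "openai/gpt-3.5-turbo",
}

def pick_model(use_case: str) -> str:
    # Fall back to a flagship model for unknown use cases.
    return MODEL_BY_USE_CASE.get(use_case, "openai/gpt-4")
```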
1. **Define your requirements.** Consider output quality, speed, cost constraints, and the specific capabilities you need.
2. **Start with a flagship model.** Begin testing with GPT-4 or Claude 3 Opus to establish a quality baseline.
3. **Test alternatives.** Try faster or cheaper models to see if they meet your quality requirements.
4. **Compare outputs.** Use A/B testing or pairwise comparison to evaluate models objectively (see the sketch after this list).
5. **Optimize for production.** Select the model that best balances your quality, speed, and cost needs.
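A minimal version of step 4's pairwise comparison is to run the same prompt through two candidate models and review the outputs side by side. This sketch reuses the hypothetical `ask` helper from the earlier switching example:

```python
def compare(prompt: str, model_a: str, model_b: str) -> None:
    """Print two models' answers to the same prompt for side-by-side review."""
    for model in (model_a, model_b):
        print(f"=== {model} ===")
        print(ask(model, prompt))
        print()

compare(
    "Write a one-paragraph executive summary of renewable energy trends.",
    "openai/gpt-4",
    "anthropic/claude-3-sonnet",
)
```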

Configuration Options

Fine-tune model behavior with these configuration parameters:

| Parameter | Range | Effect | Recommendation |
| --- | --- | --- | --- |
| Temperature | 0.0 to 2.0 | Controls randomness/creativity | 0.3-0.7 for reports, higher for brainstorming |
| Max Tokens | 1 to model limit | Maximum response length | Set based on expected output length |
| Top P | 0.0 to 1.0 | Nucleus sampling threshold | Usually keep at the default (1.0) |
| Frequency Penalty | -2.0 to 2.0 | Reduces word repetition | 0.0-0.5 for varied outputs |
| Presence Penalty | -2.0 to 2.0 | Encourages new topics | 0.0-0.3 for comprehensive coverage |
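All five parameters map directly onto fields of an OpenAI-compatible chat completion request. A sketch using the recommended ranges above and the hypothetical `client` from the earlier example:

```python
# A report-generation request using the recommendations from the table.
response = client.chat.completions.create(
    model="anthropic/claude-3-opus",
    messages=[{"role": "user", "content": "Draft a market analysis report."}],
    temperature=0.5,        # 0.3-0.7 recommended for reports
    max_tokens=2000,        # cap output at the length you actually need
    top_p=1.0,              # leave nucleus sampling at the default
    frequency_penalty=0.3,  # mild push against word repetition
    presence_penalty=0.2,   # mild push toward covering new topics
)
print(response.choices[0].message.content)
```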

- **Model Settings:** Configure default parameters for each model in your account settings.
- **Preset Configurations:** Save and reuse parameter configurations for different use cases (a sketch follows this list).
- **Context Windows:** Understand each model's context limit to manage long inputs effectively.
- **System Prompts:** Customize model behavior with system-level instructions.
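In PromptReports, presets live in your account settings; as a client-side illustration of the same idea, the hypothetical bundles below package the parameters from the table above for reuse per use case, again building on the earlier `client` sketch:

```python
# Hypothetical presets: named parameter bundles for common use cases.
PRESETS = {
    "report":     {"temperature": 0.5, "max_tokens": 4000, "frequency_penalty": 0.3},
    "brainstorm": {"temperature": 1.0, "max_tokens": 1000, "presence_penalty": 0.3},
    "summary":    {"temperature": 0.3, "max_tokens": 500},
}

def ask_with_preset(model: str, prompt: str, preset: str) -> str:
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        **PRESETS[preset],  # unpack the saved parameter bundle
    )
    return response.choices[0].message.content
```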

Performance & Costs

Understanding performance characteristics helps you optimize your usage:

| Model Tier | Relative Cost | Typical Latency | Use When |
| --- | --- | --- | --- |
| Flagship (GPT-4, Claude Opus) | High | 5-30 seconds | Quality is critical |
| Standard (GPT-4 Turbo, Claude Sonnet) | Medium | 2-10 seconds | A good balance is needed |
| Economy (GPT-3.5, Claude Haiku) | Low | 0.5-3 seconds | Speed or cost is the priority |
| Open Source (Llama, Mistral) | Very Low | 1-5 seconds | Budget-constrained projects |
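Latency varies with model, load, and output length, so it is worth measuring on your own prompts rather than relying on the rough figures above. A simple timing wrapper around the hypothetical `ask` helper:

```python
import time

def timed_ask(model: str, prompt: str) -> tuple[str, float]:
    """Return the reply plus wall-clock latency in seconds."""
    start = time.perf_counter()
    reply = ask(model, prompt)
    return reply, time.perf_counter() - start

reply, seconds = timed_ask("anthropic/claude-3-haiku", "List three report title ideas.")
print(f"{seconds:.1f}s: {reply}")
```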

- **Usage Tracking:** Monitor your AI usage and costs in real time through the dashboard.
- **Cost Optimization:** Use cheaper models for drafts and reserve expensive models for final outputs.

Cost optimization strategies:

  • Use faster, cheaper models for iteration and testing during development
  • Reserve flagship models for production and final report generation
  • Implement caching for repeated queries to avoid redundant API calls (see the sketch after this list)
  • Optimize prompt length to reduce token usage without sacrificing quality
  • Batch similar requests together when possible for efficiency
  • Set appropriate max token limits to avoid unnecessarily long responses
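As a sketch of the caching idea from the list above (assuming your workflow repeats identical queries; a real deployment would swap the dict for Redis or disk storage):

```python
import hashlib

# In-memory cache keyed by a hash of the model and prompt.
_cache: dict[str, str] = {}

def cached_ask(model: str, prompt: str) -> str:
    key = hashlib.sha256(f"{model}\x00{prompt}".encode()).hexdigest()
    if key not in _cache:
        _cache[key] = ask(model, prompt)  # only the first call hits the API
    return _cache[key]
```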

Best Practices

Maximize the effectiveness of AI systems with these recommendations:

- **Clear Instructions:** Provide specific, unambiguous prompts. Models perform best with clear guidance.
- **Structured Outputs:** Request outputs in structured formats (JSON, Markdown) for easier processing (see the example after this list).
- **Iterative Refinement:** Start with simple prompts and iterate based on the outputs.
- **Validate Outputs:** Always review AI outputs for accuracy, especially factual claims.
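For structured outputs, spell out the schema in the prompt and validate the reply before using it downstream. A minimal sketch with an invented schema, again using the hypothetical `ask` helper:

```python
import json

prompt = (
    "Extract the company name, founding year, and headquarters from the text "
    'below. Respond with only a JSON object with keys "name", "founded", "hq".\n\n'
    "Acme Corp was founded in 1999 and is based in Berlin."
)

raw = ask("openai/gpt-4", prompt)
try:
    record = json.loads(raw)  # validate before downstream processing
except json.JSONDecodeError:
    record = None  # fall back to a retry or manual review
print(record)
```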

| Do | Avoid |
| --- | --- |
| Provide context and examples | Assuming the model knows your domain |
| Specify the output format explicitly | Leaving the format to interpretation |
| Break complex tasks into steps | Asking for too much in one prompt |
| Test across multiple models | Assuming one model fits all tasks |
| Monitor for quality regression | Set-and-forget configurations |
| Use system prompts for consistent behavior | Repeating instructions in every user prompt |
1. **Start with proven prompts.** Use the template library as a starting point rather than building from scratch.
2. **Test systematically.** Use test datasets and evaluations to measure prompt and model performance objectively.
3. **Document what works.** Keep notes on successful configurations and prompt patterns for future reference.
4. **Share with your team.** Collaborate on prompt development to benefit from collective learning.