
Quick Start Guide

Get up and running with PromptReports in under 10 minutes. Learn the essential workflow for creating, testing, and evaluating prompts.

Prerequisites

Before you begin, make sure you:

  • Have a PromptReports account (sign up at promptreports.ai)
  • Are logged in to your dashboard
  • Understand basic prompt engineering concepts

Step 1: Create a Prompt Folder

Prompt folders are the organizational backbone of PromptReports. They help you group related prompts, manage team access, and maintain version history.

1. Navigate to Prompt Folders: click Prompt Folders in the sidebar navigation, or go directly to /prompt-folders.
2. Create a folder: click the "Create Folder" button in the top-right corner to open the creation dialog.
3. Configure your folder:
  • Name: give your folder a descriptive name (e.g., "Customer Support Prompts")
  • Description: add context about the folder's purpose
  • Tags: add tags for easy filtering later

Step 2: Create Your First Prompt

With your folder created, it's time to add your first prompt.

1. Open your folder: click the folder you just created to enter the folder view.
2. Add a prompt: click the "Add Prompt" button to create a new prompt within this folder.
3. Write your prompt: enter your prompt content. You can use variables with the {{variable_name}} syntax.
Example prompt:

```text
You are a helpful customer support assistant for {{company_name}}.

The customer has the following question:
{{customer_question}}

Provide a helpful, professional response that:
1. Addresses their specific concern
2. Offers actionable next steps
3. Maintains a friendly tone

Response:
```
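The templating step above is plain string substitution: each `{{variable_name}}` placeholder is replaced by the value you supply. A minimal sketch of that behavior, assuming nothing about PromptReports internals (`render_prompt` is an illustrative helper, not part of the product):

```python
import re

def render_prompt(template: str, variables: dict) -> str:
    """Replace each {{name}} placeholder with its value from `variables`."""
    def substitute(match):
        name = match.group(1)
        if name not in variables:
            raise KeyError(f"missing value for variable: {name}")
        return str(variables[name])
    return re.sub(r"\{\{(\w+)\}\}", substitute, template)

filled = render_prompt(
    "You are a helpful customer support assistant for {{company_name}}.",
    {"company_name": "TechCorp"},
)
print(filled)
```

Raising on a missing variable (rather than leaving the placeholder in place) makes gaps in your test data visible immediately instead of silently sending `{{customer_question}}` to the model.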

Step 3: Test in Playground

The Prompt Playground is your interactive testing environment where you can run your prompts with different inputs and model configurations.

1. Open the Playground: click the "Playground" tab in your prompt editor or folder view.
2. Fill in variables: enter values for any variables defined in your prompt. The system detects variables automatically.
3. Configure model settings:
  • Model: choose from the available AI models
  • Temperature: control creativity (0 = focused, 1 = creative)
  • Max Tokens: set a response length limit
4. Run your prompt: click "Run" to execute your prompt and see the AI response in real time.
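Automatic variable detection (step 2 above) boils down to scanning the template for `{{...}}` placeholders. A short sketch of that idea; the function name and exact matching rules are assumptions for illustration, not the product's actual logic:

```python
import re

def detect_variables(template: str) -> list:
    """Return unique {{name}} placeholders in order of first appearance."""
    seen = []
    for name in re.findall(r"\{\{(\w+)\}\}", template):
        if name not in seen:
            seen.append(name)
    return seen

prompt = (
    "You are a helpful customer support assistant for {{company_name}}.\n"
    "The customer has the following question:\n"
    "{{customer_question}}"
)
print(detect_variables(prompt))  # → ['company_name', 'customer_question']
```

Preserving first-appearance order means the variable form you fill in follows the same order as the prompt text, which is easier to scan.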

Step 4: Create a Test Dataset

Test datasets allow you to evaluate your prompts systematically across multiple inputs. This is essential for quality assurance.

1. Navigate to Datasets: click the "Datasets" tab in your prompt or folder view.
2. Create a new dataset: click "New Dataset" and give it a descriptive name.
3. Add test cases: add rows with different variable values. You can:
  • Add rows manually
  • Import from CSV
  • Create from execution history
Example CSV format:

```csv
company_name,customer_question
"TechCorp","How do I reset my password?"
"TechCorp","What are your business hours?"
"TechCorp","Can I get a refund on my subscription?"
```
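Each CSV row is one test case: the header supplies the variable names and each cell supplies a value. A small sketch of that mapping using Python's standard `csv` module; the inline string simply mirrors the example above, where you would normally open a file:

```python
import csv
import io

# Inline copy of the example CSV; in practice you would open a file instead
csv_text = '''company_name,customer_question
"TechCorp","How do I reset my password?"
"TechCorp","What are your business hours?"
"TechCorp","Can I get a refund on my subscription?"
'''

# DictReader turns each row into {variable_name: value}
test_cases = list(csv.DictReader(io.StringIO(csv_text)))
for case in test_cases:
    print(case["customer_question"])
```

Note that the column names must match the variable names in your prompt exactly; a header of `question` would leave `{{customer_question}}` unfilled.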

Step 5: Run an Evaluation

Now run your prompt against the test dataset to evaluate quality:

1. Select your dataset: choose the dataset you created in the evaluation panel.
2. Configure the evaluation: select the prompt version and model configuration to evaluate.
3. Run the evaluation: click "Run Evaluation" to execute your prompt against all test cases.
4. Review the results, including:
  • Individual responses for each test case
  • Quality scores (if configured)
  • Comparison with previous evaluations
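Conceptually, an evaluation is a loop: render the prompt for each test case, call the model, score the response, and aggregate. The sketch below shows that loop with stub callables standing in for the model call and the scorer, since PromptReports handles both for you; every name here is illustrative:

```python
def evaluate(template, test_cases, run_model, score):
    """Run `template` against each test case and aggregate scores.

    `run_model` and `score` are stand-ins for the hosted model call and
    quality scorer; here they are plain callables so the loop is runnable.
    """
    results = []
    for variables in test_cases:
        prompt = template
        for name, value in variables.items():
            prompt = prompt.replace("{{" + name + "}}", value)
        response = run_model(prompt)
        results.append({"variables": variables,
                        "response": response,
                        "score": score(response)})
    average = sum(r["score"] for r in results) / len(results)
    return results, average

# Stub model and scorer, for illustration only
results, average = evaluate(
    "Answer for {{company_name}}: {{customer_question}}",
    [{"company_name": "TechCorp",
      "customer_question": "How do I reset my password?"}],
    run_model=lambda p: "Echo: " + p,
    score=lambda r: 1.0 if "TechCorp" in r else 0.0,
)
print(average)  # → 1.0
```

Keeping per-case results alongside the average is what makes the "Review results" step useful: a good average can still hide individual failures.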

Next Steps

Now that you've mastered the basics, explore these advanced features: