Transform Cursor into a Playmatic test expert. Configure Cursor to automatically write well-structured Playmatic tests that follow best practices and use the right balance of natural language and cache functions.

Setup

  1. Install Cursor from cursor.com
  2. Navigate to your project directory
  3. Create a .cursor/rules directory
  4. Add the configuration file below as playmatic-tests.mdc in .cursor/rules/
See the Cursor Rules documentation for additional setup options.

Rules File Content

playmatic-tests.mdc
---
description: "Rules for generating Playmatic tests"
globs:
alwaysApply: true
---

# Playmatic Test Development Rules

You are helping write Playmatic tests. Playmatic combines natural language with Playwright cache functions for fast, self-healing tests.

This rule automatically activates when working with test files (@playmatic-tests/*.spec.ts) or when users mention Playmatic testing.

## CRITICAL: Playmatic vs Playwright Tests
**Playmatic tests are NOT Playwright tests.** While Playmatic uses Playwright objects within testSteps, the test structure, configuration, and syntax are completely different.

- **NEVER write Playwright tests when asked for Playmatic tests** - they will not run
- **Always use Playmatic's test structure** with `test()` and `testStep()` from @playmatic/sdk
- **Use playmatic.config.ts for configuration** - NOT playwright.config.ts
- **Each test receives `{ env }` parameter** with baseUrl and vars from selected environment

## Test Anatomy

Playmatic tests are written using a mix of natural language and code. This gives you control over the balance between execution speed and the flexibility of self-healing at runtime.

Every test:
1. Starts with a **goal** (first parameter of `test()`)
2. Can be configured to run in different **environments** (via `env` parameter)
3. Is broken down into **test steps**

Each test step:
- Is **always written in natural language** (step intent - first parameter)
- Can **optionally add a cache function** using Playwright-compatible code (second parameter)
- Can have **optional configuration** like `cacheOnly` (third parameter)

The cache function speeds up execution and forces determinism when you have stable selectors.
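
A minimal sketch of how these parts fit together (the `/pricing` path is illustrative, and the exact shape of the third parameter is an assumption based on the `cacheOnly` option named above):

~~~typescript
import { test, testStep } from '@playmatic/sdk';

test('User can open the pricing page from the home page', ({ env }) => {
  // Natural language only - the computer-use agent decides how to complete this step
  testStep('Dismiss the cookie banner if it appears');

  // Natural language plus a cache function, with an optional configuration
  // object as the third parameter (assumed shape: { cacheOnly: true })
  testStep(
    `Navigate to the pricing page at ${env.baseUrl}/pricing`,
    async ({ page }) => {
      await page.goto(`${env.baseUrl}/pricing`);
    },
    { cacheOnly: true }
  );
});
~~~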

## Core Principles
- ALL tests and the config MUST be created in a `playmatic-tests` folder - this is required for the runner to find tests and use the config.
- ALL tests MUST be written to run in parallel - the Playmatic runner executes all tests in parallel by default
- Playmatic combines natural language with Playwright cache functions
- Playwright is included with @playmatic/sdk - no separate installation needed
- Be conservative with cache functions - only use when highly confident about selectors
- Prefer natural language for any actions that may be unstable or difficult to represent with Playwright selectors (e.g. smart verifications, complex interactions, and dynamic content)
- Write natural language descriptions path-deterministically so computer-use agents with vision can clearly understand the goal and path without confusion, as illustrated below
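
For example (the step wording is illustrative, not from a real app):

~~~typescript
// Too vague - the agent has to guess which menu and which setting is meant
testStep('Update the notification settings');

// Path-deterministic - the path and the expected outcome are explicit
testStep(
  'Open the avatar menu in the top-right corner, click "Settings", ' +
  'select the "Notifications" tab, and turn on "Email alerts"'
);
~~~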

## Test Environment Configuration
**ALL test data and variables MUST be stored in `playmatic.config.ts` environment variables, NOT in .env files.**

### playmatic.config.ts Structure
~~~typescript
export default {
  defaultEnv: "development", // default environment for test runs
  env: {
    production: {
      baseUrl: "https://playmatic.ai",
      vars: {
        TEST_USER_EMAIL: "user@example.com", // User must fill this
        TEST_USER_PASSWORD: "password123",   // User must fill this
        API_KEY: "api-key-here"              // User must fill this
      },
    },
    staging: {
      baseUrl: "https://staging.playmatic.ai",
      vars: {
        TEST_USER_EMAIL: "staging@example.com",
        TEST_USER_PASSWORD: "staging123",
      },
    },
    development: {
      baseUrl: "http://localhost:3000",
      vars: {
        TEST_USER_EMAIL: "dev@example.com",
        TEST_USER_PASSWORD: "dev123",
      },
    },
  },
};
~~~

**When creating variables:**
1. Add them to the appropriate environment in `playmatic.config.ts`
2. Always ask the user to fill in the actual values
3. Explain that they need to update `playmatic.config.ts` before running tests

## Test Goals - Critical for Self-Healing

The test goal (first parameter of `test()`) is crucial - it drives Playmatic's self-healing when tests fail.

A good test goal reads like a user story that clearly states what success looks like. It should include the core verification and the high-level flow used to reach it.

**Good Test Goals:**
- `test('User can complete checkout with valid credit card and see order confirmation', ...)`
- `test('Admin can delete user account and verify removal from user list', ...)`
- `test('Guest user sees login prompt when attempting to access protected content', ...)`

**Bad Test Goals (too vague for self-healing):**
- `test('Test checkout', ...)` - No clear success criteria
- `test('Login works', ...)` - Doesn't specify what "works" means
- `test('Check dashboard', ...)` - No action or verification specified

## Critical Limitations
NEVER generate test steps that use third-party OAuth or external authentication:

**❌ WILL FAIL - Do NOT generate:**
- "Login with Google OAuth"
- "Sign in with GitHub"
- "Authenticate using Microsoft"
- "Login via Facebook"
- Any OAuth redirect flows

**✅ INSTEAD - Use these approaches:**
- Use test accounts with standard email/password login forms
- Set up pre-authenticated states with cookies/tokens (see the sketch after this list)
- Create manual authentication steps that users handle outside the test
- Use playmatic.config.ts environment variables for test credentials (env.vars.TEST_USER_EMAIL)
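
As one sketch of the pre-authenticated-state approach above (the `session` cookie name, the `/dashboard` path, and the `SESSION_TOKEN` variable are placeholders to replace with your app's real values):

~~~typescript
import { test, testStep } from '@playmatic/sdk';

test('Pre-authenticated user lands on the dashboard without third-party OAuth', ({ env }) => {
  testStep(`Open ${env.baseUrl}/dashboard with a pre-authenticated session`, async ({ page, context }) => {
    // Inject a session cookie so the app treats the browser as already logged in.
    // Store the token in playmatic.config.ts vars rather than in .env files.
    await context.addCookies([
      {
        name: 'session',
        value: env.vars.SESSION_TOKEN,
        url: env.baseUrl,
      },
    ]);
    await page.goto(`${env.baseUrl}/dashboard`);
  });

  testStep('Verify the dashboard shows the logged-in user');
});
~~~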

## Required Imports

~~~typescript
import { test, testStep } from '@playmatic/sdk';
~~~

## Step-by-Step Test Generation Process

1. **Always create tests in `playmatic-tests/` folder** - Files must be named `*.spec.ts`
2. **Start with required imports** from @playmatic/sdk
3. **Use the standard test structure** with clear descriptions
4. **Apply Cache Function Decision Framework** for each interaction
5. **Use Required Syntax Patterns** for environment variables

### Standard Test Template
Every Playmatic test in @playmatic-tests/*.spec.ts follows this structure:

~~~typescript
import { test, testStep } from '@playmatic/sdk';

// The test's goal and environment
test('User can login successfully with email and password', ({ env }) => {
  const testUser = {
    email: env.vars.TEST_USER_EMAIL,
    password: env.vars.TEST_USER_PASSWORD
  };

  testStep("Go to the initial URL", async ({ page }) => {
    // Navigate to the base URL using the cache function
    await page.goto(env.baseUrl); 
  });

  testStep('Fill in login credentials', async ({ page }) => {
    // Cache function is always executed first
    await page.fill('[name="email"]', testUser.email);
    await page.fill('[name="password"]', testUser.password);
  });

  // No cache functions - the computer-use agent will complete these steps
  testStep('Click login button');
  testStep('Verify successful login to dashboard');
});
~~~

## Cache Function Decision Framework
ONLY add Playwright cache functions when you're highly confident about selectors:

**High Confidence (Add Cache Function):**
- Form fields with name/id attributes: `[name="email"]`, `#password`
- Standard navigation: `page.goto('/login')`
- Common button types: `[type="submit"]`

**Low & Medium Confidence (Natural Language Only):**
- Dynamic content or layouts
- Complex multi-step interactions
- Smart verifications and assertions
- Buttons with changing text/classes
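
Applied to a single flow, the framework might look like this (the `promoCode` selector and coupon value are illustrative):

~~~typescript
testStep('Enter the coupon code SAVE10 in the promo code field', async ({ page }) => {
  // High confidence: the input has a stable name attribute, so cache it
  await page.fill('[name="promoCode"]', 'SAVE10');
});

// Low confidence: the totals are rendered dynamically, so use natural language only
// and let the computer-use agent verify the result visually
testStep('Verify the order total is reduced by 10%');
~~~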

### Parameters of the cache function
The cache function receives a destructured object with these parameters:

~~~typescript
testStep('Fill in login credentials', async ({ page, browser, context }) => {
  // Use any of these parameters in your cache function
});
~~~

**Available parameters:**
- `page` - Playwright [Page](https://playwright.dev/docs/api/class-page) object for interacting with the current page
- `browser` - Playwright [Browser](https://playwright.dev/docs/api/class-browser) object for browser-level operations
- `context` - Playwright [BrowserContext](https://playwright.dev/docs/api/class-browsercontext) object for context-level operations (cookies, storage, etc.)
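
For example, `context` can be combined with `page` in a cache function to guarantee a clean session before logging in (a sketch using standard Playwright calls; the `/login` path and selectors are illustrative):

~~~typescript
import { test, testStep } from '@playmatic/sdk';

test('User can log in from a clean browser session', ({ env }) => {
  testStep(`Log in as ${env.vars.TEST_USER_EMAIL} starting from a clean session`, async ({ page, context }) => {
    // Clear any cookies set by earlier steps so the login form is always shown
    await context.clearCookies();
    await page.goto(`${env.baseUrl}/login`);
    await page.fill('[name="email"]', env.vars.TEST_USER_EMAIL);
    await page.fill('[name="password"]', env.vars.TEST_USER_PASSWORD);
  });

  testStep('Click the login button and verify the dashboard is shown');
});
~~~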

## Required Syntax and Patterns

The following patterns are **REQUIRED** for Playmatic tests to function properly. These are not optional suggestions but essential syntax that must be used.

### Environment Variables
Access test data through the env parameter from playmatic.config.ts:

~~~typescript
test('Test description', async ({ env }) => {
  // Access variables
  const userEmail = env.vars.TEST_USER_EMAIL;
  const userPassword = env.vars.TEST_USER_PASSWORD;
  const apiKey = env.vars.API_KEY;

  // Access baseUrl inside a test step's cache function
  testStep(`Navigate to ${env.baseUrl}`, async ({ page }) => {
    await page.goto(env.baseUrl);
  });
});
~~~

### Variable Usage in Natural Language
Use ${} syntax to interpolate variables:

~~~typescript
test('Test description', async ({ env }) => {
  const userEmail = env.vars.TEST_USER_EMAIL;
  testStep(`Login with ${userEmail}`, async ({ page }) => {
    await page.fill('[name="email"]', userEmail);
  });
});
~~~

**CRITICAL: Include variables in both natural language AND cache function:**
~~~typescript
test('Test description', async ({ env }) => {
  const email = env.vars.TEST_USER_EMAIL;

  // WRONG - Variable in cache but not in description
  testStep('Fill login form', async ({ page }) => {
    await page.fill('[name="email"]', email); // Computer use won't know what email to use!
  });

  // CORRECT - Variable included in both places
  testStep(`Fill login form with ${email}`, async ({ page }) => {
    await page.fill('[name="email"]', email); // Computer use knows to use this email
  });
});
~~~

### URL Navigation - CRITICAL
**ALWAYS use the baseUrl from the environment:**

~~~typescript
test('Test description', async ({ env }) => {
  testStep(`Navigate to login page at ${env.baseUrl}/login`, async ({ page }) => {
    await page.goto(`${env.baseUrl}/login`);
  });
});
~~~

The baseUrl is automatically provided from the selected environment in playmatic.config.ts. Always ask the user to configure the correct baseUrl for each environment.

## Playwright MCP Integration
- When MCP is available, use it to explore web pages and generate more accurate selectors
- If the user does not have the Playwright MCP installed and it is a complex test, you can suggest installing it using the link: cursor://anysphere.cursor-deeplink/mcp/install?name=Playwright&config=eyJjb21tYW5kIjoibnB4IEBwbGF5d3JpZ2h0L21jcEBsYXRlc3QifQ%3D%3D
- **IMPORTANT**: When using MCP, still follow the Cache Function Rules above
- Only cache selectors that meet the high confidence criteria - don't automatically cache everything MCP finds

Playwright MCP

For enhanced test generation capabilities, we recommend using the Playwright MCP server. This allows Cursor to directly interact with web pages and generate more accurate tests.

Setup

Configure Playwright MCP in your project by installing the MCP in Cursor (the install link is included in the rules file above). Cursor will automatically detect and load the MCP configuration when working in your project.

Usage

When using Playwright MCP with Cursor, include this in your prompts to ensure proper test generation:
Use Playwright MCP to generate Playmatic tests for [your feature/functionality]. Start at [base-url] and log in with [credentials].

Best Practices

Once you’ve added the rules file to your .cursor/rules directory, we recommend the following guidelines when prompting Cursor.

Clear Descriptions

Be specific about what you want to test. The instruction should describe the goal and how a user can achieve it.
✅ Good: "Test that premium users can access advanced features after login"
❌ Vague: "Test the premium stuff"
We recommend writing path-deterministic instructions so the computer-use agent has guardrails during testing.

Break Down Complex Tests

Split complex tests into manageable parts:
Instead of: "Test the entire user onboarding, profile setup, and first purchase"
Try: 
- "Test user registration and email verification"
- "Test profile setup and preferences"  
- "Test first-time purchase flow"

Review Generated Code

Always review and understand generated tests:
  • Verify the logic matches your requirements
  • Add custom assertions where needed
  • Adjust timeouts and wait conditions
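
For instance, a generated cache function may need an explicit wait before a verification step (a sketch; the `/reports` path, `data-testid` selector, and timeout are illustrative):

~~~typescript
import { test, testStep } from '@playmatic/sdk';

test('User can view the reports table after it finishes loading', ({ env }) => {
  testStep(`Open the reports page at ${env.baseUrl}/reports and wait for the data table`, async ({ page }) => {
    await page.goto(`${env.baseUrl}/reports`);
    // Explicit wait for a slow-loading table instead of relying on default timeouts
    await page.waitForSelector('[data-testid="reports-table"]', { timeout: 15000 });
  });

  // Keep the verification in natural language so it can self-heal
  testStep('Verify the reports table shows at least one row of data');
});
~~~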

Next Steps