
AI Token Counter

The AI Token Counter estimates how many tokens your prompt, text, or chat message will use across GPT, Claude, Gemini, and other language models. It also shows characters, words, token chips, and approximate input cost without uploading anything.

[Live counters: tokens, characters, words, estimated cost, context used, remaining context, density, pages, lines, bytes]


Token planning details

What to paste

Use this page for prompts, code, transcripts, article drafts, documentation, emails, JSON, markdown, and chat messages before sending them to an AI model.

What to check

Watch tokens, context used, remaining context, and estimated cost. Leave room for the model response, tool calls, citations, or JSON output.
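The budgeting described above can be sketched in a few lines. This is an illustrative sketch, not the tool's actual code; the context-window and reserve figures passed in are example numbers, not real model limits.

```javascript
// Budget a prompt against a model's context window, reserving room
// for the model's response (and any tool calls or JSON output).
function contextBudget(promptTokens, contextWindow, reserveForOutput) {
  const remaining = contextWindow - promptTokens - reserveForOutput;
  const usedPct = Math.round((promptTokens / contextWindow) * 100);
  return { remaining, usedPct, fits: remaining >= 0 };
}

// Example: a 2,000-token prompt against an 8,000-token window,
// keeping 1,000 tokens free for the reply.
const budget = contextBudget(2000, 8000, 1000);
```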

Why estimates differ

Every model family tokenizes text differently. Code, symbols, whitespace, emojis, and non-English text can shift counts more than normal prose.

Content type       | Typical tokens  | Useful for
Short prompt       | 50-300          | Quick generation, classification, rewrite tasks
Email or memo      | 150-600         | Summaries, replies, editing, tone checks
One page of prose  | 600-900         | Article drafts, study notes, page analysis
Code file          | 800-3,000+      | Review, refactor, test generation, debugging
Transcript hour    | 8,000-12,000+   | Meeting summaries, call analysis, action items

How it works

This utility runs as static HTML, CSS, and vanilla JavaScript. Model metadata is bundled in a local data file shipped with the page, pricing math is calculated in your browser, and every control works without sending data to any API.
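The local pricing math reduces to scaling a token count by a per-million-token rate. A minimal sketch, with an illustrative rate only (real rates belong on vendor pricing pages):

```javascript
// Estimated input cost in USD: tokens scaled by the model's
// price per one million input tokens.
function estimateInputCost(tokens, pricePerMillionUSD) {
  return (tokens / 1_000_000) * pricePerMillionUSD;
}

// Example: 500,000 tokens at a hypothetical $3 per million tokens.
const cost = estimateInputCost(500_000, 3);
```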

Pricing changes frequently. Last updated: April 27, 2026. Verify against official vendor pricing pages before relying on estimates.

Frequently asked questions

What is a token?

A token is a small chunk of text that an AI model reads, often a word fragment, punctuation mark, or short word. English text averages about four characters per token, but exact counts depend on the tokenizer.
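The four-characters-per-token rule of thumb gives a quick first-pass estimate. A minimal sketch, assuming that heuristic (real tokenizers will differ, especially for code and non-English text):

```javascript
// Rough token estimate from the ~4 characters-per-token heuristic
// for English prose. Rounds up so short strings never estimate zero
// tokens for non-empty input.
function estimateTokens(text) {
  return Math.ceil(text.length / 4);
}
```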

How are tokens counted for non-OpenAI models?

This browser tool uses a local estimate based on text length plus model-specific adjustment factors. Use official provider tokenizers for final billing-sensitive counts.
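A length-plus-adjustment estimate might look like the sketch below. The multipliers here are invented for illustration; the tool's real factors live in its bundled model data, and official provider tokenizers remain the authority for billing.

```javascript
// Hypothetical per-model adjustment factors (NOT the tool's real values).
const MODEL_FACTORS = { gpt: 1.0, claude: 1.05, gemini: 0.95 };

function estimateForModel(text, model) {
  const base = Math.ceil(text.length / 4);     // char-based baseline
  const factor = MODEL_FACTORS[model] ?? 1.0;  // unknown model: no adjustment
  return Math.round(base * factor);
}
```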

Does this tool send my text anywhere?

No. The token counter runs entirely in your browser and does not call any model provider API. Your text remains on your device unless you copy it elsewhere.

Why does the same text give different counts on different models?

Different model families use different vocabularies and encodings. Code, symbols, whitespace, and multilingual text can change tokenization dramatically.

Why is my Chinese or Japanese text using so many tokens?

Many tokenizers represent CJK characters with more tokens per character than English because their vocabularies skew toward English text. Modern multilingual encodings handle this better, but exact counts still vary by model.

What is cl100k_base vs o200k_base?

cl100k_base is the older OpenAI tokenizer encoding used by GPT-3.5 and GPT-4 era models. o200k_base is the newer encoding used by GPT-4o-style models and by the GPT entries in this tool's bundled model data.

Does ChatML formatting really add tokens?

Yes. Chat messages include role and boundary tokens, so a chat prompt is slightly longer than the visible text alone.
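The overhead can be modeled as a small fixed cost per message plus a few tokens priming the reply. The figures below are assumed for illustration; the exact wrapper cost varies by model and chat format.

```javascript
// Assumed ChatML-style overhead (illustrative, not exact):
const TOKENS_PER_MESSAGE = 4; // role + boundary markers per message
const REPLY_PRIMING = 3;      // tokens that prime the assistant's reply

// Given the token counts of each message's visible text, return the
// total prompt size including formatting overhead.
function chatOverhead(messageTokenCounts) {
  const content = messageTokenCounts.reduce((sum, n) => sum + n, 0);
  return content + messageTokenCounts.length * TOKENS_PER_MESSAGE + REPLY_PRIMING;
}
```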
