Latest Posts by Olko Koval

Metrics I care about for promptctl: "did someone run it twice?" and "did they set a custom memory dir?" Usage > stars. Building in public, one release at a time.
You can try promptctl at prompt-ctl.com without installing anything: paste a prompt, hit enhance, see the result. Sign in with GitHub or Google for 10 free tries.
Still free. Still no sign-up required to read docs or install the CLI.
Built promptctl because I was tired of pasting "you are an expert X, do Y, output Z" into every chat. Now it's `promptctl enhance --template=review`. Templates live in ~/.promptctl. Same idea, 10x less copy-paste.
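For the curious, a minimal sketch of what that setup might look like. Only `promptctl enhance --template=review` and the ~/.promptctl directory come from the post above; the file name, template contents, and stdin usage are illustrative assumptions, not promptctl's documented format.

```sh
# Hypothetical template file; the name and contents are assumptions for illustration.
$ cat ~/.promptctl/review.tmpl
You are an expert code reviewer. Review the input for correctness,
style, and security. Output a numbered list of findings.

# The command from the post, assuming the raw prompt is piped in on stdin.
$ echo "check this function for bugs" | promptctl enhance --template=review
```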
Why a single binary for promptctl? No npm install, no pip, no venv. You're in the terminal to fix a prompt — you run one command. Friction kills adoption. One binary, one install, done.
Most LLM cost problems aren't about model choice; they're about prompt structure. Unstructured prompts cost 2-3x in follow-up clarification calls. We built a CLI that structures them: collapse three round-trips into one and that's roughly 67% fewer tokens. Same quality.
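To make the before/after concrete, an illustrative sketch of the kind of restructuring meant here. The prompts and the enhanced output are invented for illustration, and bare `promptctl enhance` reading stdin is an assumption; this is not promptctl's actual output.

```sh
# Before: unstructured prompt; role, task, and output format are implicit,
# so the model often needs follow-up turns (the 2-3x above).
$ echo "look at this diff and tell me if it's ok" | promptctl enhance

# Illustrative enhanced result (invented, not promptctl's actual output):
# Role: expert code reviewer.
# Task: review the diff below for bugs and regressions.
# Output: a numbered list of findings with severity.
```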
prompt-ctl.com
Okay so here’s my actual Bluesky-is-dying hypothesis:
The entire web is dying. Users aren't going from Bluesky to another site (x/insta/threads/tiktok). Users are going to chatbots.
I know traffic to news sites has cratered (like 90%). My hunch is traffic to all the social platforms is down too.
A black and white soccer ball with handwritten numbers resting on a crumpled red plastic tarp, urban documentary photography.
Random things left behind in the city can form such visually pleasing, quiet compositions. Finding unexpected color and texture on the streets of The Hague.
Fujifilm X-T5
Fujinon 23mm f/1.4
#xt5 #fujifilmxt5 #fujinon23mm14
Swans chilling in The Hague park
Finally warm in The Hague 🦢