This project helps you generate detailed descriptions for kinks/activities using local LLMs through Ollama. It processes a CSV file containing kink information and generates structured responses with consistent formatting.
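At a high level, each CSV row is turned into a prompt and sent to Ollama's HTTP API. The sketch below is illustrative only: `buildPrompt`, `generateDescription`, and the row field names (`name`, `description`) are hypothetical, but `POST /api/generate` is Ollama's standard generate endpoint.

```javascript
// Hypothetical sketch of one pipeline step: CSV row -> prompt -> Ollama.
// Field names and prompt wording are illustrative, not the project's actual code.
function buildPrompt(row) {
  return [
    "Write a detailed, structured description of the following activity.",
    `Name: ${row.name}`,
    `Notes: ${row.description}`,
  ].join("\n");
}

// Ollama's generate endpoint (documented in the ollama/ollama repository).
async function generateDescription(row, model = "deepseek-r1:70b") {
  const res = await fetch("http://localhost:11434/api/generate", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ model, prompt: buildPrompt(row), stream: false }),
  });
  const data = await res.json();
  return data.response; // the generated text
}
```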
git clone https://github.com/drourke/kink-visuals.git
cd kink-visuals
npm install
# Install Ollama (if not already installed)
# See https://github.com/ollama/ollama for installation instructions
curl -fsSL https://ollama.ai/install.sh | sh

# Start Ollama server
ollama serve

# Pull the deepseek-r1:70b model (or any other model you want to use)
ollama pull deepseek-r1:70b
You can check if Ollama is running and if the required model is available:
npm run check
If the model is not available, you’ll need to pull it:
ollama pull deepseek-r1:70b
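The availability check presumably queries Ollama's `/api/tags` endpoint, which lists locally pulled models. A hedged sketch — `hasModel` and `checkModel` are hypothetical helpers, while the endpoint and its response shape are Ollama's:

```javascript
// Decide whether a model appears in Ollama's /api/tags response.
// Ollama reports names with a tag suffix (e.g. "llama3:latest"),
// so a bare name also matches any tagged variant.
function hasModel(tagsResponse, modelName) {
  return (tagsResponse.models || []).some(
    (m) => m.name === modelName || m.name.startsWith(modelName + ":")
  );
}

// Hypothetical wrapper around the running Ollama server.
async function checkModel(modelName = "deepseek-r1:70b") {
  const res = await fetch("http://localhost:11434/api/tags");
  return hasModel(await res.json(), modelName);
}
```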
The script supports using different models with Ollama. By default it uses `deepseek-r1:70b`, but you can use any model available in Ollama.
Check available models:
npm run check
Use a specific model:
npm run start -- --model llama3
We’ve included shortcuts for some popular models:
npm run llama3
npm run llama3:one # Process just one kink
npm run check:llama3 # Check if the model is available
npm run mistral
npm run mistral:one # Process just one kink
npm run check:mistral # Check if the model is available
Run the script to process all kinks:
npm start
To process just one kink:
npm run one
To process a batch of kinks (default: 5) with a delay between each:
npm run batch
To process all kinks without prompting after each one:
npm run auto
To process a batch of kinks automatically:
npm run auto-batch
To force the script to start from the beginning, ignoring progress:
npm run force
To process just one kink, starting from the beginning:
npm run force-one
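The shortcut scripts above are presumably thin wrappers that pass the corresponding flags to `index.js`. A hypothetical excerpt of what the `scripts` section of `package.json` might look like — the exact flag mapping and script targets are assumptions, not the project's actual configuration:

```json
{
  "scripts": {
    "start": "node index.js",
    "one": "node index.js --count 1",
    "batch": "node index.js --count 5 --delay 15",
    "auto": "node index.js --no-prompt",
    "auto-batch": "node index.js --count 5 --no-prompt",
    "force": "node index.js --force-start",
    "force-one": "node index.js --count 1 --force-start"
  }
}
```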
You can specify the number of kinks to process, the delay between them, and whether to skip prompts:
node index.js --count 10 --delay 15 --no-prompt --force-start --model llama3
- `--count <number>`: Number of kinks to process before stopping
- `--delay <seconds>`: Delay in seconds between processing each kink
- `--no-prompt`: Skip prompting after each kink (fully automated mode)
- `--force-start`: Force start from the beginning, ignoring progress
- `--debug`: Enable debug mode with verbose logging
- `--model <name>`: Specify which model to use

All generated prompts and responses are saved in the `kink_descriptions` directory:
- `[kink_id]_prompt.txt`: the prompt sent to the model
- `[kink_id].md`: the generated response
Your progress is saved in `progress.json`. If you stop the script and run it again later, it will continue from where you left off.
If Ollama isn't running, start it with `ollama serve` or by opening the Ollama app. If the model is missing, pull it with `ollama pull deepseek-r1:70b`.