This project and everyone participating in it are governed by our Code of Conduct. By participating, you are expected to uphold this code. Please report unacceptable behavior to the project maintainers.
We're looking for dedicated contributors to help maintain and grow this project. If you're interested in becoming a core contributor, please fill out our Contributor Application Form.
Clone the repository:
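A minimal sketch of the clone step; the repository URL below is an assumption (use your own fork if you plan to contribute):

```bash
# Clone the repository (URL is an assumption -- substitute your fork)
git clone https://github.com/stackblitz-labs/bolt.diy.git
cd bolt.diy
```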
Install dependencies:
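A sketch of the install step, assuming pnpm is the package manager (the troubleshooting section later in this guide refers to running the app with pnpm):

```bash
# Install pnpm globally if you don't have it, then the project dependencies
npm install -g pnpm
pnpm install
```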
Set up environment variables: copy `.env.example` to `.env.local`.
Optionally set the debug level:
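For example, in `.env.local` (the variable name `VITE_LOG_LEVEL` is an assumption based on the project's Vite setup; check `.env.example` for the exact name):

```bash
# .env.local -- assumed variable name; verify against .env.example
VITE_LOG_LEVEL=debug
```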
Optionally set the context size:
Some example context values for the qwen2.5-coder:32b model:
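As one sketch, using the value cited later in this guide (a context of 24576 for qwen2.5-coder reportedly uses about 32 GB of VRAM), set this in `.env.local`:

```bash
# Limit the model's context size; 24576 ~ 32 GB VRAM for qwen2.5-coder (per this guide)
DEFAULT_NUM_CTX=24576
```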
Important: Never commit your `.env.local` file to version control. It's already included in `.gitignore`.
Note: You will need Google Chrome Canary to run this locally if you use Chrome! It's an easy install and a good browser for web development anyway.
Run the test suite with:
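Assuming the conventional pnpm script name (`test` is an assumption; confirm in `package.json`):

```bash
pnpm test
```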
To deploy the application to Cloudflare Pages:
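A sketch of the deploy step, assuming a `deploy` script that wraps Wrangler (the script name is an assumption; check `package.json`):

```bash
# Assumed convenience script wrapping "wrangler pages deploy"
pnpm run deploy
```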
Make sure you have the necessary permissions and that Wrangler is correctly configured for your Cloudflare account.
This guide outlines various methods for building and deploying the application using Docker.
NPM scripts are provided for convenient building:
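The script names below are assumptions (verify the exact names in `package.json`); they would wrap the `docker build` commands shown next:

```bash
# Assumed convenience scripts -- confirm the exact names in package.json
npm run dockerbuild        # development image
npm run dockerbuild:prod   # production image
```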
You can use Docker's target feature to specify the build environment:
```bash
# Development build
docker build . --target bolt-ai-development

# Production build
docker build . --target bolt-ai-production
```
Use Docker Compose profiles to manage different environments:
```bash
# Development environment
docker-compose --profile development up

# Production environment
docker-compose --profile production up
```
After building using any of the methods above, run the container with:
```bash
# Development
docker run -p 5173:5173 --env-file .env.local bolt-ai:development

# Production
docker run -p 5173:5173 --env-file .env.local bolt-ai:production
```
Coolify provides a straightforward deployment process:
The `docker-compose.yaml` configuration is compatible with VS Code dev containers:
Ensure you have the appropriate `.env.local` file configured before running the containers. This file should contain:

- API keys
- Environment-specific configurations
- Other required environment variables
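A hypothetical `.env.local` sketch (the variable names are illustrative placeholders; the project's `.env.example` lists the exact keys it supports):

```bash
# Illustrative placeholders only -- copy the real variable names from .env.example
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-ant-...
OLLAMA_API_BASE_URL=http://127.0.0.1:11434
VITE_LOG_LEVEL=debug
```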
The `DEFAULT_NUM_CTX` environment variable can be used to limit the maximum context size (in tokens) used by the qwen2.5-coder model. For example, to limit the context to 24576 tokens (which uses about 32 GB of VRAM), set `DEFAULT_NUM_CTX=24576` in your `.env.local` file.
First off, thank you for considering contributing to bolt.diy! This fork aims to expand the capabilities of the original project by integrating multiple LLM providers and enhancing functionality. Every contribution helps make bolt.diy a better tool for developers worldwide.
- Be specific about your stack: Mention the frameworks or libraries you want to use (e.g., Astro, Tailwind, ShadCN) in your initial prompt. This ensures that bolt.diy scaffolds the project according to your preferences.
- Use the enhance prompt icon: Before sending your prompt, click the enhance icon to let the AI refine your prompt. You can edit the suggested improvements before submitting.
- Scaffold the basics first, then add features: Ensure the foundational structure of your application is in place before introducing advanced functionality. This helps bolt.diy establish a solid base to build on.
- Batch simple instructions: Combine simple tasks into a single prompt to save time and reduce API credit consumption. For example: "Change the color scheme, add mobile responsiveness, and restart the dev server."
Check out our Contribution Guide for more details on how to get involved!
Visit our Roadmap for the latest updates.

New features and improvements are on the way!
bolt.diy began as a small showcase project on @ColeMedin's YouTube channel to explore editing open-source projects with local LLMs. However, it quickly grew into a massive community effort!
We’re forming a team of maintainers to manage demand and streamline issue resolution. The maintainers are rockstars, and we’re also exploring partnerships to help the project thrive.

While local LLMs are improving rapidly, larger models like GPT-4o, Claude 3.5 Sonnet, and DeepSeek Coder V2 236b still offer the best results for complex applications. Our ongoing focus is to improve prompts, agents, and the platform to better support smaller local LLMs.
This generic error message means something went wrong. Check both:

- The terminal (if you started the app with Docker or pnpm).
- The developer console in your browser (press F12 or right-click > Inspect, then go to the Console tab).
This error is sometimes resolved by restarting the Docker container. If that doesn’t work, try switching from Docker to pnpm or vice versa. We’re actively investigating this issue.
A blank preview often occurs due to hallucinated bad code or incorrect commands.

To troubleshoot:

- Check the developer console for errors.
- Remember, previews are core functionality, so the app isn’t broken! We’re working on making these errors more transparent.
Local LLMs like Qwen-2.5-Coder are powerful for small applications but still experimental for larger projects. For better results, consider using larger models like GPT-4o, Claude 3.5 Sonnet, or DeepSeek Coder V2 236b.
Got more questions? Feel free to reach out or open an issue in our GitHub repo!