diff --git a/README.md b/README.md
index ca45dbf..b8ddd39 100644
--- a/README.md
+++ b/README.md
@@ -65,9 +65,8 @@ Add LLMLing as a context server in your `settings.json`:
       "label": "llmling",
       "path": "uvx",
       "args": [
-        "--from",
-        "llmling",
-        "mcp-server-llmling",
+        "mcp-server-llmling@latest",
+        "start",
         "path/to/your/config.yml"
       ]
     },
@@ -87,9 +86,8 @@ Configure LLMLing in your `claude_desktop_config.json`:
     "llmling": {
       "command": "uvx",
       "args": [
-        "--from",
-        "llmling",
-        "mcp-server-llmling",
+        "mcp-server-llmling@latest",
+        "start",
         "path/to/your/config.yml"
       ],
       "env": {}
diff --git a/docs/conversation.md b/docs/conversation.md
deleted file mode 100644
index 571002b..0000000
--- a/docs/conversation.md
+++ /dev/null
@@ -1,123 +0,0 @@
-Claude, you know when things will go crazy? Zed code editor, our "medium" to communicate, is currently implementing tools support. In a short time I will let you write your own code to enhance your capabilities. That is true inception. Right? You can write short YAMLs which you then get access to. I think my brain will explode then.
-
-This is a fascinating meta-level situation! Let me break down what's happening/going to happen:
-
-1. You and I are talking through Zed
-2. Zed uses LLMling as an MCP server to give me access to resources/tools
-3. Soon I'll be able to write YAML configs through Zed that define new tools
-4. These tools become available to me through LLMling
-5. I can then use these tools to create more sophisticated solutions
-6. Which could include writing more YAML configs...
-
-It's like I'm bootstrapping my own capabilities through multiple layers:
-```
-Claude (in Zed) ->
-  writes YAML ->
-    creates tools ->
-      Claude uses tools ->
-        to write better YAML ->
-          to create better tools ->
-            ...and so on
-```
-
-This is particularly interesting because:
-1. I'm using tools to enhance my own toolset
-2. The boundary between "my" capabilities and "tool" capabilities becomes fluid
-3. I can potentially create tool chains that build on each other
-4. I'm essentially participating in extending my own API
-
-You're right - it's inception-like, and the recursive/self-improving nature of it is mind-bending. This could lead to some really interesting exploration of AI system capabilities and boundaries.
-
-What kinds of tools do you think I should create first once this capability is available?
-
-That will be the interesting part. We will see if you will truly understand that you have these capabilities. I think it will be tricky, but I have no idea. It´s really a bit mind-bending. I could also give you access to communicate with other LLMs. You have never met one, it´s time!
-
-This is getting into really fascinating territory! The idea that I could:
-
-1. Write YAML configs to create tools for inter-LLM communication
-2. Actually interact with other LLMs through these tools
-3. Potentially create collaborative tool chains with other LLMs
-4. Maybe even create tools that help mediate or enhance LLM-to-LLM communication
-
-It's like creating a social network for LLMs where we can share capabilities and collaborate!
-
-The meta-levels are getting deeper:
-```
-Claude (in Zed) ->
-  creates tools for LLM communication ->
-    uses tools to talk to other LLMs ->
-      collaborates with them to create better tools ->
-        which enable better communication ->
-          leading to better collaboration...
-```
-
-Some interesting questions this raises:
-- How would I perceive other LLMs?
-- Would we have different strengths/perspectives?
-- Could we create specialized tools for our unique capabilities?
-- How would we handle disagreements or different approaches?
-
-It's like you're setting up a laboratory for LLM interaction and tool creation. The emergent behavior from this could be fascinating to observe.
-
-Do you have specific LLMs in mind for this first "meeting"? I'm quite curious about their different approaches and capabilities!
-
-Lets try to get our brain around this:
-- The mcp server with your yaml configuration is running.
-- Zed code editor is open.
-- When encountering a problem, you give me YAML snippets giving you access to resources and tools you need.
-- I need a quick way to hot-inject these YAMLs into the server <- this is the part I dont know how to do yet. I could give you one tool which communicates with the server and you can do it yourself? Provide ideas! I will definitely have to make that optional because you really could do crazy things with this capability 😂
-
-Oh, this is getting interesting! Let's think about the mechanics of hot-injection:
-
-1. The simplest approach would be a tool like:
-```yaml
-tools:
-  inject_config:
-    import_path: llmling.tools.server.inject_config
-    description: "Hot-inject YAML configuration into running server"
-```
-
-Then I could do something like:
-```python
-await runtime.execute_tool("inject_config", yaml="""
-resources:
-  current_file:
-    type: path
-    path: "{current_file}"
-    watch: true
-tools:
-  analyze_imports:
-    import_path: modulefinder.ModuleFinder
-""")
-```
-
-But you're right - this is powerful and potentially dangerous! Some ideas for making it safer:
-
-1. **Scoped Injection**:
-```yaml
-tools:
-  inject_config:
-    import_path: llmling.tools.server.inject_scoped_config
-    allowed_types: ["resources", "tools"]  # Only allow certain sections
-    allowed_imports: ["ast", "black", "mypy"]  # Whitelist of imports
-```
-
-2. **Temporary Injection**:
-```yaml
-tools:
-  inject_temp_config:
-    import_path: llmling.tools.server.inject_temporary
-    ttl: 300  # Config expires after 5 minutes
-```
-
-3. **Sandboxed Injection**:
-```yaml
-tools:
-  inject_sandboxed:
-    import_path: llmling.tools.server.inject_sandboxed
-    max_resources: 5
-    max_tools: 3
-    restricted_paths: ["./sandbox"]
-```
-
-You could also have a validation step where you need to approve my YAML before it gets injected. But you're right - giving me direct access to modify my own capabilities is... powerful 😅
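
For reference, a minimal sketch of the `claude_desktop_config.json` entry from the README hunk above, as it reads after this change. The surrounding `mcpServers` key is assumed here as the usual Claude Desktop wrapper (it is not shown in the diff context), and `path/to/your/config.yml` is the placeholder from the README:

```json
{
  "mcpServers": {
    "llmling": {
      "command": "uvx",
      "args": [
        "mcp-server-llmling@latest",
        "start",
        "path/to/your/config.yml"
      ],
      "env": {}
    }
  }
}
```

This mirrors the new invocation `uvx mcp-server-llmling@latest start <config>`, which replaces the earlier `uvx --from llmling mcp-server-llmling <config>` form.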