Rhai playground (using WebAssembly) #169
Comments
Added an entry in #100.
This may be quite possible because the parser is fast enough. If you throttle the parsing to run only during pauses of 500ms or so, it'll probably work fine... However, the parser currently bugs out at the first error, which is not ideal. To have a good experience, we really need reasonable error recovery, as per #119
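A minimal sketch of the throttling idea above: re-parse only after the editor has been idle for roughly 500ms. The names and structure here are illustrative, not the playground's actual code; in the browser the timer tick would live on the JavaScript side.

```rust
use std::time::{Duration, Instant};

/// Illustrative debouncer: re-parse only after no edits arrived for `delay`.
struct ParseDebouncer {
    last_edit: Option<Instant>,
    delay: Duration,
}

impl ParseDebouncer {
    fn new(delay_ms: u64) -> Self {
        Self { last_edit: None, delay: Duration::from_millis(delay_ms) }
    }

    /// Call on every keystroke/edit event.
    fn note_edit(&mut self) {
        self.last_edit = Some(Instant::now());
    }

    /// Call from a periodic timer tick; returns true once the pause is long enough.
    fn should_parse(&mut self) -> bool {
        match self.last_edit {
            Some(t) if t.elapsed() >= self.delay => {
                self.last_edit = None;
                true
            }
            _ => false,
        }
    }
}

fn main() {
    let mut debouncer = ParseDebouncer::new(500);
    debouncer.note_edit();
    assert!(!debouncer.should_parse()); // too soon, the user may still be typing
    std::thread::sleep(Duration::from_millis(600));
    assert!(debouncer.should_parse()); // idle long enough, safe to re-parse
}
```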
This may not be easy as Rhai is dynamic, so there is no type information. It is difficult to know how to filter the list.
It will just be unfiltered (i.e. only filtered based on whatever the user has already typed) to start with. If it ever gets advanced enough to heuristically guess the data types (which seems very unlikely), then perhaps it can be developed further. Another idea could be type annotation comments, but I don't intend to go into that here... I realized that it might be easier to code the playground if it can have direct access to the …
I can open up … Or do you think hiding a …
@alvinhochun, you could pull from my fork instead: https://github.com/schungx/rhai
The latest version has a new feature …
Perhaps you can consider splitting up the crates?
You mean a …
How about splitting the parser and AST stuff into a …
Hhhmmm... that probably should work, but I'd hesitate to split a simple project like Rhai into two even simpler crates. Unless there is an overwhelming reason...
Maybe it'll just be simpler for me to maintain a fork for the playground.
You don't have to. Just turn on … I fully intend to merge this feature into master a bit later.
I experimented with reusing the existing Rhai tokenizer code for syntax highlighting; it turns out to take quite some modifications. This is the modified code that "works" (if you diff it against the original code snippets you might be able to tell how it was changed). I also uploaded a build with this new syntax highlighting. (Compare with the previous Rust highlighting.)

The main difference is that CodeMirror (the editor I'm using) only gives one line to the tokenizer at a time. It also caches the tokenizer state per line so that it can restart the tokenization from any line. This means I had to change how block comments are handled. (I am also surprised to see that Rhai doesn't support multi-line strings...)

What do you think about refactoring the tokenizer in Rhai to allow the code to be reused? I'm thinking of splitting the "streaming" part in …
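To illustrate the per-line constraint described above, here is a rough sketch (not the playground's actual code) of carrying only the block-comment nesting depth across lines, which is the piece of state an editor like CodeMirror would need to cache at each line boundary:

```rust
/// Per-line highlighter state cached by the editor at each line boundary.
/// Only the depth of unclosed `/* ... */` comments needs to survive.
#[derive(Clone, Copy, Default)]
struct LineState {
    comment_nesting: usize,
}

/// Consume block-comment text from byte `i`, updating the nesting level.
fn eat_comment(bytes: &[u8], mut i: usize, nesting: &mut usize) -> usize {
    while i < bytes.len() && *nesting > 0 {
        if bytes[i..].starts_with(b"/*") { *nesting += 1; i += 2; }
        else if bytes[i..].starts_with(b"*/") { *nesting -= 1; i += 2; }
        else { i += 1; }
    }
    i
}

/// Tokenize one line, starting from the previous line's cached state, and
/// return (start, end, kind) spans plus the state to cache for the next line.
fn tokenize_line(line: &str, mut state: LineState) -> (Vec<(usize, usize, &'static str)>, LineState) {
    let bytes = line.as_bytes();
    let mut tokens = Vec::new();
    let mut i = 0;
    while i < bytes.len() {
        let start = i;
        if state.comment_nesting > 0 {
            // Continuing a block comment carried over from a previous line.
            i = eat_comment(bytes, i, &mut state.comment_nesting);
            tokens.push((start, i, "comment"));
        } else if bytes[i..].starts_with(b"/*") {
            // A new block comment opens on this line.
            state.comment_nesting = 1;
            i = eat_comment(bytes, i + 2, &mut state.comment_nesting);
            tokens.push((start, i, "comment"));
        } else {
            // Plain code: a real highlighter would run the Rhai tokenizer here.
            while i < bytes.len() && !bytes[i..].starts_with(b"/*") { i += 1; }
            tokens.push((start, i, "code"));
        }
    }
    (tokens, state)
}

fn main() {
    let mut state = LineState::default();
    for line in ["let x = 1; /* start", "still inside */ let y = 2;"] {
        let (tokens, next) = tokenize_line(line, state);
        println!("{line:?} -> {tokens:?}");
        state = next;
    }
}
```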
Let me diff it and have a look. Optimally, we'd like one code base that can serve multiple uses. The tokenizer is stable enough (i.e. it doesn't change much) that we can experiment. I'm not familiar with CodeMirror myself... can you list out a few aspects in … Offhand I can see the need to abstract out the …
Yes, it wasn't hard to do, but it burdens the scripting language with another obscure syntax. There hasn't been any call for it yet... So basically we need a new state that is returned together with the token, indicating whether parsing stopped in the middle of a multi-line comment or in valid text. I see you already have such an … And your idea of splitting off the parse state from the parser should work well. I'll start looking into the refactoring and give you a trial version in a bit.
Here is the API of the CodeMirror stream if you want to see it (and here is the binding in Rust).
It doesn't really matter for the parser. At the end of the stream, the tokenizer will start outputting …
Understood. We need some way to keep state so that the tokenizer knows it is starting inside a multi-line comment. All other tokens fit on a single line with no exceptions... maybe we'll also handle the case of multi-line strings with the same mechanism.
Yes.
Fine.
Whitespace is skipped during tokenizing anyway, but we need to keep it for multi-line comments and strings (in the future).
So I have a …
Right now, the tokenizer only tracks the starting position of the token. Do you need its length or the ending position of the token as well? Why not keep the literals? I don't think they hurt...
For string/character literals, maybe I should also include a mapping table of byte-range -> character position?
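Since Rust strings are indexed by bytes while the editor works in character columns, a byte-range -> character-position mapping would bridge the two. A small illustrative helper (not part of either codebase):

```rust
/// Map a byte offset in a UTF-8 string to a character (code point) index.
/// Returns None if the offset falls in the middle of a multi-byte character.
fn byte_to_char_index(s: &str, byte_offset: usize) -> Option<usize> {
    s.char_indices()
        .position(|(b, _)| b == byte_offset)
        .or_else(|| (byte_offset == s.len()).then(|| s.chars().count()))
}

fn main() {
    let s = "héllo";                               // 'é' occupies 2 bytes in UTF-8
    assert_eq!(byte_to_char_index(s, 0), Some(0)); // 'h'
    assert_eq!(byte_to_char_index(s, 3), Some(2)); // first 'l' (byte 3, char 2)
    assert_eq!(byte_to_char_index(s, 2), None);    // middle of 'é'
    println!("ok");
}
```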
This is not needed; in fact, CodeMirror will not call the tokenizer with a stream at its ending position. I expect never to get an EOF.
Sorry, I did not explain this clearly. The CodeMirror tokenize process works like this:
This is what I meant by "stream tracks column position internally". The position information is external to the tokenizer so it doesn't need to do any tracking.
Extracting the literals is a little bit of extra work, but I guess it's fine.
This won't really work with CodeMirror's tokenization process. Perhaps I can illustrate with an example of what it would need:
It's just something nice to have, but if you think it is too complicated to be added to the built-in tokenizer you can leave it out and I'll see if it can be tacked on.
Yes it will, if there is only whitespace till the end. The tokenizer will not find anything and will return …
You can take a look at this branch: https://github.com/schungx/rhai/tree/tokenizer

The … You need to implement the … States are kept in the … type. Multi-level nested comments are supported, automatically handled at the beginning of the next line; in fact, the …
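As a rough illustration of feeding such a tokenizer one editor line at a time, here is a sketch of a single-line character stream with one-character lookahead. The trait and method names on the `tokenizer` branch are not shown in this thread, so the names below (`get_next`, `peek_next`) are assumptions, not the branch's actual API.

```rust
/// Hypothetical one-line character stream with single-character lookahead,
/// of the kind a streaming tokenizer interface could be implemented over.
struct LineStream<'a> {
    chars: std::str::Chars<'a>,
    peeked: Option<char>,
}

impl<'a> LineStream<'a> {
    fn new(line: &'a str) -> Self {
        Self { chars: line.chars(), peeked: None }
    }

    /// Return the next character, consuming it.
    fn get_next(&mut self) -> Option<char> {
        self.peeked.take().or_else(|| self.chars.next())
    }

    /// Return the next character without consuming it.
    fn peek_next(&mut self) -> Option<char> {
        if self.peeked.is_none() {
            self.peeked = self.chars.next();
        }
        self.peeked
    }
}

fn main() {
    let mut stream = LineStream::new("let x = 42;");
    assert_eq!(stream.peek_next(), Some('l'));
    assert_eq!(stream.get_next(), Some('l'));
    assert_eq!(stream.get_next(), Some('e'));
}
```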
You are right. I guess I didn't notice it because I didn't actually try making it a hard error.
Thanks for the refactor; it is almost what I needed, but there are some issues:
On an unrelated note, I would like to be able to list and inspect the script-defined functions inside the AST.
OK. Done.
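For reference, listing script-defined functions from a compiled AST looks roughly like this. The `iter_functions` call is the feature being discussed, but its exact item type has changed across Rhai versions, so this sketch only prints the debug representation rather than relying on specific fields.

```rust
use rhai::Engine;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let engine = Engine::new();
    let ast = engine.compile(
        r#"
            fn add(a, b) { a + b }
            fn greet(name) { print("hello, " + name); }
        "#,
    )?;

    // Iterate over the script-defined functions in the AST.
    // (The iterator's item type differs between Rhai versions.)
    for f in ast.iter_functions() {
        println!("{:?}", f);
    }
    Ok(())
}
```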
Looks like you forgot to mark the new …
OK fixed!
For an IDE, I think it might be easier just to write a standard Language Server Protocol plugin for the Rhai syntax, so that it can be used with VS Code, Eclipse, etc. I remember reading about the TextMate grammar and it is extremely complicated... I wonder if there is something that can generate at least a skeleton based on some C-like language...
I don't really intend to implement a full IDE on the playground, that'd be crazy (it is a "wish list" for a reason). I don't have experience with writing LSP servers and I'm not too interested for now.

As for the playground, the current build seems functional enough. What other things would you want for a first release? I would want some example scripts to be included (selectable from the interface) and some kind of integration with the book. The styling and script execution output could use some improvements too, but I don't really have any idea what to change.
I think I want to keep it just for test builds going forward. When I can finalize an initial release build, I'd probably put it under …
Error reporting could use a lot of improvement. CodeMirror comes with an interesting Linter addon that I would like to try. I've noticed, however, that the error positions are a bit off. For example, if you make an invalid escape sequence in a …
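A sketch of how parse errors could be turned into linter-style annotations. The `LintAnnotation` shape is purely illustrative (it is not CodeMirror's API), and the error position is read from Rhai's `ParseError`, assuming its public position field and `line()` accessor match the version in use.

```rust
use rhai::Engine;

/// Illustrative annotation shape; a real linter adapter would map this onto
/// whatever the editor's linter addon expects.
#[derive(Debug)]
struct LintAnnotation {
    line: usize,
    message: String,
}

fn lint(script: &str) -> Vec<LintAnnotation> {
    let engine = Engine::new();
    match engine.compile(script) {
        Ok(_) => Vec::new(),
        Err(err) => {
            // Rhai stops at the first parse error, so at most one annotation
            // is produced until the parser gains error recovery.
            let line = err.1.line().unwrap_or(1);
            vec![LintAnnotation { line, message: err.to_string() }]
        }
    }
}

fn main() {
    // `\q` is not a valid escape sequence, so this should produce one annotation.
    for a in lint(r#"let s = "bad \q escape";"#) {
        println!("line {}: {}", a.line, a.message);
    }
}
```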
That works too. I can just put it inside the book. But you'll have to build it, though... Or, I think it is best to keep it under your …
You caught a bug here. The error position is usually quite accurate... this is the first time in a long while that I've found one that's off... (EDIT) It is fixed.
Yes, right now it doesn't attempt to recover from errors. It just bugs out of there. Technically speaking, we should try to recover so more errors can be listed, but that would complicate the parser quite a bit, as I can't simply …
In the past day, I converted the playground to use Vue.js with a few minor changes and added the ability to stop an async run. My plan following this is to try …

I've also set up a GitHub Action to automatically deploy to https://alvinhochun.github.io/rhai-playground-unstable/. Also because of this, the built files are available for download as artifacts (latest build at time of writing). I think I'll keep …
I've added the ability for the playground to be embedded on another page (see example), though I haven't yet looked at how it can be included from mdBook. (Perhaps it is best to open a separate issue for this?) Note: you probably don't want the playground to be loaded immediately on page load, because the resources are a bit heavy compared to the rest of the book.

It is limited in that it can only run plain scripts, without extra modules and without customization in Rust. Custom modules in plain Rhai script should be doable in the future, but I don't think it will ever be possible to demo something like registering a custom type without making a specific build with the Rust type already built in. (rhaiscript/playground#2)
How about something like a "click here to load the Playground" button? I'll start figuring out how to embed JS scripts and custom HTML into mdBook.

There is a chapter in the Rhai book on the playground: https://schungx.github.io/rhai/start/playground.html
Right now it only contains a link. I think this can be beefed up with an embedded Playground!
No, I don't think it'll be possible either, short of compiling the Rust code.
Well, I was thinking of allowing Rhai code snippets in the Book to be loaded into and run on the playground, kind of like how they do with the Rust snippets. There'd be a "play" button next to the code snippet that loads the playground inline with the code snippet. For the playground page, I think a link should be enough...
That would actually be cool! But then I'd probably need to revise the code snippets to be higher quality. For example, right now I'm just doing:

```
let x = 42;
x == 42;        // x is 42
```

To be a self-running snippet, it probably needs to be:

```
let x = 42;
print(x);       // prints 42
```

In a screenful of examples, it may actually be less readable. I'll have to think about this. And also, a lot of Rhai scripts in the book depend on registered functions to work, so unless we build separate WASM modules with different functions, we'll have a problem running them...
Perhaps … Or perhaps a REPL-style execution would work better for some snippets?
For those examples I think you can just not enable the playground. We can make it work if you are OK with building a playground with those functions included, but synchronizing the code between the build and the book will be a bit of trouble. I think it'll need a tool to automatically generate the code from the snippets in the book. Let's perhaps ignore that for now.
There is …
Yup.
Hi @alvinhochun, if I add a lifetime and a reference to …
Yes, probably. The highlighting process requires …
OK then, I'll find a way to avoid having to do this.
I'm thinking perhaps I should make a Reddit post about the playground. How do you feel about this?
This is a great idea. However, you really need a semi-permanent URL for this, because you don't want it to change later on. Have you decided on the final URL for the playground? Right now it is a couple of links to different versions. Maybe have a landing URL that is the current stable version, then a link to "vnext" or "experimental" on the landing page?
Also, right now there is a noticeable pause when the user presses … I'd suggest either pre-loading that WASM package or putting up a spinning loading GIF...
I suppose I will be keeping https://alvinhochun.github.io/rhai-playground-unstable/. If you think it is necessary to have one called "stable", I would just copy the current build to https://alvinhochun.github.io/rhai-playground/.
How about I just print "Initializing Web Worker..." to the output box for now?
Suggestion: right now it says "Running" on the … Therefore, you can change that text to "Initializing, please wait...", and also put a message in the output box. That should do it.
OK, I've dealt with some of the issues. Most notably, there is now only one .wasm file, which saves about 150 KiB of (gzipped) download when loading the worker. Frequent output printing no longer takes as many resources (you can run …
I made https://alvinhochun.github.io/rhai-demo/ a redirect. You can change all references to the new URL https://alvinhochun.github.io/rhai-playground-unstable/. I'll probably make a Reddit post later today...
I implemented a modules proof-of-concept on the … So, I want to allow scripts to be added as modules with user-specified names, which can then be used when running the main script. Of course, the Playground has to provide a way to manage the modules, perhaps even the ability for them to be exported and re-imported. Does that make sense? Or would you suggest I work on something else before that?
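A sketch of the underlying mechanism: compiling a user script into a module and registering it under a user-chosen name via a static module resolver. Exact API details differ between Rhai versions, so take this as the general shape (`Module::eval_ast_as_new` plus `StaticModuleResolver`) rather than the playground's actual code; the module name `helpers` is made up.

```rust
use rhai::module_resolvers::StaticModuleResolver;
use rhai::{Engine, Module, Scope};

fn main() -> Result<(), Box<rhai::EvalAltResult>> {
    let mut engine = Engine::new();

    // Compile a user-provided script and turn it into a module.
    let module_ast = engine.compile("fn double(x) { x * 2 }")?;
    let module = Module::eval_ast_as_new(Scope::new(), &module_ast, &engine)?;

    // Register it under a user-specified name so scripts can `import` it.
    let mut resolver = StaticModuleResolver::new();
    resolver.insert("helpers", module);
    engine.set_module_resolver(resolver);

    // The main script can now import the module by the name the user gave it.
    let result: i64 = engine.eval(r#"import "helpers" as h; h::double(21)"#)?;
    println!("{result}"); // prints 42
    Ok(())
}
```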
That would make it more than a simple Playground, though. You're going to be keeping a local store of scripts on behalf of users. You may actually have a "standard" list of modules and hook up a module resolver so users can load different pre-built modules. But allowing users to register their own modules and use them in their scripts is going to add a whole new dimension of functionality. You might as well also have script storage.
But still, having modules working in WASM is way cool!
I had given script storage a bit of thought before. The "simple" way is to use … For now, I wanted to just make per-session (i.e. until page refresh) script storage to start with. I can perhaps implement something like drag-and-drop to add files as scripts to make it easier to load user modules. If you want to suggest any "standard modules", I can include them too. There is also the case of embedding: if a page embeds the playground, I would like the page to be able to provide predefined modules, and also not have it affect the local storage.
Well, if you do that, people are gonna hate you... nobody wants to redo a whole bunch of modules the next time round. I think IndexedDB is probably the best. Actually, localStorage shouldn't be too bad. Typical scripts are very short; they should fit comfortably inside localStorage.
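For reference, a minimal sketch of persisting a script with the `web-sys` localStorage bindings on the WASM side. This assumes the `Window` and `Storage` features of `web-sys` are enabled and a wasm32 target; the storage key name is made up for illustration.

```rust
use wasm_bindgen::JsValue;

/// Save the current script under a fixed key in the browser's localStorage.
pub fn save_script(source: &str) -> Result<(), JsValue> {
    let window = web_sys::window().ok_or_else(|| JsValue::from_str("no window"))?;
    let storage = window
        .local_storage()?
        .ok_or_else(|| JsValue::from_str("localStorage unavailable"))?;
    storage.set_item("rhai-playground.script", source)
}

/// Load the previously saved script, if any.
pub fn load_script() -> Result<Option<String>, JsValue> {
    let window = web_sys::window().ok_or_else(|| JsValue::from_str("no window"))?;
    let storage = window
        .local_storage()?
        .ok_or_else(|| JsValue::from_str("localStorage unavailable"))?;
    storage.get_item("rhai-playground.script")
}
```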
Pinging @alvinhochun... The latest drop adds closures support. I'd be really interested to see it running in the playground! :-D
Done, I've bumped it to Rhai 0.18.1 and it's deployed. (As you can see, there had not been any changes recently because I shifted my focus to another project, but I will get back to it soon.)
Since the Playground is now part of the org, I'm closing this issue.
The idea is to make a playground for Rhai scripts that runs in a web browser and showcases the features of the Rhai scripting language. It might also be possible for others to repurpose it as a Rhai script editor.
I've started an attempt on https://github.com/alvinhochun/rhai-playground, but I am not making any promises.
The master branch gets automatically built and deployed to: https://alvinhochun.github.io/rhai-playground-unstable/
I might irregularly upload specific builds to: https://alvinhochun.github.io/rhai-demo/
Wish list (not a roadmap or to-do list):