
improve runtime expression error description - fix #236 #240

Closed
wants to merge 1 commit into ariatemplates:master from b-laporte:issue236

Conversation

b-laporte
Member

This PR improves runtime error descriptions involving expressions. This is done in three steps:

  • first, by adding a toCode() method to each expression. This method simply rebuilds the expression code from the compiled version (a rough sketch of the idea is shown after this list).
  • second, by modifying the compiler to add an extra parameter to expressions that reference global objects. In the previous implementation the "global" identifiers were not passed, which prevented the expression from being shown in the error message (this was the case in issue #236, "Problem with expression evaluation should contain expression in question").
  • and last, by updating the current error descriptions to include a call to toCode()
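
For illustration, here is a minimal sketch of what a toCode() method could look like on a compiled expression node. The node types and names below are hypothetical, not the actual code from this PR:

```js
// Hypothetical compiled expression nodes (illustration only, not the PR code).
// Each node can rebuild a readable version of its source for error messages.

function Identifier(path) {
    this.path = path; // e.g. "person.firstName"
}
Identifier.prototype.toCode = function () {
    return this.path;
};

function BinaryExpression(operator, left, right) {
    this.operator = operator; // e.g. "+"
    this.left = left;
    this.right = right;
}
BinaryExpression.prototype.toCode = function () {
    return this.left.toCode() + " " + this.operator + " " + this.right.toCode();
};

// Example of how an error description could use it:
var expr = new BinaryExpression("+", new Identifier("person.firstName"), new Identifier("person.lastName"));
console.log("Error evaluating expression: " + expr.toCode());
// -> Error evaluating expression: person.firstName + person.lastName
```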

To be complete, a second change should also be made to the compiler to pass the file name to the compiled template function (and then include it in all error messages) - but this can be done in a separate PR, as that code is independent from the current changes.

@PK1A
Contributor

PK1A commented Jul 15, 2014

I haven't reviewed the code yet, but looking at the tests this change rocks! It should really make life easier when dealing with buggy expressions.

Restarted Travis-CI as there was (again) a problem with Android...

@coveralls

Coverage Status

Coverage increased (+0.37%) when pulling f20a6ea on b-laporte:issue236 into a223a0b on ariatemplates:master.

@benouat
Member

benouat commented Jul 15, 2014

yes, we are above 90% coverage with this PR 👍 good job!


@PK1A
Contributor

PK1A commented Jul 15, 2014

I'm going to merge this since I think it is very much needed from the dev point of view. Still, I'm a bit worried by the amount of code it adds (ex.: b-laporte@f20a6ea#diff-140f5dd94b099dbe1a0557ace1e0c3daR722).

I would be very keen on experimenting with runtime expression processing, as I'm starting to suspect that it might result in smaller code for applications (framework + compiled templates) and shouldn't degrade performance.

In any case I've got nothing to back up such claims, so we will have to have a solid infrastructure for measuring framework / code size and runtime performance before making any changes.

@PK1A
Contributor

PK1A commented Aug 1, 2014

@b-laporte so I've spent the last couple of days looking more into the expression handling logic, and I think that I've come up with an alternative implementation based on a Pratt parser. I've pushed a POC here and I'm quite excited about the result since:

  • its runtime part is < 120 LOC
  • all the expressions that one could imagine "just work": one can use brackets at will, pipes are interpreted everywhere, etc. Tests show how many use-cases are handled.
  • the Pratt parser generates an AST that has many nice properties:
    • very small footprint (I believe that we could produce smaller compiled templates)
    • very, very easy to extend (after understanding the algorithm, that is...), so I'm pretty confident that we could implement many features we've discussed so far (ex.: assignments, [1..100], etc.) with ease, that is, in <2-4h per feature
    • the AST makes it very, very easy to do verification checks to avoid access to "dangerous" objects (window, Function, etc.) so we could have our security part covered
    • the AST makes it trivial to figure out which objects on a scope need to be observed (see the sketch after this list)
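
As a rough illustration of these last two points (the AST node shape here is made up for the example, not the one produced by the POC), such checks boil down to small recursive walks over the tree:

```js
// Hypothetical AST nodes: { type: "Identifier", name: "foo" } or
// { type: "Binary", operator: "+", left: ..., right: ... }
var FORBIDDEN = { window: true, Function: true, eval: true };

// security check: refuse expressions that reference "dangerous" globals
function checkSafety(node) {
    if (!node) return;
    if (node.type === "Identifier" && FORBIDDEN[node.name]) {
        throw new Error("Access to '" + node.name + "' is not allowed in expressions");
    }
    checkSafety(node.left);
    checkSafety(node.right);
}

// observation: collect the scope identifiers the expression depends on
function collectObservedNames(node, names) {
    names = names || [];
    if (!node) return names;
    if (node.type === "Identifier") names.push(node.name);
    collectObservedNames(node.left, names);
    collectObservedNames(node.right, names);
    return names;
}
// Example: for the AST of "a + b * 2" -> ["a", "b"]
```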

The best part is that decoupling expression parsing from other parts of the code-base opens doors to really exciting opportunities: ex.: we could install observers using the current algorithm in browsers that don't support O.o and switch to O.o in browsers that have it (Chrome for now). Not saying that this is a 5-minute job but at least the path is paved.

I didn't look too closely at perf numbers, but looking at the algorithm I can't think of any obvious perf bottlenecks. Lexing is O(n), parsing is O(n) and evaluation is O(n) as well.

I'm really happy about those findings as I believe that we can have a rock-solid expression parser with a smaller footprint, so please, if you are planning any bigger changes in the areas touching expression parsing / observing, I would like us to sync up.
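
For readers unfamiliar with the technique, here is a minimal, self-contained sketch of how a Pratt (top-down operator precedence) parser works. It is not the POC code; the token and AST shapes are made up for the example and only a few binary operators and parentheses are handled:

```js
// Minimal Pratt parser sketch (illustration only, not the POC code).
// Tokens are assumed to be pre-lexed, e.g. { type: "identifier", value: "a" },
// { type: "number", value: 2 }, { type: "operator", value: "+" }.

var BINDING_POWER = { "+": 10, "-": 10, "*": 20, "/": 20 };

function parse(tokens) {
    var pos = 0;

    function peek() { return tokens[pos]; }
    function next() { return tokens[pos++]; }

    // "nud": how a token behaves at the start of a (sub-)expression
    function nud(token) {
        if (token.type === "number") return { type: "Number", value: token.value };
        if (token.type === "identifier") return { type: "Identifier", name: token.value };
        if (token.value === "(") {
            var inner = expression(0);
            next(); // consume ")"
            return inner;
        }
        throw new Error("Unexpected token: " + token.value);
    }

    // "led": how an operator combines an already-parsed left side with what follows
    function led(left, token) {
        return {
            type: "Binary",
            operator: token.value,
            left: left,
            right: expression(BINDING_POWER[token.value])
        };
    }

    // core loop: keep extending the left-hand side while the next operator
    // binds more tightly than the current right binding power
    function expression(rbp) {
        var left = nud(next());
        while (peek() && BINDING_POWER[peek().value] > rbp) {
            left = led(left, next());
        }
        return left;
    }

    return expression(0);
}

// Example: tokens of "a + b * 2"
// parse([
//   { type: "identifier", value: "a" }, { type: "operator", value: "+" },
//   { type: "identifier", value: "b" }, { type: "operator", value: "*" },
//   { type: "number", value: 2 }
// ]);
// -> Binary("+", Identifier("a"), Binary("*", Identifier("b"), Number(2)))
```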

@marclaval
Contributor

Nice 👍

@benouat
Member

benouat commented Aug 1, 2014

@PK1A great job on that one!! pretty impressive

@PK1A
Contributor

PK1A commented Aug 20, 2014

OK, since we are integrating a different approach to expression parsing where we will have the expression string readily available, I think we can close this PR. But it was very valuable anyway, since it triggered my quest for an alternative parsing technique.

PK1A closed this Aug 20, 2014