diff --git a/.markdownlint-cli2.jsonc b/.markdownlint-cli2.jsonc index a906579f..9dcace06 100644 --- a/.markdownlint-cli2.jsonc +++ b/.markdownlint-cli2.jsonc @@ -5,7 +5,7 @@ "ignores": [ "node_modules/**", "meetings/201*/*.md", - "meetings/202[0-2]*/*.md", + "meetings/202[0-1]*/*.md", "scripts/test-samples/*" ] } diff --git a/meetings/2022-01/jan-24.md b/meetings/2022-01/jan-24.md index 4387b52e..0dbbeee3 100644 --- a/meetings/2022-01/jan-24.md +++ b/meetings/2022-01/jan-24.md @@ -112,7 +112,7 @@ Presenter: Kevin Gibbons (KG) - [repo](https://github.com/tc39/ecma262) - [slides](https://docs.google.com/presentation/d/1aCtiVDE5K8WlykqV4G2YTy6Y4C8AeaHWskxZJwNzxKo/edit) -KG: So it's relatively short because the last month was not a time of doing things. Couple of things. The first one is not small, the can-call-user-code annotation, but I will talk about that in detail later. Other than that, we've committed a few more PRs that standardize some of the phrasing and notation we use within the spec, which is something that we're always trying to do more of just so that there's fewer different ways of writing the same thing. Also, we have landed one normative change. This one was discussed at the last meeting. This was the update to the script extensions values in the regex properties, as well as adding a note describing the process by which that table has historically been updated and will be updated in the future. In particular: we are not going to ask for consensus from the committee to make future normative changes that follow that process. We'll just do it whenever there's a new unicode version. +KG: So it's relatively short because the last month was not a time of doing things. Couple of things. The first one is not small, the can-call-user-code annotation, but I will talk about that in detail later. Other than that, we've committed a few more PRs that standardize some of the phrasing and notation we use within the spec, which is something that we're always trying to do more of just so that there's fewer different ways of writing the same thing. Also, we have landed one normative change. This one was discussed at the last meeting. This was the update to the script extensions values in the regex properties, as well as adding a note describing the process by which that table has historically been updated and will be updated in the future. In particular: we are not going to ask for consensus from the committee to make future normative changes that follow that process. We'll just do it whenever there's a new unicode version. KG: Upcoming work #1796 for completion record reform. I've mentioned this repeatedly, Michael has been making excellent progress on it. I also want to mention that as part of this. I will be updating ecmarkup in a way that will have a small breaking change for consumers. So for anyone using ecmarkup in a proposal there will be a slightly different Syntax for invoking it from the command line to get the 262 biblio, that is, the ability to link against the specification. That ability has not been updated since 2015 or 2016. So it doesn't have the right list of abstract operations and so on. As part of #1796 or I guess, coincidental with it, I will be updating ecmarkup so that the automatic linking against 262 is fixed. That's going to entail a breaking change for proposals. So just don't update your proposal a new major version of ecmarkup, if you don't don't know what you're doing. @@ -126,7 +126,7 @@ SFC: Okay, and then one other follow-up question. 
Have any delegates raised conc KG: No one raised that concern during the discussion that I can recall. I can't speak for implementers if they have this concern and just failed to raise it. -MF: Are there any implementers that can speak to that concern that SFC referencing on the number formatting? +MF: Are there any implementers that can speak to that concern that SFC referencing on the number formatting? YSV: I can't speak to that concern specifically, but we've recently had some staff changes. So we are catching up on the Ecma, 402 again. Let me ask internally and I can verify whether or not those concerned on our side. @@ -198,7 +198,7 @@ CM: It’s 2022 and all is well. Or at least all is well with respect to JSON. C Presenter: Rick Waldron (RW) -RW: Test is has actually some interesting Updates this time around. Usually, it's just like, yeah, that's tested being written. Things are good. Stage 3 stuff is progressing. That is all true. However, I am excited to tell you all about two very specific elements. First is sort of the onboarding of a whole Squad. It's called of the squad of galleons who I made committers to test262, they all have write access to test262 now because it makes sense, they are now the Handling test262 to testify. think for PA Gribble stuff that had previously been working on. So that is very exciting. So just a call-out. I just assumed that this fellow that I'll say this correctly because I actually don't know the person's name. Every time I doubt anyone does. MS2ger who has And writing a lot of Intl tests and now Temporal tests, tests, exciting SHO, who fun story. We met like a million years ago at a JavaScript meet up. She will also be now committed to test262 to work on the same stuff, stuff, and with Igalia Which is, I think right now in total and Temporal stuff, obviously, they'll be expecting out RCA as well. And gentlemen, you just heard from a few moments ago. There's USA. So those folks are now committers ontest262 and the that's exciting. So the bigger that team gets all the focus of more people working on tests262, the larget my heart grows like in a Actually healthy weight, like in a kind of righteous. In addition to that, there has been a proposal to create a test 262 maintainers group, and we're going to have our inaugural meeting next week. They're excited. So yeah, that's big moves there and I'm just excited because it just makes more and frequent work going into the test262 system. Oh, and one other minor thing after the last TC39 needed, had a like a private session with YSV and some for students. And I'm hoping that they will also start contributing to test262 as well. I thought that are our sort of like Q&A session went. Really, really well, and I hope that they feel inspired to continue or just pulled start, and in some cases continue playing tests. So yeah, test 262 is really healthy right now, which is Awesome, very pleasing to be able to say. That's that. Does anybody from my squad You want to add to this report because want to again? +RW: Test is has actually some interesting Updates this time around. Usually, it's just like, yeah, that's tested being written. Things are good. Stage 3 stuff is progressing. That is all true. However, I am excited to tell you all about two very specific elements. First is sort of the onboarding of a whole Squad. It's called of the squad of galleons who I made committers to test262, they all have write access to test262 now because it makes sense, they are now the Handling test262 to testify. 
think for PA Gribble stuff that had previously been working on. So that is very exciting. So just a call-out. I just assumed that this fellow that I'll say this correctly because I actually don't know the person's name. Every time I doubt anyone does. MS2ger who has And writing a lot of Intl tests and now Temporal tests, tests, exciting SHO, who fun story. We met like a million years ago at a JavaScript meet up. She will also be now committed to test262 to work on the same stuff, stuff, and with Igalia Which is, I think right now in total and Temporal stuff, obviously, they'll be expecting out RCA as well. And gentlemen, you just heard from a few moments ago. There's USA. So those folks are now committers ontest262 and the that's exciting. So the bigger that team gets all the focus of more people working on tests262, the larget my heart grows like in a Actually healthy weight, like in a kind of righteous. In addition to that, there has been a proposal to create a test 262 maintainers group, and we're going to have our inaugural meeting next week. They're excited. So yeah, that's big moves there and I'm just excited because it just makes more and frequent work going into the test262 system. Oh, and one other minor thing after the last TC39 needed, had a like a private session with YSV and some for students. And I'm hoping that they will also start contributing to test262 as well. I thought that are our sort of like Q&A session went. Really, really well, and I hope that they feel inspired to continue or just pulled start, and in some cases continue playing tests. So yeah, test 262 is really healthy right now, which is Awesome, very pleasing to be able to say. That's that. Does anybody from my squad You want to add to this report because want to again? RPR: Looks like we've also got another test 262 agenda item coming up. @@ -210,11 +210,11 @@ SHO: Oh, you're just interrupting me talking for you. Yes, if yeah, that's all t RW: yep, so that that is I think all the update that I need to get. Like I said, if anybody from g… like to give update status of temporal testing or interesting you have time to have time to do that. This is your moment. If not, we can just give the time back. -SHO: I think we can give it back to the committee. +SHO: I think we can give it back to the committee. RW: All right, there it is. Time is see the rest of our time to the committee. Thank you very much -YSV: I have one response on the queue, which is regarding the students. So I'll poke them and I'll see if they can find some time. But because they have other classes now, they have limited time to work on this. However, we will probably run that class again in the near future, so we may get new students and some of them may stay, see if we can get a retention pipeline going with Leiden University. +YSV: I have one response on the queue, which is regarding the students. So I'll poke them and I'll see if they can find some time. But because they have other classes now, they have limited time to work on this. However, we will probably run that class again in the near future, so we may get new students and some of them may stay, see if we can get a retention pipeline going with Leiden University. RW: Fantastic. That's great. Okay, great. @@ -226,7 +226,7 @@ Presenter: Brian Terlson (BT) - [repo](https://tc39.es/code-of-conduct/#code-of-conduct-committee) -BT: Hello friends. So this last little bit has been pretty quiet. We do have we are working with one individual. 
And I think for those that are involved in that I think aware of what's happening. But other than that, I really wanted to ask that anyone who has an interest and helping the code of conduct committee out. Please reach out the code of conduct committees. Email address is in our code of conduct. I can post it in Matrix as well. are often finding ourselves short-staffed when there is, you know an issue that we need to discuss. So if helping everyone be happy and helping productive Members of our committee is something that you think you might be passionate about, please reach out. We'd love to have help. if any other members of the committee would like to speak feel free. Otherwise that is it for me. +BT: Hello friends. So this last little bit has been pretty quiet. We do have we are working with one individual. And I think for those that are involved in that I think aware of what's happening. But other than that, I really wanted to ask that anyone who has an interest and helping the code of conduct committee out. Please reach out the code of conduct committees. Email address is in our code of conduct. I can post it in Matrix as well. are often finding ourselves short-staffed when there is, you know an issue that we need to discuss. So if helping everyone be happy and helping productive Members of our committee is something that you think you might be passionate about, please reach out. We'd love to have help. if any other members of the committee would like to speak feel free. Otherwise that is it for me. RPR: So we have a request for volunteers for the COC committee. @@ -260,7 +260,7 @@ MM: Well, I so There's enough proposals that where the proposal is proposing to JHD: Well, so at a time when the proposal is finalized, then is, it great to have a spec compliant polyfill that installed like, for example, … -MM: I understand that we're talking about an earlier phase, but I'm making this comment specifically about that earlier phase where it's just for experimenting and learning about right, +MM: I understand that we're talking about an earlier phase, but I'm making this comment specifically about that earlier phase where it's just for experimenting and learning about right, JHD: There's very little to be gained from… let's say you're talking about `Array.prototype.flat`, using it as a standalone function, that takes an array and the arguments, versus installing on `Array.prototype` and doing `.flat`. The difference between those two is ergonomic, but you don't need to actually try it out in your code. @@ -278,7 +278,7 @@ JHD: correct. And also it's that this is intended to be a strong recommendation, MM: Okay. This makes sense to me. I agree. I support it with both parts. -JHD: thank you very much. +JHD: thank you very much. PFC: I'm not sure that "we already understand that having methods on the prototype is good ergonomics" is going to be a compelling argument for, say, community members who are interested in trying out the feature in a playground environment. But I can say, what we did with Temporal before it reached stage 3 was we had a polyfill that did not install a global Temporal object, and then in the playground environment we loaded the polyfill and then— just only in that playground environment— we installed it on to the global. People who want to try out the full experience of having, say, an .at method on String.prototype, would be able to load a polyfill that gives them that function, and then just with a one-liner install it, like `String.prototype.at = whatever`. 
I agree with JHD that we want to avoid polyfills being published that do that installation already, right? @@ -286,7 +286,7 @@ JHD: The playground approach I'm completely content with, because we can never s JSC: So I wanted to add on to I guess JHD already covered this though that most proposals that you know, are about a new function or whatever. Most of the value is in this and testing it in isolation. There are on the thread. If you want to read through it, like back and forth. About which proposals it's super important to add on the global cross-cutting cross-cutting like certain well known symbols or certain things involving method to keep the Prototype chains, but really, as far as I can tell the, the large majority of proposals involving new functions or syntax even do not involve modifying the global environment. Now having said that I have a future thing on the Queue about making it more specific, but this item is done. -JHX: Okay, Okay, and then. Yeah, just so I'm to a provided example, for example, if we want to upgrade its iterator protocol like double ended iterator proposal need to modify global Symbol object or submit may need to modify that or it's or it's hard to get the behavior. That's it. +JHX: Okay, Okay, and then. Yeah, just so I'm to a provided example, for example, if we want to upgrade its iterator protocol like double ended iterator proposal need to modify global Symbol object or submit may need to modify that or it's or it's hard to get the behavior. That's it. JHD: That's a fair point. And for that type of scenario as long as the symbol wasn't actually installed on `Symbol` itself, then making an arbitrary symbol and installing that symbol as a property on the various built-ins wouldn't actually create a web compat risk because that wouldn't collide with anything that TC39 would produce. The risk, though, is that if you put it as a string property on `Symbol` and then we want to use that string property name, then there might be a collision. But yeah, there's there's definitely a lot of gray area here, here, but the my experience is that the folks who can do this safely, don't need this guidance to do it correctly and the guidance is meant to be for those who have not done it or, or don't know how to do it safely. @@ -300,19 +300,19 @@ YSV: So I think I think like this is just a clarification and also PFC. I want I JHD: Yeah, I think that's very true. I think the definition of polyfill does not actually have a single cohesive definition even now - defining it may be tricky, but I think defining what we want in stage one, I think is important. I agree with you that the understood meaning of polyfill has shifted since this document was written in 2013 or 2014 or whenever. -YSV: I would say we Define it for the scope of this document. This is just a local definition for this document. +YSV: I would say we Define it for the scope of this document. This is just a local definition for this document. JHD: Yeah, my general philosophy here has been that there's a lot of places in the spec and in these sort of documents where folks outside of TC39 use it as a justification for something, and because we are in some ways community stewards, it's important that we be aware of unintended consequences of our wording, and that we update our wording when we know of unintended consequences, to attempt to avoid them. JSC: So, I guess I have two items in a row, the first one quick reply to YS. One. Is that the current wording, We can go back and forth. 
So like obviously on the PR wording, I've made some like I made some suggestions that got command. I made some more suggestions, but the current wording, which I think can from me both avoid the word ‘must’ we use the word ‘should not be’, whatever the phrase. And the other thing is that the current wording actually for when I cited, doesn't use the word polyfill anymore. We could talk about whether it's valuable to conclude it again, but the way I saw it was that since polyfill is a confusing word like we could reference the what would finding we could? could point to articles, articles, or whatever. But like, you know, polyfill is something. Lots of things to lots of people. It might be best to just avoid it right now. The phrasing is demos and experimental reference implementations, which are a little more wordier than polyfill. But which I think might be good for me right now. So right now we should think about whether it's even valuable to include the word polyfill at all of this document, as far as, like, the finding on polyfills, Etc. That's my reply. -JSC: The other thing is, my next topic is I think there's an important distinction to be made between should not modify globals versus should not modify globals with feature detection. And by feature detection, mean, you know, like checking for the existence of some, some member on some built-in global object and then monkey patching or whatever if it doesn't exist. this, I think that that's the by far, the most dangerous web compatibility risk because like people modify even built in globals all the time without web compatibility risk because they're not feature detecting, they're doing it unconditionally. So with the browser environment changes, its Behavior is not going to change because they might be monkey patching but they're doing it unconditionally. So the browser environment doesn't matter anymore. So that's why talked about this in mind. Since the founding of the long where that I also added some more suggestions. I think it's important to call ??? and in particular as something to avoid. That's I think that's a super important distinction because I think that is the crux of the web compatibility risk as far as I can tell. +JSC: The other thing is, my next topic is I think there's an important distinction to be made between should not modify globals versus should not modify globals with feature detection. And by feature detection, mean, you know, like checking for the existence of some, some member on some built-in global object and then monkey patching or whatever if it doesn't exist. this, I think that that's the by far, the most dangerous web compatibility risk because like people modify even built in globals all the time without web compatibility risk because they're not feature detecting, they're doing it unconditionally. So with the browser environment changes, its Behavior is not going to change because they might be monkey patching but they're doing it unconditionally. So the browser environment doesn't matter anymore. So that's why talked about this in mind. Since the founding of the long where that I also added some more suggestions. I think it's important to call ??? and in particular as something to avoid. That's I think that's a super important distinction because I think that is the crux of the web compatibility risk as far as I can tell. JHD: So my reply on the queue is that the problem is that presence detection is not actually feature detection, and presence detection is the problem. 
If you actually are detecting it works that the way your code expects it to work, then theoretically it should be fine because either the newly provided built-in will do that and you won't polyfill it, or it won't do that and you will - and then either way your code keeps working. But to correctly, do feature detection is difficult and almost nobody does, so feature detection in general is probably best to discourage. JSC: Okay, so it's okay. Yeah, I think that's a good point. It. Yeah, yeah, that's a good point. -SYG: So, I have a few comments here, One is, I think you answered this already, JHD, but it's the only in recent memory is the only actual occurrence of this threat. The .at thing that you mentioned, from CoreJS, has. +SYG: So, I have a few comments here, One is, I think you answered this already, JHD, but it's the only in recent memory is the only actual occurrence of this threat. The .at thing that you mentioned, from CoreJS, has. JHD: the only one that will that I'm aware of that, at least recently, that would have that risk to disrupting our proposals, `core-js` has in the past speculatively polyfilled things that weren't very far in the stage process and that has caused other breakage - but not breakage to proposals. @@ -334,11 +334,11 @@ JHD: That sounds great. LEO: Yeah, hi, so I tried to get myself involved in the thread and trying to understand all the problems. I kind of agree with a lot of things that JHD say here. Also, JSC saying that this might be a thing. not only for API. But like, ref implementations because think there's a like we probably can get more examples for this from proposals that work through the sync text like ???, such as decorators, private Fields. I think we get more examples from that, and I think it's important to understand a message what we have today. And if I try to understand the scenario here, is that like we have corejs. Somehow using the process document to validate one thing in. Like, if we change the process document that will like feels like invalidates, their work for 8 years. I totally understand and respect their work, but at the same time, I don't want our process document validating like being used as a document to validate someone's work, like the process document should not generate anything that is compliant or not. The process just says, like how we want to advance things. The choice of words from what Jordans proposing In here, I kind of appreciate that and the same should also don't want someone to do some wordsmithing here and I totally understand the reasons and support that but think also saying like TC39 doesn't recommend anyone to do that is different from how it affects them. Like we are not making anything compliant or not. I think this is the message like the process document should be like this process documents should not be used to make to tell anything is compliant or not. I kind of support that for these reasons. I try my best to understand everything like how this got coreJS involved, but at the end, I still support these change. Like for these very reasons. -JHX: I agree that modifying Global is a problem, but essentially it's essential problem is how to Mark a PR as experimental. Currently. We do not have official way to do such things. I mean, currently mean, the pollyfills or the other reference implementation. There are just normal modules and you import them. So it have risk, but if we have some official way to mark and use the experimental API, it may be the solution. I don't know. 
How we could have that in it, but I really think this may be the real solution to this problem. +JHX: I agree that modifying Global is a problem, but essentially it's essential problem is how to Mark a PR as experimental. Currently. We do not have official way to do such things. I mean, currently mean, the pollyfills or the other reference implementation. There are just normal modules and you import them. So it have risk, but if we have some official way to mark and use the experimental API, it may be the solution. I don't know. How we could have that in it, but I really think this may be the real solution to this problem. JHD: I'm not sure it would be a solution because marking it is experimental would just be a way to tell people don't rely on it. But if they do and their website breaks, then it's still a web compat issue just the same. -JHX: Sorry. Yeah, so I think we need some strong mechanism. For example, the reference implementation need to readjust it to experimental design and and maybe we can't have a API for experimental. So you are the people use that and it could have some mechanism, for example, if it may have the expiration time. So people use where you see, it's more careful that way. Yeah, something like that. +JHX: Sorry. Yeah, so I think we need some strong mechanism. For example, the reference implementation need to readjust it to experimental design and and maybe we can't have a API for experimental. So you are the people use that and it could have some mechanism, for example, if it may have the expiration time. So people use where you see, it's more careful that way. Yeah, something like that. PHD: I'll be brief. I think I support the discussion about encouraging people, not to modify globals, I think it might not be obvious for people what options they have to not do that. And so, and I found the discussion for example, example, briefly about what Temporal did to be really interesting and helpful. And so I just like to suggest that maybe in addition to saying, what people shouldn't do, we could provide some options or recomendations about what what they should do based on the experience of the committee. @@ -361,7 +361,7 @@ JHD: Thank you. I will come back in the future plenary after we've workshopped s ## Process Clarification -Presenter: Philip Chimento (PFC) +Presenter: Philip Chimento (PFC) - [pr 32](https://github.com/tc39/process-document/pull/32) - [pr 1073](https://github.com/tc39/agendas/pull/1073) @@ -386,7 +386,7 @@ JHD: I do support the change, but I think that the deadline may be an issue in t PFC: I don't disagree with you, that it would be unfortunate if we had to delay it, but I think that phrasing it this way with the deadline puts the burden on the person proposing the change, rather than putting the burden on a delegate who might be uncomfortable giving their okay to something that they didn't feel that they had enough time to review. In practice — for example in the October meeting, FYT found a bug in the Temporal proposal just the day before our presentation. We fixed that and we asked if anybody had a problem with presenting that for consensus the same day and nobody had a problem with it. I think in practice, we do have this flexibility. Though I get what you're saying, on the other hand, I feel it would be equally unfortunate if I were able to propose some sort of major normative change just the day before the meeting and put the burden on other delegates to object to it on the basis of it not being on the agenda. 
That's my feeling about this, I don't know if other people see it differently. -SYG: I want to ask for a clarification from JHD about the urgency of such bugs. So suppose one were to be discovered. Where stage 3 normative changes would be urgent are things like something is, you know, I guess insecure or something has a major problem that we didn't foresee. I'm not sure in that case the plenary should be the first line of defense for that, like, if that such a, if such urgent concerns were surfaced regardless of whether it advances or if browsers, for example, or other implementations have shipped. I imagine we will try to undo that and that's a separate process than awaiting consensus. Like, if something is so urgent. It's like an insecure thing. We must not let it continue to be on the web. I don't think browser's are going to be Waiting for a consensus to do something about it. So what is it? +SYG: I want to ask for a clarification from JHD about the urgency of such bugs. So suppose one were to be discovered. Where stage 3 normative changes would be urgent are things like something is, you know, I guess insecure or something has a major problem that we didn't foresee. I'm not sure in that case the plenary should be the first line of defense for that, like, if that such a, if such urgent concerns were surfaced regardless of whether it advances or if browsers, for example, or other implementations have shipped. I imagine we will try to undo that and that's a separate process than awaiting consensus. Like, if something is so urgent. It's like an insecure thing. We must not let it continue to be on the web. I don't think browser's are going to be Waiting for a consensus to do something about it. So what is it? JHD: I wasn't speaking of the like when `SharedArrayBuffer`s were removed. I'm not talking about that kind of concern. I'm saying if one browser implements something and discovers that there needs to be - or that maybe we want a normative change. That in the mean time until the next plenary when it can be put on in for it in advance of the deadline, a second browser may implement it or a third and it may get to the point where we no longer are able to change it due to web compatibility concerns. In other words, the window to make normative changes for things that are shipped is small in general. And so I mean, hopefully we would all be able to make these decisions ad hoc, right? their case by case rather and like and then somebody wouldn't lock solely on the basis of the deadline for something like that and you know, so on and so forth, but it's kind of that's what I'm thinking of is losing the opportunity to make a normative change due to web compatibility concerns caused by the delay, does that I see clarify @@ -408,7 +408,7 @@ MM: okay. In that case I do not object. I think that being able to ask the commi YSV: Thank you, Mark. The queue is currently empty and we have five minutes left. Do we have any last comments that people want to get in before we close this topic? -LEO: I understand MM’s question. And like I am in favor of this. I think it's nice to be clear to Mark here. I think there is one catch. We are still creating documentation telling delegates, They're not required to wait on consensus for something that is added after the deadline. We can ask for the exception, yes, but there's still the chance of one delegate might come in and say, I didn't have time to reveal this work so I cannot wade in because of that. 
you can ask for the exception, but they also need to rely on like other people accepting this. +LEO: I understand MM’s question. And like I am in favor of this. I think it's nice to be clear to Mark here. I think there is one catch. We are still creating documentation telling delegates, They're not required to wait on consensus for something that is added after the deadline. We can ask for the exception, yes, but there's still the chance of one delegate might come in and say, I didn't have time to reveal this work so I cannot wade in because of that. you can ask for the exception, but they also need to rely on like other people accepting this. MM: I'm not understanding if your are you disagreeing with the stance that I took because I didn't hear disagreement? @@ -506,7 +506,7 @@ KG: I would prefer to have it behave differently than including an asynchronous SYG: You have the double await problem because you're assuming the omitted identity is the synchronous identity. -JSC: Yeah, that is a third alternative. Yeah, we could do that. I think that's reasonable. We could do that, I would like to get a temperature check about that result. And then perhaps, if I commit to that, would I be to get just get stage 3 now or what? I have to come back next plenary with the updated spec text. +JSC: Yeah, that is a third alternative. Yeah, we could do that. I think that's reasonable. We could do that, I would like to get a temperature check about that result. And then perhaps, if I commit to that, would I be to get just get stage 3 now or what? I have to come back next plenary with the updated spec text. SYG: It's a tangent from from that point, but for stage 3, when the editors were triaging proposals going for stage 3, when we looked at it on the agenda. I don't think it was marked as going for stage 3. So we have not reviewed it editorial yet. @@ -516,7 +516,7 @@ JHD: The identity function is like `x => x`, right? If you just stick an `async` JSC: It should be yes. So to that, that should be the same. -JHD: OK, I was originally in favor of the double `await` because I want the consistency there. It does seem completely compelling to me to say the identity function for `Array.from` is `x => x`, and for `Array.fromAsync` is `async x => x`. That seems very straightforward and intuitive to me. So I'm on board with that change. +JHD: OK, I was originally in favor of the double `await` because I want the consistency there. It does seem completely compelling to me to say the identity function for `Array.from` is `x => x`, and for `Array.fromAsync` is `async x => x`. That seems very straightforward and intuitive to me. So I'm on board with that change. JSC: so WH, just to clarify for you, what we're talking about is the behavior of the function, when no mapping callback is supplied. What should it be? What kind of call back? Should it be equivalent to no callback. if it should be a good one to call back. right now, Right now. If you don't supply in mapping callback, it's equivalent to supplying async, sync function, sync, identity function x to the X return X, but we are, but That necessitates us a winning every waiting, Everything twice. When we don't support. When we Supply, that synchronous identity function. And when we omit the mapping function. @@ -578,7 +578,7 @@ Presenter: Shu (SYG) - [issue](https://github.com/tc39/ecma262/issues/2555) -SYG: Okay, I have also have no slides for this. 
This is mainly to come back to when we brought it up the first time and MM had raised concerns that he would like more detailed review of the algorithm since which I believe he has done on a GitHub thread. thread. There are so, I was trying to get something actionable out of that thread and please correct me if I'm wrong MM my understanding of your current position is that still are in favor of some group within TC39 taking over the structure clone algorithm, but you are not in favor of the being with ecma262 to the document. Is that correct? +SYG: Okay, I have also have no slides for this. This is mainly to come back to when we brought it up the first time and MM had raised concerns that he would like more detailed review of the algorithm since which I believe he has done on a GitHub thread. thread. There are so, I was trying to get something actionable out of that thread and please correct me if I'm wrong MM my understanding of your current position is that still are in favor of some group within TC39 taking over the structure clone algorithm, but you are not in favor of the being with ecma262 to the document. Is that correct? MM: That is exactly correct. Thank you. Structured clone is incompatible with the semantics of JavaScript. It does not belong in the language, but the maintenance, the maintenance issue of keeping structure clone itself the language, which it needs to do is much better done us. So yes, you got my position, exactly. @@ -594,7 +594,7 @@ KG: MM I'd like to talk to you about proxy, but please go through the thread fir SYG: In the meantime I guess I would like to clarify our position here. So structured clone as it exists today as specified by the HTML specification works a certain way, and it's not open to being changed because the web platform depends on it. That is the existing behavior. However, specifically in the case of proxies, all proxies cause this algorithm to throw currently. And the usual game we play with web compatible changes, is that we are able to change things from throwing to not throwing. So it is out of scope for this particular proposal, which is just taking over maintenance of the thing verbatim. Um, Chrome in V8 are certainly not against extending this proposal to have it work with proxies given that they do not work with proxies at all. So I want to clarify that we are not against any proposed changes, but we are not proposing changes and we want to not propose changes in the scope. Taking over maintain ownership. Then just do it later. Okay? -MM: Given that I'm going to, I think I can reply in a sufficiently general way. As to cover the things from that thread. not remembering. Also, I'll stipulate there. All of my objections are in the thread. I review the algorithm carefully at the time and any And so I'm content to to stipulate that that only the objections that I raised In the thread are ones that I'm expecting to come back to. Your position, the position you just stated on proxies. Let me see if I can State a general form of that if there are adjustments to the structured clone algorithm that the browser vendors agree is sufficiently web compatible - it doesn't have to be web compatible in theory, we've often broken things that are web compatible broken, Web compatibility in theory, but not in practice. If they agree that it's web compatible enough that they're willing to consider those changes. 
Then a structured clone that that is modified to deal with the show-stopping algorithms might at some point, be on the table, and certainly if it meets the objections that I consider show-stopping, whatever those are then I would be very happy to see it enter 262. +MM: Given that I'm going to, I think I can reply in a sufficiently general way. As to cover the things from that thread. not remembering. Also, I'll stipulate there. All of my objections are in the thread. I review the algorithm carefully at the time and any And so I'm content to to stipulate that that only the objections that I raised In the thread are ones that I'm expecting to come back to. Your position, the position you just stated on proxies. Let me see if I can State a general form of that if there are adjustments to the structured clone algorithm that the browser vendors agree is sufficiently web compatible - it doesn't have to be web compatible in theory, we've often broken things that are web compatible broken, Web compatibility in theory, but not in practice. If they agree that it's web compatible enough that they're willing to consider those changes. Then a structured clone that that is modified to deal with the show-stopping algorithms might at some point, be on the table, and certainly if it meets the objections that I consider show-stopping, whatever those are then I would be very happy to see it enter 262. SYG: Okay, I'll clarify with a quick response. All I'm saying is that agree that Chrome is open to extension here and at the same time also saying that Chrome and specifically myself will not be taking the initiative to do the extension, we won't be objecting to the extension. if it's web compatible as you say, @@ -610,7 +610,7 @@ MM: Great. JHD: Without having anything concrete in mind, I had a hypothetical question. SYG. You were very careful I think to use the word "extensions". What if there are changes that are web compatible, are not extensions, that drop functionality or alter existing functionality that already works. Is that something that potentially Chrome is open to, or is that something that you think wouldn't be able to ship? -SYG: the year re? Yes, but it definitely has a higher bar because that runs the risk of convincing a browser vendor Chrome or Firefox or whoever wants to take the initiative to ship and see, right. It's it be we can do corpa analysis as we have in the past. Need to try to do static analysis on web archive or something like that. But at the end of the day, non extensions, changing things that don't throw into other behavior that also don't throw, or throwing I guess, it's just harder to figure out if they're truly compatible. +SYG: the year re? Yes, but it definitely has a higher bar because that runs the risk of convincing a browser vendor Chrome or Firefox or whoever wants to take the initiative to ship and see, right. It's it be we can do corpa analysis as we have in the past. Need to try to do static analysis on web archive or something like that. But at the end of the day, non extensions, changing things that don't throw into other behavior that also don't throw, or throwing I guess, it's just harder to figure out if they're truly compatible. JHD: So no philosophical objection, just the standard technical barriers to making potentially incompatible changes. @@ -644,7 +644,7 @@ MM: Right, right, right. Okay, but there's not a delicate, these two ways of say KG: Yes. 
It would have the same semantics, just the test for whether to use the semantics would mention proxy explicitly. -MM: Now with regard to, let's take Map specifically. We do have an extension point that we reach for one occasion. On occasions like this, which is the committee can create new well-known symbols and define behaviors in terms of the well known symbols that have no observable difference for any code that doesn't use those. Those new well known symbols, so, so for example, a symbol that says, I want to be treated as map like and then if that symbol exists and structured clone tests for it, then it proceeds to interact with the alleged map behaviourally rather than reaching into internal slots and then a proxy that' membrane aware, that it needs to decorate. the the proxy with the symbol in order to be transparent through, membranes could go to go ahead and do that. I'm not proposing that specifically want to raise that as an example of how we might extend the sweet spot without having enumerate, lots of special paste, proxy behaviors, but still have more objects be practically transparent across membranes. +MM: Now with regard to, let's take Map specifically. We do have an extension point that we reach for one occasion. On occasions like this, which is the committee can create new well-known symbols and define behaviors in terms of the well known symbols that have no observable difference for any code that doesn't use those. Those new well known symbols, so, so for example, a symbol that says, I want to be treated as map like and then if that symbol exists and structured clone tests for it, then it proceeds to interact with the alleged map behaviourally rather than reaching into internal slots and then a proxy that' membrane aware, that it needs to decorate. the the proxy with the symbol in order to be transparent through, membranes could go to go ahead and do that. I'm not proposing that specifically want to raise that as an example of how we might extend the sweet spot without having enumerate, lots of special paste, proxy behaviors, but still have more objects be practically transparent across membranes. KG: I think that would be a much larger change than the one I was discussing to just treat proxies like plain objects, and I think much more contentious, but you are welcome to pursue it. @@ -656,7 +656,7 @@ SYG: I think Mark that there is agreement that that you do, find the maintenance MM: I am in favor. -SYG: Okay, hearing no other concerns. I'll take that as consensus. And as part of that work, I am partial to the proxy solution that KG outlined. I would be pretty skeptical - and right now would be against kind of adding new possible user code calls in structure clone. It sounded like with the well-known symbol idea it's possible that it results in a bunch of arbitrary, user code, being called when you try to structured clone stuff. That doesn't seem great to me, but we can certainly discuss. +SYG: Okay, hearing no other concerns. I'll take that as consensus. And as part of that work, I am partial to the proxy solution that KG outlined. I would be pretty skeptical - and right now would be against kind of adding new possible user code calls in structure clone. It sounded like with the well-known symbol idea it's possible that it results in a bunch of arbitrary, user code, being called when you try to structured clone stuff. That doesn't seem great to me, but we can certainly discuss. 
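A minimal sketch of the well-known-symbol idea MM describes above, under stated assumptions: the symbol name, the `ProxiedMapFacade` class, and the `cloneMapLike` helper are all hypothetical, and today's structured clone has no such hook (it reads internal slots and throws on proxies). The point of the sketch is only to show where the "arbitrary user code runs during structured clone" cost that SYG raises would come from.

```js
// Hypothetical illustration only: no such well-known symbol or hook exists today.
// The idea: an object (or a membrane-aware proxy) exposes a symbol to opt in to
// being treated "behaviourally" as a Map, instead of the algorithm reaching into
// [[MapData]] internal slots.
const mapLike = Symbol('mapLike (hypothetical well-known symbol)');

class ProxiedMapFacade {
  constructor(entries) { this.backing = new Map(entries); }
  // Opting in: a structured-clone-style algorithm would test for this symbol...
  get [mapLike]() { return true; }
  // ...and then read entries through ordinary, observable method calls.
  entries() { return this.backing.entries(); }
}

// Sketch of the cloning step under that protocol. Note the calls to
// target[mapLike] and target.entries(): every one of them runs user code
// during the clone, which is the cost being weighed in this discussion.
function cloneMapLike(target) {
  if (!target[mapLike]) throw new TypeError('not map-like');
  return new Map(target.entries()); // each iteration step is user-observable
}

const clone = cloneMapLike(new ProxiedMapFacade([['a', 1]]));
console.log(clone.get('a')); // 1
```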
MM: Yeah, the proxy solution also causes user code to happen during structure clone @@ -686,7 +686,7 @@ SYG: Thank you. CP: I can review it as well. I also have a clarification question. It was my assumption that the biggest challenge for implementers was hitting user land code in the structured clone code path. Can you confirm that this is not really an issue anymore? -SYG: I unfortunately cannot confirm that is going to be a non-problem going forward. But I mean, it's engineering, I suppose. It'll have to be looked at, right? But I don't think It's like so architecturally, impossible or something. it's kind of undesirable, but that's a separate thing. +SYG: I unfortunately cannot confirm that is going to be a non-problem going forward. But I mean, it's engineering, I suppose. It'll have to be looked at, right? But I don't think It's like so architecturally, impossible or something. it's kind of undesirable, but that's a separate thing. Okay, so that's the end of the queue. Do you want to proceed? diff --git a/meetings/2022-01/jan-25.md b/meetings/2022-01/jan-25.md index d339ec98..40e0839e 100644 --- a/meetings/2022-01/jan-25.md +++ b/meetings/2022-01/jan-25.md @@ -33,11 +33,11 @@ MAH: So let's look at what's currently happening with a synchronous iterator and MAH: None of these cases had a, the async wrapper come into play. If we change and instead of using an async iterator. We actually use a sync iterator with an async iteration, that is where the surprise happens. It's very hard to see the difference. The difference here is, there is a `async` keyword. That's the difference. And in that case. The wrapper gets closed. However, actually, the wrapper does not get closed because it gets yields. a value ends up being rejecting. -MAH: So here, let me explain what actually is going on. If an iterator throws, or yields a rejected promise for an async iterator, the consumer assumes that the iterator had the chance to clean up before and it will not try to explicitly close it. However, the async iterator that was wrapped for it. It yielded a promise. That is an entirely value for a simple generator to return. So it doesn't assume it caused an error. So there is an impedance mismatch here between the async iteration and the sync iterator, and it's an impedance mismatch that the async wrapper is not fixing. So what the PR does is to close the sync iterator that it wraps, whenever it yields a rejected promise and the sync iterator believes it was not done already. So that means if you call next and the - and the sync iterator yields a rejected promise, before rejecting the next call before it will incur a rejection. Next call. It will go and close the sync iterator. +MAH: So here, let me explain what actually is going on. If an iterator throws, or yields a rejected promise for an async iterator, the consumer assumes that the iterator had the chance to clean up before and it will not try to explicitly close it. However, the async iterator that was wrapped for it. It yielded a promise. That is an entirely value for a simple generator to return. So it doesn't assume it caused an error. So there is an impedance mismatch here between the async iteration and the sync iterator, and it's an impedance mismatch that the async wrapper is not fixing. So what the PR does is to close the sync iterator that it wraps, whenever it yields a rejected promise and the sync iterator believes it was not done already. 
So that means if you call next and the - and the sync iterator yields a rejected promise, before rejecting the next call before it will incur a rejection. Next call. It will go and close the sync iterator. MAH: so this is the first issue that I noticed, and while digging into it I actually ended up realizing there was a second issue, which is little bit more complicated. But first, I'd like to know if there is any questions on this first part and I can show. Also the diff of the pr. -MAH: Here is a difference. What happens is for next, we end up ? the iterator record and saying like you should close if it's it's a rejection. Same thing for throw it because it is possible for throw to actually say - so when you call throw, it's possible for throw to catch it and return a value saying I'm not done yet. So actually that is returning ?. in same thing for that was returned and same thing for throw. And so I say like in the wrapper continuation, we plumb this through and we have the main cases. The main case that we check - basically, if we're not done, we add a reject Handler to the promise and the reject Handler will close the iterator before returning the original error thrown. Which is the behavior that yield* has. So is there any question on the behavior of this change? +MAH: Here is a difference. What happens is for next, we end up ? the iterator record and saying like you should close if it's it's a rejection. Same thing for throw it because it is possible for throw to actually say - so when you call throw, it's possible for throw to catch it and return a value saying I'm not done yet. So actually that is returning ?. in same thing for that was returned and same thing for throw. And so I say like in the wrapper continuation, we plumb this through and we have the main cases. The main case that we check - basically, if we're not done, we add a reject Handler to the promise and the reject Handler will close the iterator before returning the original error thrown. Which is the behavior that yield* has. So is there any question on the behavior of this change? JHD: So I'm not sure, but does this affect `Array.fromAsync`? Does it have to make sure to do the same behavior? @@ -65,9 +65,9 @@ MAH: Now what happens when you remove the throw from the iterator that yield *wr MAH: Let's see what happens when the iterator that is forwarded, is synchronous, instead of asynchronous. So, same as in for-await. Now, all the sudden what we have, we have, we, we never have the synchronous generator getting closed. and instead of a type error, we actually have the error that was thrown bubbling back out. So what happens, it goes all the way into the yield-star, the yield star forwards to the async iterator wrapper, the sync iterator wrapper throw implementation. The only thing it does is re throws the error and Doesn't touch the rap, thinkety thinkety, thinkety reader. This is again surprising. -MAH: And so What I believe we should do here. No matter what we should always close. The. I think wrapper should close the sync iterator wrapper if it happened in those cases because we're trying to call throw again on something that internally doesn't have a throw. I also believe it is a contractual error like the same the same as a yield * would behave. And we should consider changing - instead of re-throwing the error, we should consider changing to a type error. 
So that is a little bit - I wasn't exactly sure, so currently the PR doesn't do that, but I would love to have the opinion of the committee on that change as well. And I can show the change in the PR. So, it is basically. +MAH: And so What I believe we should do here. No matter what we should always close. The. I think wrapper should close the sync iterator wrapper if it happened in those cases because we're trying to call throw again on something that internally doesn't have a throw. I also believe it is a contractual error like the same the same as a yield * would behave. And we should consider changing - instead of re-throwing the error, we should consider changing to a type error. So that is a little bit - I wasn't exactly sure, so currently the PR doesn't do that, but I would love to have the opinion of the committee on that change as well. And I can show the change in the PR. So, it is basically. -MAH: so on the throw implementation, instead of basically rejecting the promise with the thrown value I go in and close the iterator and then re-throw the value. However, I believe instead of getting the value. We should actually throw a type error the same way yield star does. Instead of this. +MAH: so on the throw implementation, instead of basically rejecting the promise with the thrown value I go in and close the iterator and then re-throw the value. However, I believe instead of getting the value. We should actually throw a type error the same way yield star does. Instead of this. MAH: Any questions and what is the opinion regarding what should be thrown here? @@ -221,7 +221,7 @@ RRD: We can continue this discussion and table that decision whether it's it's r LEO: Yeah, just want to emphasize that we just want to make this work around to address concerns. For the champions group, none of this will block or improve our use cases the application for what we want with this proposal. So I think if the concerning Point refers only to register symbols. That's also fine. I am pretty flexible here for because what I want is being able advance this proposal. So I think we should also like understand if this concern is like, we should, we should not worry about well known symbols, but stick only to register symbols. I'm fine with. This was just watching as this is a champion, a coach, and owner of this. I think we can move. Ahead, without you well known symbols in restrict. This contention only to register singles. -MM: And yeah, the problem is, we need something that has consensus. Yes. And for me, it's very clear that the important distinction is eternal vs. non-eternal. The important distinction is not registered vs non-registered. +MM: And yeah, the problem is, we need something that has consensus. Yes. And for me, it's very clear that the important distinction is eternal vs. non-eternal. The important distinction is not registered vs non-registered. LEO: Yeah. that was the understanding like, from the Champions group. That's why we are also adding well known symbols to List, we were never like, in my perspective. I was never seen as being like the number of symbols in stick to that, but just the fact of liveness is compromised and not by the numbers. but the liveness of like one symbol can be compromised but that's it. Like that's my that was my understanding that were how we are trying to address. This may be the co-champions, my have a little different perspective, but I'm sharing my I just want to make sure like tell everyone, We are flexible. We want to move this ahead. The concern. 
we are trying to is Is the actual use cases in, which this restriction is not like an improvement or blocking for any of our applications? @@ -305,13 +305,9 @@ RRD: could we request a continuation tomorrow? If that's possible? BT: It should be possible. We'll get back to you at the time. Okay? -RRD: Yeah, let's then if we okay. Let's do a quick temperature check just to have an idea in general for everyone and we do know that there is potential work if we had restrictions but the options for that temperature, temperature check would be -the heart would be no restrictions -Plus would be No registered symbols, Symbols, +RRD: Yeah, let's then if we okay. Let's do a quick temperature check just to have an idea in general for everyone and we do know that there is potential work if we had restrictions but the options for that temperature, temperature check would be the heart would be no restrictions Plus would be No registered symbols, Symbols, the eyes be no registered symbols or well known symbols, -and the unconvinced one. The last one is will block regardless. even if we do have or not have restrictions in place -Actually probably we don't need to minute the details of this -add a legend somewhere in the chatter. Yeah, let me try to do it in this queue entries. I think that's the most accessible sometimes. Okay, Trent reason I'll do that and do that. Just Okay. Do you want to have pizza just while Robin puts the things in the key? Yeah. Yeah, we can that. +and the unconvinced one. The last one is will block regardless. even if we do have or not have restrictions in place Actually probably we don't need to minute the details of this add a legend somewhere in the chatter. Yeah, let me try to do it in this queue entries. I think that's the most accessible sometimes. Okay, Trent reason I'll do that and do that. Just Okay. Do you want to have pizza just while Robin puts the things in the key? Yeah. Yeah, we can that. PHE: Thank you. So my I'm trying to understand how this will be used. And I, main I talked about implementation complexity for excess to garbage, collect symbols, would be a big deal. It would be a lot code, a lot of work and it would increase memory use. So it's not super exciting from the comments. I understand that Mozilla and Google's engines, don't garbage collect symbols at this time either. So, I'm sort of taking from that. I'm not sure what to take from that either that this feature that we're contemplating adding of symbols as weak map Keys is useful and valuable even on engines that don't collect keys, or adding this feature, which seems possible will increase the use of symbols in a way that might necessitate implementing garbage collection of symbols. And I'd like to understand that better before we get to stage 3 because the the relative effort to implement is considerable. @@ -364,7 +360,7 @@ SYG: I don't quite understand how this solves the original issue, is we behave d LCA: Sure. Yeah. Yeah, that's obviously concern and and there's unfortunately nothing we can do about that. But, having this alternative behavior if people search for split in their IDE for example can type in split, and they'll get this other method and that may prompt them to think about what behavior they actually want. There's two split methods that might initially confuse them and think oh why is there two split methods? And then they'll look at the documentation and see that maybe the they wanted is actually the splitn behavior. 
So yeah, this is obviously a concern, but I don't think we should block improvements just because we cannot change the original behavior. Because the original behavior exists, we should add a new, maybe better, maybe more user-friendly behavior. And I'm not specifically saying this behavior is better; I'm just saying this behavior is more familiar to many people, I think. I do think this is a valid problem. -JHX: Yes, we discuss this proposal in JS IGB meeting last week, and it seems that many participants even if they are experienced JavaScript programmer for many years, they don't know split has an optional argument. So adding a new method, which the name indicates that it can with splitn(). I think it may solve the problem. +JHX: Yes, we discussed this proposal in the JSCIG meeting last week, and it seems that many participants, even if they have been experienced JavaScript programmers for many years, don't know that split has an optional argument. So adding a new method whose name indicates what it can do, like splitn(), may solve the problem. LCA: Yeah, I believe Rust is the same thing: you have a split method and a splitn method - don't quote me on that, but I think I remember that's how it works - where one is bounded and the other one is unbounded. @@ -376,25 +372,25 @@ RW: Is that the thing that JHX was referring to? LCA: Yeah. -MM: I just wonder if WH remembers why we did it the way we did. And was there any reason to do it that way rather than what's being proposed? +MM: I just wonder if WH remembers why we did it the way we did. And was there any reason to do it that way rather than what's being proposed? WH: No, I did not work on `split`. JHX: In a previous JSCIG meeting, someone also asked why split behaves like that, and someone checked some old browsers; it seems this was added in ES3, and it seems some version of Netscape added it and it did not exist in JS1 or JS2. -MM: Okay. Thank you. +MM: Okay. Thank you. JRL: Okay, starting with the primary thing that I see split used for: people don't actually care about everything in split. They care about a particular index. So they're making the split so they can get the array and immediately access a single index of the array. They want the first thing up to the equal sign, or they want the second thing after the equal sign, and they care about nothing else. I'm curious if the need that we see for splitn is because they want a similar behavior for a split[index] and they can't get it with split, or because it's difficult to use split correctly since it's different from the other languages. And if instead we tackled this as getting a splitAt() or some other named method that extracted a particular index of the split without actually creating the array as an intermediate, would that solve the main use case that people want? LCA: Yeah, I think that would solve a use case, or at least there's a use case it could solve. For example, you just want to get the key for a key-value pair that's separated by an equal sign. But let's say you want to get the actual value here and not the key. It would not solve that use case, because for that use case the current behavior is insufficient: you can't just get the second item (so index one of the returned array) because the value may contain another equal sign. That's this prefix split issue. That would need the splitn for a reversible split.
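(For readers following along, a minimal sketch of the difference under discussion. `splitn` is only the proposed name and its exact API is not settled, so the second form is shown here as a hand-rolled helper over a string separator.)

```js
// What exists today: split's little-known second argument truncates the
// result and silently discards the remainder.
'key=value=with=equals'.split('=', 2); // ['key', 'value'] -- the rest is lost

// The behavior being discussed: split at most (n - 1) times and keep the
// remainder intact in the last element, so the operation is reversible.
function splitN(str, sep, n) {
  const parts = str.split(sep);
  if (parts.length <= n) return parts;
  return [...parts.slice(0, n - 1), parts.slice(n - 1).join(sep)];
}

splitN('key=value=with=equals', '=', 2); // ['key', 'value=with=equals']
```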
But yeah, it for specifically the scenario where you just want to put once and get everything before a given separator one. Could imagine that an additional method is added? I don't know prefixUntil or something. Something of that nature that returns some string, value up to a separator. -JRL: I hadn't considered the case where the value had an equal sign in it as well. So that's a good point. Okay, great. +JRL: I hadn't considered the case where the value had an equal sign in it as well. So that's a good point. Okay, great. JHX: Yeah, I want to show my strong support to this proposal because I have been caught by this several times. My use cases. I want to split several. That's a part of the string and I want the remainder, and deposits in some other. Legs, But it's the current split is just useless as the current behavior just you spitted. It's all and you slice, it. It's behave like, like, that but I don't think it has any usefulness. So I strongly support this proposal and especially the split end of a separate massive because it's also, so solved visibility are often. Like I said before that, many people never know you have the second optional parameter. Personally I slightly prefer the Rust version where n is the first argument. It might decrease some confusion, with the original function. So a generous support that. Yeah, Yeah, that's it. MAH (via queue): +1 to the proposal -CM: I kind of like this except I hear echoes of `substr` versus `substring`, where we ended up with two different functions with almost the same name and subtly different semantics and they sit there in documentation introducing, as I think SYG pointed out, a discoverability issue. It leaves us with this situation where you have this little bundle of confusion that gets introduced to people who are learning the language, and it adds to the overall complexity cost that those people have to suffer. I'm genuinely on the fence whether the modest improvement pays for this cost. The newer function, in both cases, has what I think are clearly superior ergonomics, but I'm not sure that pays back the additional cost in confusion to people who are learning the language. And when I say “I'm not sure” I genuinely mean that as I'm not sure, and not that I oppose this, but I think we should be factoring that cost into our deliberations. +CM: I kind of like this except I hear echoes of `substr` versus `substring`, where we ended up with two different functions with almost the same name and subtly different semantics and they sit there in documentation introducing, as I think SYG pointed out, a discoverability issue. It leaves us with this situation where you have this little bundle of confusion that gets introduced to people who are learning the language, and it adds to the overall complexity cost that those people have to suffer. I'm genuinely on the fence whether the modest improvement pays for this cost. The newer function, in both cases, has what I think are clearly superior ergonomics, but I'm not sure that pays back the additional cost in confusion to people who are learning the language. And when I say “I'm not sure” I genuinely mean that as I'm not sure, and not that I oppose this, but I think we should be factoring that cost into our deliberations. PFC: (via queue) “Agree strongly with CM's point about language learners” @@ -406,7 +402,7 @@ MM: while I agree with some of the discomfort, for stage 1 I think it clearly qu WH: I support this for stage 1. -LCA: And I don't see any other objections. So thanks. That's that. 
Yeah, thank you for that. +LCA: And I don't see any other objections. So thanks. That's that. Yeah, thank you for that. USA: Rob. Mentions plus one for stage 1, as well as lie on the chat. So thank you very much. @@ -425,7 +421,7 @@ JHX: Okay, Okay, let's start. Hello everyone. everyone. I'm Hax. Yeah, now 2022. JHX: First, a recap of the proposal. This proposal propose a new syntax, class.hasInstance check whether the value O is an instance constructed by the class in the nearest lexical context. the concept and the use case of this is very close to run time type checking of objects in other programming language. Typical uses is as follows in. For example. This is a very simple example, it's a Range, so it would use an immutable pattern, so you can, if we ignore the other part, we write an equals() method that we check whether our the that is also our range. If not, we return false. or we just compare the fields and give the result. -JHX: The motivation when entering State 1. It's General 2021. I have already elaborated on the existing ways of checking instance, including `instanceof` which exists since the early days of JavaScript, which is based on the prototype chain. So, even if the result is true there's no guarantee that is really constructed by your constructed in the past. Also, you can use WeakMap WeakSet manually and adding Brands to every instance. This is possible and has maximum flexibility, but require some boilerplate code. However, many JavaScript developers are not familiar with WeakSet API and them more importantly, most JS developers do not understand the Concept of brand. So this pattern is only used by a few Advanced developers in the world. In addition it was mentioned some data starts weeks at implementations in Kirk engines, may have GC effects and performance different from properties of fuse improve objects, which may also cause some developers to abandon this pattern. so, with the addition of the private fields feature, feature, developers can check instanceof in an interactive Way by accessing private fields leave age of somatic, effect of the objects, which you do not have class specific private fields, which trigger TypeError. However, this model needs the bolierplate code based on, try catch. And by its very nature, an abuse of private fields. as problem from problematic, for both eligibility, eligibility and it's maintainability. I don't think meeting last week and asked for your January private-in-in proposal, which is extension of the private fields, masses and others. And to stage three hand with these proposal went to stage 4 last year. This proposal makes it easy for developers to check if private atoms exist on an object to. So you can also use it checking instanceof. although the proposal where a limit to the try cash, but other problems with using private fields to check whether and instance is constructed by the constructor do exist +JHX: The motivation when entering State 1. It's General 2021. I have already elaborated on the existing ways of checking instance, including `instanceof` which exists since the early days of JavaScript, which is based on the prototype chain. So, even if the result is true there's no guarantee that is really constructed by your constructed in the past. Also, you can use WeakMap WeakSet manually and adding Brands to every instance. This is possible and has maximum flexibility, but require some boilerplate code. 
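(As an illustration of the existing pattern being described here, a sketch of a WeakSet brand check for the Range example; the proposed `class.hasInstance` form appears only as a comment, since its exact syntax is what this presentation is about and may change.)

```js
// The WeakSet "brand" pattern available today: every instance is recorded at
// construction time, and equals() checks the brand instead of the prototype
// chain (which is what instanceof checks, and which can be forged).
const rangeBrand = new WeakSet();

class Range {
  constructor(start, end) {
    rangeBrand.add(this);
    this.start = start;
    this.end = end;
  }
  equals(other) {
    if (!rangeBrand.has(other)) return false;
    return this.start === other.start && this.end === other.end;
  }
}

// Roughly the same check with the proposed syntax (illustrative only):
//   equals(other) {
//     if (!class.hasInstance(other)) return false;
//     return this.start === other.start && this.end === other.end;
//   }
```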
However, many JavaScript developers are not familiar with WeakSet API and them more importantly, most JS developers do not understand the Concept of brand. So this pattern is only used by a few Advanced developers in the world. In addition it was mentioned some data starts weeks at implementations in Kirk engines, may have GC effects and performance different from properties of fuse improve objects, which may also cause some developers to abandon this pattern. so, with the addition of the private fields feature, feature, developers can check instanceof in an interactive Way by accessing private fields leave age of somatic, effect of the objects, which you do not have class specific private fields, which trigger TypeError. However, this model needs the bolierplate code based on, try catch. And by its very nature, an abuse of private fields. as problem from problematic, for both eligibility, eligibility and it's maintainability. I don't think meeting last week and asked for your January private-in-in proposal, which is extension of the private fields, masses and others. And to stage three hand with these proposal went to stage 4 last year. This proposal makes it easy for developers to check if private atoms exist on an object to. So you can also use it checking instanceof. although the proposal where a limit to the try cash, but other problems with using private fields to check whether and instance is constructed by the constructor do exist JHX: So here are some simple comparisons between them first in the mental models class hasInstance, is literally means checking whether object is an instance of class and privately in. Oh, that word means checking whether object O has a private name. Although the facts were similar there, Of course, significant difference in mental models the formal reflects the general concept of OOP, which is common all or programming language for application. Developers are accustomed to this mental model, the existence checking of some private name. does not match and express the high-order intentions of the developers. Well, the property in reflect the brand directly with the cola private name, but it should be clicked collateral would clarify that, although, the, this proposal called class-brand-check, it's more for the purpose of naming, the proposal based on the terms that JS will not has established. Well, the reality is that brand new track, and not a concept that JavaScript developer and the even the Larger community of programmers, has a common perception and understanding to the best of my knowledge is concept of brand. Is rarely used by other mainstream program language coding communities. So although it seems possible to express the concept level brand directly with the code level private name, It does not Necessarily match the programmer's mental model. @@ -437,7 +433,7 @@ JHX: The second issue is about so whether the class has instance to the function JHX: The third one is The behavior of eval. Obviously. first one should always work and the second one should throw syntax error, if there is no class outside, but what about this? Because the class only have the brand if the we can static check it how contains class.hasInstance call. So, it's a question of what it returns. We would prefer it to throw a SyntaxError. Oh, but there are another case that it have the static class hasInstance. But also will, this is really a trouble one. 
Currently, the champions like to also throw type, sort of syntax error here to Simply simplify the case or the developer may be confused that sometimes it's so simple syntax error sometimes and not. -JHX: and the next is the syntax problem. The class has instance access to have overlaps with RBN's Class Property, access expression, proposal. And Ron said that he will considering visiting that proposed syntax to allow this proposal to move forward and reserved class-dot-syntax for other possible matter properties of methods. and the spec taxed, which stage 2 require: it's hard to read spec text and even harder to write spec text, Especially this proposal. There was such important mechanism, has objects and classes. This is the first time that close Kon and I have written such a complicated spec. Again, we said RBN and because we learned and copy the manager of the spec text of, from the Class Property access expression, proposal to our earlier version this purpose, in spite of our best efforts. We are sure that if there will be some mistakes. I hope you help us to correct them. Here is the spec text. we add other slots, called class Brands, which is a list of credit Casper and and we also add slot slot After the function. The function will have a class brand and When when Constructor it kills, it will check if the path brand is not empty. It will add The brand to the class Brands list of the object. Yeah, there are also here is the syntax and The runtime see is first to check if the instance is an object and then it's use the get class environment to gas the branded class. And if, the place Brands include that brand never return. +JHX: and the next is the syntax problem. The class has instance access to have overlaps with RBN's Class Property, access expression, proposal. And Ron said that he will considering visiting that proposed syntax to allow this proposal to move forward and reserved class-dot-syntax for other possible matter properties of methods. and the spec taxed, which stage 2 require: it's hard to read spec text and even harder to write spec text, Especially this proposal. There was such important mechanism, has objects and classes. This is the first time that close Kon and I have written such a complicated spec. Again, we said RBN and because we learned and copy the manager of the spec text of, from the Class Property access expression, proposal to our earlier version this purpose, in spite of our best efforts. We are sure that if there will be some mistakes. I hope you help us to correct them. Here is the spec text. we add other slots, called class Brands, which is a list of credit Casper and and we also add slot slot After the function. The function will have a class brand and When when Constructor it kills, it will check if the path brand is not empty. It will add The brand to the class Brands list of the object. Yeah, there are also here is the syntax and The runtime see is first to check if the instance is an object and then it's use the get class environment to gas the branded class. And if, the place Brands include that brand never return. JHX: So, so summary, there are summary here that we add a class has instance call and add a class Brands slot and the cost for an assault on the function. And special thing we add causing environmental record because currently it's just a single normal declarative environment record, and now we add a class environment called, called, which how class Constructor field which you can use it, too. Class. 
so, if we can advance to stage 2, we will continue to refine the spec text and we will use the transpires to exploring old cronikeys and we hope to get the user feedback from the, for example, the bubble implementation. Okay. @@ -469,7 +465,7 @@ YSV: I'm rephrasing, All happy path Versions of this can be expressed by ergonom JHX: Oh, you mean in the common cases, yes, I think we in the common cases there might be not have no much difference, -YSV: Great, So what I would really be interested in seeing is interest in this feature that we've already shipped because there's such a similarity between the two of them. I would just be interested in understanding what uptake we have and how that's doing in the wild so far. +YSV: Great, So what I would really be interested in seeing is interest in this feature that we've already shipped because there's such a similarity between the two of them. I would just be interested in understanding what uptake we have and how that's doing in the wild so far. JHX: Okay. I think, yeah, I think maybe. Well, we’ll have the Babel implementation land soon. it may be better to collect some feedback from the users. @@ -511,7 +507,7 @@ JHX: I think WH was asking about computed properties. Am I right? WH: I posted this on chat. The issue I'm having with this is that the Contains operator in the existing spec is a bit weird. If you use Contains in an expression which includes a class, Contains will peek into the computed property names inside the class. If you use Contains on the class itself, it will also peek into the computed property names in the class. This proposal is trying to use Contains to manage a class scope, which means that things inside computed property name expressions are considered to be both in the class for the purposes of Contains, and not in the class for purposes of Contains. This is weird, and I suspect this will be a bug farm. As discussed on the chat, I would prefer the approach of using something other than Contains to look for things which are class scoped. -JHX: And just as this part is very hard, I'm not sure I figure out the correct way to spec that. But what I can tell you is with the champions we had some chat about the computed property, we tend to believe it's better to Throw a syntaxerror. but what I haven't figured out is how to spec it. So as currently speced as I suppose it's allowed, but will always give the if there is there's way or its we're always give you the false results because in that time no one can get the class. So, so there's no instance of that class. So I ever check it. Just give me false, but it seems useless so we prefer to give a syntax error. We don't want it to behave like ‘this’. `this` in the computed property exists outside of the class, which is I think very surprising. +JHX: And just as this part is very hard, I'm not sure I figure out the correct way to spec that. But what I can tell you is with the champions we had some chat about the computed property, we tend to believe it's better to Throw a syntaxerror. but what I haven't figured out is how to spec it. So as currently speced as I suppose it's allowed, but will always give the if there is there's way or its we're always give you the false results because in that time no one can get the class. So, so there's no instance of that class. So I ever check it. Just give me false, but it seems useless so we prefer to give a syntax error. We don't want it to behave like ‘this’. 
`this` in the computed property exists outside of the class, which is I think very surprising. WH: We need to make up our minds about whether computed property names are part of the class scope or not part of the class scope because it will make a difference here. @@ -545,7 +541,7 @@ JHX: Yeah, it's hard for me to express that. I think the feedback side coming fr SYG: sorry. Let me ask a different question. Could you articulate, what is the mental model That you think programmers have that this would directly reflect? -JHX: so, it's more like the way you write code in other languages just very close to type checking. When you write code, you type check, it narrows the type and you can do the correct operation on that objects. +JHX: so, it's more like the way you write code in other languages just very close to type checking. When you write code, you type check, it narrows the type and you can do the correct operation on that objects. SYG: Right, but this doesn't let you do that because of the Prototype thing. It only lets you do that for Fields, Basically. which is a part of it. As I said, I'm not saying that's nothing but but it's not the whole model. @@ -620,7 +616,7 @@ JWK: and auto increments, as you can see, many languages can have how to compute JWK: The next one is, should we allow syntax collision with Typescript and Flow? so, if we try to avoid syntax collision, we have to choose some different Syntax, For example, we use the `enum Direction for symbol` so they have significantly different syntax. Any semantics that's not compatible with the current typescript and FlowJS enum, they need to have something like what they do to the class fields today. Now typescript has the `useDefineForClassFields` flag to switch to ES semantics. -JWK: And the final one is, should we allow default member types? if we can make it default to single, or we can allow it without the same strain as the key value, or We can make its default type to be number like any other languages, too. +JWK: And the final one is, should we allow default member types? if we can make it default to single, or we can allow it without the same strain as the key value, or We can make its default type to be number like any other languages, too. JWK: And the final one is, should We allow iterators on it and what should it yield? @@ -634,7 +630,7 @@ JHD: I was hoping to get more of an idea, “why do you need an enum?” It seem BT: I think like a lot of queue entries that are along that, so maybe we can just go to the queue and let folks ask the ask questions. - Is Jack, if we drain the queue before continuing with the slides. +Is Jack, if we drain the queue before continuing with the slides. JWK: Yes, okay. @@ -651,7 +647,7 @@ MM: I just don't find that this constant construct pays for itself, that the pro JWK: Okay, I think. It's a fair point because we already can have the same weight as similar ways to define enums today, but they are inconsistent. For example, DOM has enumerations like In XHR that have some readyStates defined. and in some other libraries like react enums are defined as all uppercase Variables in the file. I think we can benefit from a unified form. The more important motivation is to bring us the ADT enum . This kind of syntax is much more useful than the normal enum, but it's very hard to specify, so I have it as an add-on. So my current presentation is intended to gather the interest from the committee. 
And so I am so we can gather a group of people to gradually develop this proposal forward in the future, and I'm not intending to push it too fast. before we figured all the details out. -MM: With regard to the multiple competing patterns for expressing this currently in the language. If they're using the language as-is to express the concept. Work well enough And the problem is that there are multiple them than the that I think we should do is to pick one and bless it or promote it or something, But if you can express it currently in the language well enough then that that even more strongly says we shouldn't change the syntax of the language to express something, you can already express well enough in the language as is. And with regard to blessing one of those, I think that's a separate question. I don't have an opinion on that because I don't know what those are, but I'm skeptical on that as well. I think allowing one of those to rise to a de-facto standard by competition is the right way to resolve that in terms of the multiple ways to say. +MM: With regard to the multiple competing patterns for expressing this currently in the language. If they're using the language as-is to express the concept. Work well enough And the problem is that there are multiple them than the that I think we should do is to pick one and bless it or promote it or something, But if you can express it currently in the language well enough then that that even more strongly says we shouldn't change the syntax of the language to express something, you can already express well enough in the language as is. And with regard to blessing one of those, I think that's a separate question. I don't have an opinion on that because I don't know what those are, but I'm skeptical on that as well. I think allowing one of those to rise to a de-facto standard by competition is the right way to resolve that in terms of the multiple ways to say. JWK: As you can see in this picture, we can have something for Symbol or for BigInt, or maybe we can choose to bless maybe a symbol or number. I don't know. so, Yes, it's reasonable to be Skeptical, if the new syntax is worth it. I'm here only intending to gather committee interests on both plain normal enum and sum type. @@ -695,17 +691,17 @@ JWK: Yes. Hmm. I think the React example has what's I just mentioned about. They GKZ: Hi, my name is George. I'm on the Flow team at Meta, and I designed and implemented Flow Enums which we open sourced this past summer. So I can speak to our use cases. The majority of the value from implementing Flow Enums was for the value in the type system: because the enum declaration creates a new type, we supply exhaustiveness checking and so on. I should mention that Flow Enums are very different from TS enums. I can share a little comparison here. But if you ignore the types, there are a couple of things that users derive value from enums, just from the runtime. And we have on the order of magnitude around 10,000 Flow Enums in our code base right now. So this is something that users have adopted a lot. We provide several useful methods that don't come easily from the previous pattern that users would use, which is just `object.freeze` on object literal with literal values. So we provide a cast function that is a way to check if the value is a valid one, we also provide a `members` method which provides an iterator for the values of the enum. 
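(For context, a rough sketch of the `Object.freeze` status quo GKZ describes, with the kind of runtime conveniences mentioned hand-rolled on top; the helper names here are illustrative, not the Flow Enums API, and the eventual proposal syntax is undecided.)

```js
// The common pattern today: a frozen object literal of constant values.
const Status = Object.freeze({
  Active: 'Active',
  Paused: 'Paused',
  Off: 'Off',
});

// Runtime conveniences like "cast" and "members" have to be hand-rolled.
const statusValues = new Set(Object.values(Status));

function castStatus(value) {
  // Returns the value if it is a valid member, otherwise undefined.
  return statusValues.has(value) ? value : undefined;
}

function* statusMembers() {
  yield* Object.values(Status);
}
```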
But again, most of the value comes when you use the syntax with a type system, I can see there being a value of creating this for Typescript and Flow, if there's a standard way to do enums. I intentionally designed Flow Enums to be very restrictive in their definition, so if in the future we standardized an enums feature, it would hopefully be a superset of whatever Flow Enums is. I'd even be happy to make reasonable modifications to Flow Enums to fit whatever that standard is. Yeah, that's all. Happy to answer questions on different use cases that people use enums at the company. Since we have very many different use cases. -JWK: Thanks for the support and I want to reply to SYG. That's yes, there are constantly adding and I'm once won't automatically convert existing code, but for new codes, we can have other things use relative. +JWK: Thanks for the support and I want to reply to SYG. That's yes, there are constantly adding and I'm once won't automatically convert existing code, but for new codes, we can have other things use relative. PHE: Sure, so I just talking about uses of enum in the embedded Universe, we end up, writing a lot of code that deals with data sheets with that come with c-code with long lists of constants that are often expressed as enums. It's so common that at some point I wrote a quick tool to convert C enums to JavaScript and so I'm, you know, That works. It would be nice to have that in the language, but I, you know, I agree also with Shu's point that there are lots of solutions for enums out there today, and those are going to keep going and maybe our tool will too, but adjust to output enum syntax instead. So I'm interested to see this explored. RBN Had done some interesting work before. LEO: just taking the hook as you just say, I think we have many different works for in nam here and I would like for I see in am keeps bouncing back to TC39 being presented in. There is a lot of stage zero enum proposals. I think there are clearly some loose ends. So I like having a better description. Convincing use cases for what we want for Nam. Clear path ahead for the syntax and everything. I wish we could just explore this as a stage one to have like a one proposal that we can work on top. I think like the signal for stage one is that what we want to explore and understand better these motivations and like what is the path ahead. It won't reach stage two without answering those but it's at least like not yet another enum proposal. And I think this one captures like some part of like from RBN and RW. I think we can explore and work on that. -BT: Okay, so we're out of time. So I think at this point we can say yes, I think you. We can ask for support to investigate the space. I think it's fair to say that the sum type enum is a later stage concern.. So who I think we're just talking stage one to investigate the problem space of enums. Thanks. Are there any objections to that? +BT: Okay, so we're out of time. So I think at this point we can say yes, I think you. We can ask for support to investigate the space. I think it's fair to say that the sum type enum is a later stage concern.. So who I think we're just talking stage one to investigate the problem space of enums. Thanks. Are there any objections to that? WH: Yeah. I'm uneasy about this. This is too vague at this point. I don't see the problem space identified. 
I see a lot of different ideas here and pretty much the only thing they have in common is that they all use the keyword `enum`, but they're so different from each other. So this is too vague for stage 1. -JWK: so, As I said, I didn't have a design before I figured out the constraints. I just want to explore this idea before continuing. +JWK: so, As I said, I didn't have a design before I figured out the constraints. I just want to explore this idea before continuing. WH: I have nothing against including enums in the language, but this is not stage 1 at this point in time, this is stage 0. diff --git a/meetings/2022-01/jan-26.md b/meetings/2022-01/jan-26.md index b5f4328d..73425649 100644 --- a/meetings/2022-01/jan-26.md +++ b/meetings/2022-01/jan-26.md @@ -114,8 +114,7 @@ JHP: So, this is our new framework called ESMeta, which stands for a metalanguag SAN: Hi, I will briefly introduce an Alpha version of our interactive ES debugger, which extends the interpreter of JISET. If you want to try it by yourself, please follow the instructions in the slide. Okay, I’ll start a demo. -SAN: The main purpose of this ES debugger is to help understand how ES executes a JavaScript program. So, as like a normal debugger, it controls the execution of a JavaScript program and visualizes the state of execution with respect to -ES. +SAN: The main purpose of this ES debugger is to help understand how ES executes a JavaScript program. So, as like a normal debugger, it controls the execution of a JavaScript program and visualizes the state of execution with respect to ES. SAN: On the top, there is a toolbar to control a debugger. I’ll explain each button later. On the left, there is a simple text editor where we can write a JavaScript program, on the center, there is a specification viewer, which shows you a single abstract algorithm in ES. @@ -137,7 +136,7 @@ JHP: They're still not implemented. WH: I'm also looking at your formalism. I can't figure out how you would implement something like the shared array buffer memory model in this. -JHP: It is a reasonable extension but we currently do not consider expressing that memory model in our formalization. +JHP: It is a reasonable extension but we currently do not consider expressing that memory model in our formalization. WH: Yeah. I'm really impressed with what you've done! You've kind of done the reverse of what I did on the committee back 18-20 years ago. The existing spec was partly machine-generated and verified at that time. I can give you a link to a bit of that: @@ -191,7 +190,7 @@ YSV: I'm just +1ing what Mark said. It would be very good to have a way to explo BN: First, this is truly amazing work. My mind is blown. I tried to get screenshots, just to remember the slides and then was just taking screenshots of every slide. So I stopped. Clearly you enjoy a challenge, shall we say, but I wonder if you have any general recommendations about, you know, ways the specification could be restructured or formatted differently to be even more machine-readable so that your tools don't need to handle as many special cases? -BN: And then the next question is also mine, which is just about whether you're comfortable with these slides being shared among co-workers or not - happy with any guidance there. I'm just really excited and I want to show people. +BN: And then the next question is also mine, which is just about whether you're comfortable with these slides being shared among co-workers or not - happy with any guidance there. 
I'm just really excited and I want to show people. SRU: We are more than happy to share our slides so that you can just do whatever you want to do with them. About the specification recommendations, now that we are re-implementing our tool, we are trying to use as few rules as possible and we are planning to add pull requests about irregular grammars and things like that. As you know, in ES, many chapters are written very nicely, but some chapters like regular expressions were written very differently and that they required special rules for our parsers. So previously when we worked on JISET in the beginning, we had to write them manually in various cases, but now that we are in a slightly better position, we'd like to ask the spec writers to revise those sentences to be more like the common style. @@ -258,13 +257,13 @@ JSC: Okay, great. Hi, everyone. My name is Joshua Choi. I am a delegate from Ind JSC: First is the pipe operator, The pipe operator has a long and storied history. I wrote a history document on the history.md. On the repository if you want to get where we got, but right now, it is an operator that you use the bar symbol than greater than, the left-hand side gets bound lexically to a certain to a certain special symbol the You can think of it, like, a know nullary operator that placeholder. We call it the topic records, that that, so the left hand side, gets evaluated becomes the topic and gets bound to the topic reference. Which in this case. It's the pound sign. We're bike shedding about that. It doesn't really matter in this discussion, so it could be a function call. It could be an array literal. It could be an array literal, in a function call, you could have properties. It could be whatever expression on the right hand side. So it's very versatile. And the reason why this may be good is because it allows us to to rearrange the word order of deeply nested Expressions. you know, in everyone's code there exists some just about all code bases, have like deeply really deeply nested, expressions with deeply nested parentheses, so it can be really hard to follow the the order operations with them.. Especially with prefix/ infix, suffix operations. so and like, including function calls with multiple arguments, so people could try by naming variables for each intermediate step, but people don't do that and And the and one big reason why they probably don't do that because naming is hard. So people just do Nested, it really deeply nested Expressions, which is fine, but it's also more. It's also maybe Harder to read, especially when word order is mixed up and also over relying on a variable names, especially, when names might end up being just the name of the operation you're doing. Anyway, might make things even harder to read some people might argue. That's why we're where the pipe. Operator may be good. That's why I'm a co-champion of the pipe operator. operator. The pipe operator is very versatile. Just about every other proposal. Does something that the pipe operator can do , all be it they might do it more concisely. While the pipe operator might be slightly more verbose because you have the topic reference, but it's still less verbose than trying to think of a variable name. And then, and then, and you ended making a const decleration or whatever. So, that's the pipe operator. It's at stage 2 right now. -JSC: Next, I would like to look at `function.pipe`. If you could hand their function, pipe is a really simple, is a really simple proposal. This isn't syntax. 
It's just a function or a set of functions if you will. It works on unary functions only so it allows you to chain unary functions. A lot of people in the community have wanted this. If some of you may recall that the pipe operator has gone through a long history of going back and forth between emphasizing tacit unary function calls. So this, because a lot of people like creating pipelines based on unary function calls. So this a simple API that does it without syntax. It's simple to it's simple to do, write, but it's also simple simple to use. and it would accommodate quite a few coding Styles without necessarily encouraging constructing, a lot of unary function calls, which I know some Representatives have been concerned about. So it's really simple. You the first argument to the pipe function is the initial value and then all the rest of the arguments are unary functions, and they get called consecutively on the value the result. There was written the pipe function Returns. The result of the Last function, it's basically function composition, but you're applying. There's `function.flow`. There, might like that's also in the proposal, which is just function composition without immediately applying the for the, an initial value. It creates a function call back. So as you can see function function.pipe and function, the pipe operator overlap, but, you know, overlap in a In a way that is simple and is arguably small. Yeah, you can do it through syntax, but it's also nice to have a function that that works on unary functions. the examples they're marked with FP, that's `function.pipe`. There's an abbreviation next to each title and po that stands for pipeline operator, that that's an example with pipeline operator. So there are two pairs of examples there that are equivalent. There's also an open area of overlap there that I will get to when I talk about extensions. If we could go to bind this. +JSC: Next, I would like to look at `function.pipe`. If you could hand their function, pipe is a really simple, is a really simple proposal. This isn't syntax. It's just a function or a set of functions if you will. It works on unary functions only so it allows you to chain unary functions. A lot of people in the community have wanted this. If some of you may recall that the pipe operator has gone through a long history of going back and forth between emphasizing tacit unary function calls. So this, because a lot of people like creating pipelines based on unary function calls. So this a simple API that does it without syntax. It's simple to it's simple to do, write, but it's also simple simple to use. and it would accommodate quite a few coding Styles without necessarily encouraging constructing, a lot of unary function calls, which I know some Representatives have been concerned about. So it's really simple. You the first argument to the pipe function is the initial value and then all the rest of the arguments are unary functions, and they get called consecutively on the value the result. There was written the pipe function Returns. The result of the Last function, it's basically function composition, but you're applying. There's `function.flow`. There, might like that's also in the proposal, which is just function composition without immediately applying the for the, an initial value. It creates a function call back. So as you can see function function.pipe and function, the pipe operator overlap, but, you know, overlap in a In a way that is simple and is arguably small. 
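(A minimal sketch of the overlap being described, written with hand-rolled stand-ins rather than the proposed `Function.pipe` and `Function.flow` themselves; nothing about the real API should be inferred from this.)

```js
// Roughly the semantics described: pipe applies unary functions left to
// right starting from an initial value; flow composes them into a callback.
const pipe = (value, ...fns) => fns.reduce((acc, fn) => fn(acc), value);
const flow = (...fns) => (value) => pipe(value, ...fns);

const double = (n) => n * 2;
const increment = (n) => n + 1;

pipe(3, double, increment);            // 7
const doubleThenIncrement = flow(double, increment);
doubleThenIncrement(3);                // 7

// The pipe operator (stage 2) covers the same ground with syntax and a
// topic reference; the token is still being bikeshedded:
//   3 |> double(#) |> increment(#)    // 7
```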
Yeah, you can do it through syntax, but it's also nice to have a function that that works on unary functions. the examples they're marked with FP, that's `function.pipe`. There's an abbreviation next to each title and po that stands for pipeline operator, that that's an example with pipeline operator. So there are two pairs of examples there that are equivalent. There's also an open area of overlap there that I will get to when I talk about extensions. If we could go to bind this. JSC: The bindThis, operator Works in a different space, but if it overlaps in certain ways with the pipe operator, and with some other operations, the function.prototype.bind call method. Is very very common people use.call in various ways it for various reasons. It's a and I'm talking about binding the `this` value to that this receiver. for a function, you know as everyone knows in JavaScript. Every function has a this binding, a this Dynamic binding and they use it in various ways. So sometimes you want to conditionally, flip between two methods, and you need to use a certain `this` value on them, or sometimes people will use array prototype, methods or object prototype methods, and they applied that they need to apply them to certain this values. `.call` is very common. We you can view our methodology. We did a corpus analysis. You can see it on the purple on the GitHub repository for this proposal. We exclude a transpiled code, even with that `.call` is more common than `.push`. Its more common that than dot set for the various datasets you have in the 10,000 most popular. Unloaded npm packages at least up 2018 or whatever.call is really common but doc calls clunky and it's clunky for two reasons order and verbosity word order would also, you know instead of having a `receiver.method` and then arguments, you to do `method.call( receiver)`, and then the arguments, that's very clunky and very A verbose. So, and for a very common method, like `.call`, it Maybe we're optimizing for in the language. Furthermore. There's a it also would be good. Syntax would also allow us to not have to depend on, on the specific function. It wouldn't be forgable. It wouldn't be modifiable in the global. In the, in the function,.prototype of global. So, so, you know, so there you see. Receiver double colon to double pulling can be byte. Shedded owner dot method creates creates a bounded function and then you can call it as usual. Yes. Sure. Yeah. So so there ‘BT’ is bind this and you can see examples where beat the PO examples, the pipeline operator examples solve also solve the order. It's just more verbose, the ‘ex’ examples. There are for the extensions proposal, which I'll talk about next to the bit. So like for instance, if you have, if you want to call with one argument on your receiver, `receiver::owner.method`, that method and, and you can supply an argument and and this would this would optimize for the very common use case of using `.call`. Now this gets a little more complicated when you consider another proposal, an earlier proposal called extensions. JSC: Extensions is a stage one. Proposal from JHX. It. It does a couple things it. Introduces several new syntax has involving either methods property descriptors. So fine, but we're bind this. When you omit arguments in the function, call. It creates a bounded function. Whereas extensions does something slightly different instead. 
It does something with property descriptors, where it assumes that if you omit the call arguments in parentheses after a method-like access, you're trying to use a property descriptor's getter. So if you extract a property descriptor from some owner object and you want to apply its getter to a receiver, you can do that with extensions. You can do that with bind-this too, or with the pipeline operator; it's just that you have to use the `.get` method. You can also do this with setters, so it allows you to use assignment syntax. The logistics of the setter syntax TAB and I are not quite sure of yet, but the goal is to allow applying property descriptors to arbitrary receivers through assignment syntax. This isn't a common use case right now, but the idea is that it would become more common. With that box on the right, bind-this, extensions, and the pipeline operator all overlap, with varying levels of conciseness, but they all have the same word order at the very least. With the pipe operator you can just use `.call`; with bind-this you can use syntax without having to use `.call`, so it also doesn't depend on Function.prototype; and with extensions, if you have a property descriptor object you don't need to say `.get`, and you can also use assignment syntax for `.set`. Extensions also comes with a couple of other things, like the special extraction syntax in that column on the right. This is a statement-level thing that uses an import-like syntax to extract property descriptors from owner objects, which is the same as using Object.getOwnPropertyDescriptor. It also has a polymorphic syntax that doesn't involve the `this` binding at all. It's a little complicated, but perhaps explainable verbally: there's a ternary form that allows you to specify the owner object explicitly, and depending on whether that middle operand, the owner object, is a constructor or not, it behaves differently. If it's a constructor, it uses the constructor's prototype to extract the method and binds it; if it's not a constructor, it assumes it's a static function that doesn't use `this` at all. -WH: Looking at your chart, I don't have all of the various proposals swapped in at the moment. So I'm having a hard time figuring out what some of these things actually do. It might help if you also listed how you would write something in existing ECMAScript. Or what the thing does. +WH: Looking at your chart, I don't have all of the various proposals swapped in at the moment. So I'm having a hard time figuring out what some of these things actually do. It might help if you also listed how you would write something in existing ECMAScript, or what the thing does. JSC: Yeah, I would be happy to modify the diagram to have that. In the meantime, in this meeting, did you want me to go more slowly, step by step? @@ -280,11 +279,11 @@ JSC: So like so, you know, WH if you a look so think about we have Real world ex WH: That's not quite what I was after in asking that question. I understand the motivation for a pipe operator, but for some of the extensions things, some of the — -JSC: Oh, you mean the other proposals, okay. +JSC: Oh, you mean the other proposals, okay.
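(For readers trying to keep the overlapping proposals straight, a rough side-by-side of the shape each one gives the same `.call` use case; the `::` and `#` tokens are proposal-stage syntax and may change.)

```js
// Today: .call flips the word order, so the method comes before its receiver.
const { slice } = Array.prototype;
const arrayLike = { 0: 'a', 1: 'b', 2: 'c', length: 3 };

slice.call(arrayLike, 1);          // ['b', 'c']

// bind-this proposal (syntax not final):
//   arrayLike::slice(1)
//
// pipe operator proposal (topic token not final):
//   arrayLike |> slice.call(#, 1)
```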
WH: Yeah, I don't know why these are interesting use cases, and specifically, because of grammar, these things will often still be nested. You cannot put an arbitrary expression in front of `::` and have it parse. So it'll still be nested. -JSC: So I am not the champion for the extensions, proposal JHX. That's why I was hoping he'd be here and if you are here, JHX feel free to add yourself on the queue, but I could, I could do my best to explain for JHX. I am the champion of combined business tax is present. to WH about some of the. Well, what am I? Could you point out some of the specific use cases, you'd like hacks to, to, to elaborate on like, like, are you, are you would you like him to elaborate on property? Descriptors or Y or the polymorphism between Constructor? Not Constructor objects or whatever, or are you just confused about how it works, how extensions works? +JSC: So I am not the champion for the extensions proposal, JHX is. That's why I was hoping he'd be here - and if you are here, JHX, feel free to add yourself to the queue - but I could do my best to explain for JHX. I am the champion of the bind-this syntax being presented. To WH: could you point out some of the specific use cases you'd like Hax to elaborate on? Would you like him to elaborate on property descriptors, or on the polymorphism between constructor and non-constructor objects, or are you just confused about how extensions works? WH: I'm confused. So let's pick the example that’s now in the lower right of the screen. I don't see why you'd want to write it in the Ex form rather than as just a regular plain old function call. I don’t see the motivation for including that use case. @@ -292,7 +291,7 @@ RPR: One thing. I just like to check here is we're halfway through the time box JSC: So, I would like to limit explaining the details of the proposals to five more minutes overall, so we could give another minute to extensions. JHX, if you could be really brief about it - please be brief. -JHX: Yeah, design. So extension. Actually is not that, that flow, it can be for that flow. Just let me cause the, it's a method essentially To just like a method. The only difference the normal methods is look up from the receiver, but the extension method of the, you get the extension methods from the Declaration. or it's look up the method of from the []? here. So, it can be used as a book for that flow. It just because we currently could use the method chain as for that. So in this example, in this is specific example, it could be used as that, but don't think it's, you couldn't not you, if you don't want that, you could not use as that. +JHX: Yeah. So extensions actually is not about data flow, though it can be used for data flow. It's essentially a method, just like a normal method. The only difference from normal methods is that a normal method is looked up from the receiver, while an extension method is looked up from the declaration, or the method is looked up from the []? here. So it can be used for data flow, just because we can currently use method chaining for that. So in this specific example it could be used like that, but if you don't want that, you don't have to use it like that.
So anyway, I are trying to give updates of extenions in next meeting. So maybe I can explain more in next meeting. Thank you. JSC: Yeah, okay, so that sounds good. I will say that extensions and also bind-this. Have been called out like anything involving. `this` bindings. Have been out of concern that their overlap with the pipe operator And with also with a partial function application have been called out of concern because they can be used in this way and they may encourage use in this way. It's such that may encourage use to use in these more linear Expressions. Sorry, WH that we don't want to really get into the nitty-gritty of like this the like exactly how each thing works, but I do want to emphasize that. Yeah each, it's true. each proposal. It's true. What JHX said the extensions proposals main purpose isn't for data flow. and that's also true of bind this and also true a partial function application. However, they have been called out as as concern because they do overlap. They can be used in overlapping ways. So it's important to examine how they overlap and how they don't overlap like if their purpose isn't data flow, then should we change them to overlap less, @@ -322,13 +321,13 @@ RPR: Thank you very much. So we will continue with this topic after lunch. YSV: The reason that I am concerned about the image that we see here is because what we have with pipeline is a sort of super-powered proposal which, you know, it, as I see pipeline it is allowing functional programming to utilize one of the benefits of object-oriented programming, which is chaining and it does so in a very natural way, but we see it being sort of extended into this space: of what is object oriented programming, the bind this operator? Now we have the argumentation that this is about improving the word order, but I believe that for object-oriented programming the word order and The ordering of operations isn't really a problem. And that it's either not improving the situation in that case, or it's making it worse. That said, I think that there is an additional issue I'd like us to think about as language designers, which is that constraints are a source of creativity and a source of thinking harder about problems. So, one thing that I liked about the previous version, the simple version of the pipeline operator, was that it was more constrained. It didn't have as many superpowers, As this does specifically it only accepted function calls and even in the most minimal, and especially in the most minimal version. It did not even accept await. Now, I want to highlight one feature of pipeline, which is that it allows you to change the underlying type that you're working on and it even encourages this because it is part of the syntax. So for example, you can go from a list and immediately from that list. It is supported that we go to, you know an element of that array or some other very different data type. There's no encouragement to split this into two separate variable names, which I believe had, you know, people could, of course. So one discussion we had is that people can write whatever kind of ugly JavaScript. They like JavaScript is a very provisional language. It has a lot of flexibility. However, continuing to add more ways in which the language is provisional isn't necessarily a benefit. There comes a cost with too much freedom in the language and this is my fundamental concern that we see here. That said, I have not been blocking pipe. 
I just want us to think about this and to think about why we would add a feature that has so much flexibility when in fact constraints may benefit us. The other thing is that I would say if we include this into the language, then I would argue against MM’s point that we shouldn't include any other proposal from those listed here in particular. I believe that their that while pipeline covers, the use case of bind this or some form of it, and it doesn't have to have new syntax attached is an independently useful concept for programmers. working the object-oriented Paradigm. That pipeline doesn't solve very well. And I think that's it. -JSC: thanks YSV, So just just going really quickly backwards. I'd encourage everyone to take a look at TAB’s call this proposal to, which is like bind-this except Slightly different and it doesn't overlap with pipe at all, but it solves the use case that you of making dot call the really, really common method. the really, really common object oriented method much more ergonomic. I'd encourage everyone to look at TAB’s call this operator proposal, at stage zero, Although our JHD has argued that that we could just change the bind-this operator to be like, call this. and it and the problem space has already achieved stage one, whatever. We want. next thing is that it is true that you can obfuscate code by hiding data type changes. When you, when you step through a transformation, I would point out that's not quite unique to the pipe Operator, but is in fact present. I know I told you this already. It is present with dot a method chains and it is present with sequential function calls, like including unary function syntax. Anyway, I don't think that's quite unique to the current pipe operator before that is a more fundamental issue of prescriptivism versus universalism. I think SHO might have a topic to touch on that, maybe in any case, it's 5 operator. Tries not to be prescriptive whether that's a good or bad thing is, is, you know, can be the plenaries’ opinion. It is certainly true that people can write that difficult to read JavaScript code with the pipe operator, or with any, or with dot method calls, or with whatever and not split stuff into variable calls. That's just, yeah, that's a fundamental skill. That programmers need to have and some programmers of them. Unfortunately, don't whatever you want to add it to the language. It depends on the benefits of fluency with like function calls, outweigh the risks, but either way, I don't see a way to fix this while allowing an ??? function calls without Arrow functions and await which I know isn't a big deal to YSV. You'll be upload like a in any case scenario function calls. I don't see a way to fix that, at least without partial function application, which has is running into it’s own problems. +JSC: thanks YSV, So just just going really quickly backwards. I'd encourage everyone to take a look at TAB’s call this proposal to, which is like bind-this except Slightly different and it doesn't overlap with pipe at all, but it solves the use case that you of making dot call the really, really common method. the really, really common object oriented method much more ergonomic. I'd encourage everyone to look at TAB’s call this operator proposal, at stage zero, Although our JHD has argued that that we could just change the bind-this operator to be like, call this. and it and the problem space has already achieved stage one, whatever. We want. 
Next thing: it is true that you can obfuscate code by hiding data type changes when you step through a transformation. I would point out that's not quite unique to the pipe operator but is in fact also present — I know I told you this already — with dot method chains, and with sequential function calls, including unary function syntax. Anyway, I don't think that's quite unique to the current pipe operator. Beyond that is a more fundamental issue of prescriptivism versus universalism; I think SHO might have a topic to touch on that. In any case, the pipe operator tries not to be prescriptive — whether that's a good or bad thing can be the plenary's opinion. It is certainly true that people can write difficult-to-read JavaScript code with the pipe operator, or with dot method calls, or with whatever, and not split stuff into variables. That's a fundamental skill that programmers need to have, and some of them unfortunately don't. Whether you want to add it to the language depends on whether the benefits of fluency with function calls outweigh the risks. But either way, I don't see a way to fix this while allowing ??? function calls without arrow functions and await — which I know isn't a big deal to YSV. In any case, I don't see a way to fix that, at least not without partial function application, which is running into its own problems.

SHO: So, thank you, JSC, for doing this. I did want to address a little bit YSV's point about the pipeline operator. To me, it does not vary much, which is why it seems like it does a lot, right? It is a lightweight thing that fits in with expressions as they exist — and I personally have a lot of issues with bind-this, you know — but more to the point, I think the problem with this discussion is that on the one hand we can talk about the specific proposals in front of us and whether we like this proposal, and on the other hand there's the bigger question of how we group those proposals, right? So I think the question of universality versus prescriptivism that JSC brought up in the article is a very good framing for this: which is more desirable? That's an open question. Likewise, you can look at foregrounding `this` versus not foregrounding `this`, right? Some people's issues with bind-this include that it pulls free-`this` functions — something that people find very confusing — further forward into the language; is that a good choice, right? That's one way we can look at it. We could also look at the concrete proposals, at the usage of each of the items, and discuss whether the actual places that these various proposals would be used overlap in fact. And I think it makes sense somewhere to draw lines — we can't have everything — but if we're not talking about the bigger reasons we're drawing those lines and coming to some consensus on that, then we're drawing lines based on timing. And it certainly isn't fair to say, well, if you wanted more FP support, I guess you should have gotten all those proposals in before people improved classes, right? That's not a desirable way to make decisions going forward. I mean, it doesn't seem like a way to get to a good and useful language.
Overall, I think it would be useful for us to spend more time talking about the broader goals and coming to some consensus on that. Because otherwise, any time you take a group of proposals and try to pick which one is the most important, it's just going to depend on the particular framing you picked up at the time. And that's it. Thank you.

JSC: How should I try to arrange an [ad hoc meeting at the end of plenary](https://github.com/tc39/incubator-agendas/blob/a7ba2259b8874b0adc9d30d60b4f1db0c6f0db42/notes/2022/01-27.md)? Like, would it be overflow time, or do I have to arrange something outside of plenary? How should I work it on the fourth day?

-RPR: I mean, how what we can I'll tmu and we can work ahead and set that up, but he'll be simplest if we just reuse the meeting time that we would have used mark.

+RPR: I mean, we can work ahead and set that up, but it'll be simplest if we just reuse the meeting time that we would have used.

JSC: Okay — thanks Waldemar, Richard, and Justin. I know you had items; I'd encourage you to try to show up for that overflow time. Okay. Can we just wait for the note takers to catch up with the queue, so we can put it in the notes? Sure. Yeah, let me know, Robin, when done. And thank you everyone for participating in the discussion. Depending on overflow, I may present this again next plenary, because I think we need to talk about this more.

@@ -348,11 +347,11 @@ SYG: Okay, I will ask a concrete question then and I would, sorry to put you on

RW: I'm excited that SHO independently came to that same, I think intuitive, conclusion. That's actually what we've been doing for years, which is to harass — I mean nicely harass — the proposal champions into reviewing the tests for their proposals. That is, we maintainers have put in a concerted effort since at least 2016/2017, which is basically when the process document was really solidified and firmly in place. At first we had hoped that proposal authors, like LEO said, would just write their own tests, but it wasn't that good — and that's fine, because some people love to write tests and some people don't. What we did do was make sure to verify with one or more of the following type of person: a champion, a spec author, or an individual who has expressed interest in a given proposal and has put in substantial time and effort but maybe isn't part of the champions group, yet has demonstrated a breadth of knowledge of the given subject area — for example André Bargull, right? What a hell of a test reviewer and author. Oftentimes we will reach out to subject matter experts. So for example, if somebody wants to write tests for regular expressions, I will always reach out to MB — I don't want to review regular expression tests, that's just not my strength; he is good at that stuff, you could say an expert. So: one or more people from that general group of authors and champions, or deeply invested contributors in that subject, or general domain experts, sign off. Would I like to see a formalized process for that? Yes, absolutely. Not sure what that looks like, but 100%, I would love to see that become a requirement. I also picked up on a thing that you mentioned in there.
I think you placed it in the wrong place, but it's okay — your intention was right. I think you were actually talking about entrance criteria for stage 4, so never mind, you're right. There is an issue that exists, which everybody knows, which is that when something becomes stage 3 the race is on to write tests for the feature before implementers start turning on the implementation, because it would be better for them to have something to test against. And if that particular feature is not prioritized, say by contributors sponsored by the entity proposing it, then the tests might not get written and exist before implementation starts, and you're kind of working backwards, and it just sucks. Trust me — in like the decade, almost two, or whatever, I've learned that it sucks. So it would be cool if there was a stage two and a half. I've always wanted a stage two and a half where, basically, stage 3 implementations shouldn't even really be allowed to start until there's some kind of test material. But that brings us to this other problem, right, that you also called out: oftentimes we start writing these tests — frequently in the past, like the day after a TC39 meeting is done, we would have a meeting: all right, what things are stage 3? Let's prioritize those. And we start writing tests and frequently there is no implementation, which can be a wild ride — like iterator helpers; I don't think it was actually stage 3 when we started working on that, and we're trying again. Things like that are where I was really feeling like I wish there was a stage 2.5. Not sure what that looks like, but these are formal processes whose absence I have personally felt the pain of, throughout the entire time. So yeah, as far as having a formal pipeline — and again, I'm going to hop back in the call stack to what you were saying — I am excited that Sarah came to the same conclusion independently. It speaks volumes that the need is there for proposal champions, the champions group, to have some responsibility for signing off on the tests for the thing they are creating.

-SYG: Thank you. I yes, enthusiasm for Andre Bargull is shared by all. and I was not so I didn't want to come to this with a change in process already formulated because I sure if there would be any concern or opposition in adding more process specifically for tests, I guess in, before we continue with the queue. At the end of this discussion. I think I would like consensus or not on I guess elevating test 262 status and incorporating it more into the staging process itself to get ahead of Some of the implementation pains that I have felt over many years at different companies.

+SYG: Thank you. Yes, enthusiasm for André Bargull is shared by all. I didn't want to come to this with a change in process already formulated, because I wasn't sure if there would be any concern or opposition to adding more process specifically for tests. I guess, before we continue with the queue: at the end of this discussion I think I would like consensus, or not, on elevating test262's status and incorporating it more into the staging process itself, to get ahead of some of the implementation pains that I have felt over many years at different companies.

SYG: Okay, and with that, let's continue with the queue.
Next up is LEO -LEO: Yeah, first of all, to connect it to the main point is the main thing. You pointed out. I am supportive today. I am in no way objecting to this main question. I was thinking it always seems like I need to bring some context that test 262 has historically been a thing where people use it and people has a lot of opinions over it and unfortunately, not too many people get involved directly at test262 by many reasons like time available and everything. I'm trying to collect ideas as well because I'm not putting this as a problem like my perspective, my historical experience with many others. Many champions. Is that like some of them get involved in a very healthy way that I actually appreciate some of them are, somehow like not getting too much involved, but appreciative of the work. And there are people unfortunately, that just goes like, ‘I don't care’ with like a massive ‘I don't care’ to the point like where we reported like some tests were ready and they were not different to that person. like regardless as they could ignore them. The work that we did like that, the work that the are people paid for. That's just x 2 is mostly writing tests reviewing test, test. But maintenance is the burden that comes bundled with. Where there is a lot of things here and there that's just a lot of things are Legacy and people would love to change that. and a lot of people like to voice how much we would like to change that test262, especially to make it compatible with an implementation X or Y, but being the support for the maintenance. Means you're going to get this communication from these many people. People who say, it's like we should bring this bundle to test262. We should modify these requirements for test262. We should these requirements to test 262, although, these opinions, they conflict. and the level of conflicts of these opinions are so high, that always like, got to the end where I try to work around these and find a working point, which I believe, I it successful because that's 262 is still works for many of us. but people feel like we are gatekeeping and this maintenance causes a lot of burnout. and it's so, so bad. I just hope people understand to the point, like, when we say that's test262, should improve that should be better on this and some of the time it's things like people have already discussed. I'm glad it's in the Plenary Now, because most of the time it doesn't fit business time to be in the TC39 plenary, but people like to change a lot of things. Unfortunately. We cannot change all of them and we can when we cannot change that, we got to the point who, no matter how we work in good faith. We're like we have some new proposals, new Champions getting involved and say, yeah, should change the whole structure of test262 to fit my test in, This happens. It's not like one person. It happens. This happened in the past and I was always in this spot that I like, it always felt somehow in a spot in to work with this, but I if I really get entertaining seeing how this project makes its own through, like how this project is Well done. I always act in good. Good faith. I know a lot of people recognize that, I know and but the voice of the people who doesn't care about that and they just want their own specific business thing on the project, but cannot understand the total of how many people use tests262 is hard. So bringing a thing like this and SYG. Let me make it clear. It was always a pleasure to work with you. 
and I'm not putting you in this spot because like you are the person who I always had the pleasure to work with, a lot of other people here as well. But also like this, this is the thing that happens. People cannot see that and it's unfortunate because like when we try to say a thing or not cannot happen, it's because we know I'm pretty sure Igalia is going to face that. I'm not being involved with the project anymore, but I'm pretty sure you Igalia is going to see that. I'm done speaking. Thank you. +LEO: Yeah, first of all, to connect it to the main point is the main thing. You pointed out. I am supportive today. I am in no way objecting to this main question. I was thinking it always seems like I need to bring some context that test 262 has historically been a thing where people use it and people has a lot of opinions over it and unfortunately, not too many people get involved directly at test262 by many reasons like time available and everything. I'm trying to collect ideas as well because I'm not putting this as a problem like my perspective, my historical experience with many others. Many champions. Is that like some of them get involved in a very healthy way that I actually appreciate some of them are, somehow like not getting too much involved, but appreciative of the work. And there are people unfortunately, that just goes like, ‘I don't care’ with like a massive ‘I don't care’ to the point like where we reported like some tests were ready and they were not different to that person. like regardless as they could ignore them. The work that we did like that, the work that the are people paid for. That's just x 2 is mostly writing tests reviewing test, test. But maintenance is the burden that comes bundled with. Where there is a lot of things here and there that's just a lot of things are Legacy and people would love to change that. and a lot of people like to voice how much we would like to change that test262, especially to make it compatible with an implementation X or Y, but being the support for the maintenance. Means you're going to get this communication from these many people. People who say, it's like we should bring this bundle to test262. We should modify these requirements for test262. We should these requirements to test 262, although, these opinions, they conflict. and the level of conflicts of these opinions are so high, that always like, got to the end where I try to work around these and find a working point, which I believe, I it successful because that's 262 is still works for many of us. but people feel like we are gatekeeping and this maintenance causes a lot of burnout. and it's so, so bad. I just hope people understand to the point, like, when we say that's test262, should improve that should be better on this and some of the time it's things like people have already discussed. I'm glad it's in the Plenary Now, because most of the time it doesn't fit business time to be in the TC39 plenary, but people like to change a lot of things. Unfortunately. We cannot change all of them and we can when we cannot change that, we got to the point who, no matter how we work in good faith. We're like we have some new proposals, new Champions getting involved and say, yeah, should change the whole structure of test262 to fit my test in, This happens. It's not like one person. It happens. 
This happened in the past and I was always in this spot that I like, it always felt somehow in a spot in to work with this, but I if I really get entertaining seeing how this project makes its own through, like how this project is Well done. I always act in good. Good faith. I know a lot of people recognize that, I know and but the voice of the people who doesn't care about that and they just want their own specific business thing on the project, but cannot understand the total of how many people use tests262 is hard. So bringing a thing like this and SYG. Let me make it clear. It was always a pleasure to work with you. and I'm not putting you in this spot because like you are the person who I always had the pleasure to work with, a lot of other people here as well. But also like this, this is the thing that happens. People cannot see that and it's unfortunate because like when we try to say a thing or not cannot happen, it's because we know I'm pretty sure Igalia is going to face that. I'm not being involved with the project anymore, but I'm pretty sure you Igalia is going to see that. I'm done speaking. Thank you. SYG: Thank you, LEO. I am. sorry to hear the difficulties You have faced in the past. test 262, being a TC39 project. I think it should be in scope for the code of conduct committee and enforcement of code of conduct violations. If not in the past, certainly going forward. If there's unbecoming behavior in treating the maintainers group. @@ -370,11 +369,11 @@ SYG: Thank you, SHO. yes, I think I agree the stakeholders question is an import SRU:: Hi, we wanted to contribute our generated tests that revealed bugs in specifications and JS engines to Test262, but it was not clear how to do that nor whom to ask for a process. Also, we were wondering whether it would be helpful for proposal writers if we can generate an extension of an existing interpreter with a proposal. So that the proposal writer can test their proposal, using their own tests and then check the validity of the proposal and submit the tests to Test262. -SYG: That would be the best thing since sliced bread. If that was possible. +SYG: That would be the best thing since sliced bread. If that was possible. -RW: I've heard twice now that about this idea that it's unclear how to add tests to test 262, frankly. I think that that's a question that should be proposed to GitHub. if that is unclear, because GitHub is where the tests are and we rely on github's Tools that they have, you know, in a we're just using a regular public repository for you know, you open pull requests. So if it's if it's unclear how to open a pull request that is Not something we can change the GitHub website. But that's all. that's all I've expected. And With regard to like, what's expected of tests. The contributing document is thorough in the interpreting document, which is useful for off is not geared towards authors. It's actually for anybody who's writing a runner. So consumers, but it is useful for informing test authors in how their tests will be executed. So between the two, you could internalized the information and basically create like a Testing runtime in your mind from the information there and we actually use the contents of the interpreting file is what the version number of tests262 is predicated on as it turns out. So, the only time we bump the diversion number of tests 262, is when there is ever a change in, how tests are expected to be interpreted by consumers. Because consumers, I think this goes to the stakeholders question. 
Honestly, anyone, any entity that is Consuming test 262. For the purpose for any purposes that, you know, you don't actually have to consume test262 to validate with a JavaScript, runtime fact, we know many use cases that just use it as large blobs of JavaScript that make for great test fixtures. That is a valid use case. Moving on, the directions to those files are actually right on the test uses to repository readme. I would the primary stakeholder The Entity to which tests 262 ultimately answers to is Ecma and I guess TC39 by way of, as test262 is actually part of the technical report. That is part of a set of documents, which I usually remember off the top of my head and can not apologize for that. But all the information is argue about is about the about the technical Point 104. And which is included under the Ecma 414 suite of specifications. So that's our primary. That's the answer to the exact day, when

+RW: I've heard twice now about this idea that it's unclear how to add tests to test262. Frankly, I think that's a question that should be posed to GitHub if that is unclear, because GitHub is where the tests are and we rely on GitHub's tools — we're just using a regular public repository where you open pull requests. So if it's unclear how to open a pull request, that is not something we can change on the GitHub website. With regard to what's expected of tests: the contributing document is thorough, and the interpreting document — which is not geared towards authors, it's actually for anybody who's writing a runner, so consumers — is still useful for informing test authors about how their tests will be executed. So between the two, you could internalize the information and basically build a testing runtime in your mind. And we actually use the contents of the interpreting file as what the version number of test262 is predicated on, as it turns out: the only time we bump the version number of test262 is when there is a change in how tests are expected to be interpreted by consumers. Because consumers — I think this goes to the stakeholders question — honestly, anyone, any entity that is consuming test262, for any purpose: you don't actually have to consume test262 to validate a JavaScript runtime; in fact, we know many use cases that just use it as large blobs of JavaScript that make for great test fixtures. That is a valid use case. Moving on, the directions to those files are actually right in the test262 repository readme. I would say the primary stakeholder, the entity to which test262 ultimately answers, is Ecma, and I guess TC39 by way of that, as test262 is actually part of a technical report — part of a set of documents which I can't recall off the top of my head, and I apologize for that — but the relevant one is the technical report, TR/104, which is included under the Ecma 414 suite of specifications. So that's our primary. That's the answer to the stakeholder question.

-SYG: I don't think SHO meant stakeholder death since the legal structure here.

+SYG: I don't think SHO meant stakeholder in the sense of the legal structure here.
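As a rough illustration of what RW describes — a contributed test is a plain script with YAML frontmatter, reviewed via an ordinary GitHub pull request — here is a minimal sketch of a test262-style file; the specific `esid`, feature name, and assertions are illustrative placeholders, not taken from this discussion:

```js
// Copyright (C) 2022 the contributor. All rights reserved.
// This code is governed by the BSD license found in the LICENSE file.
/*---
esid: sec-array.prototype.at
description: >
  Array.prototype.at reads elements relative to the end of the array when
  given a negative index. (Illustrative example only.)
features: [Array.prototype.at]
---*/

// Harness helpers such as assert.sameValue come from the test262 runner
// (see CONTRIBUTING.md and INTERPRETING.md); tests are plain scripts.
const arr = [1, 2, 3];

assert.sameValue(arr.at(0), 1);
assert.sameValue(arr.at(-1), 3);
```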
RW: I agree that that actually took the words right out of my mouth a dead singer said, That was where I was going next, which is I don't think that that's the kind of stakeholder buy-in by think that primarily we’re talking about implementers. I believe, I've always believed to be the primary. The primary stakeholders in the sense that as consumers of test 262, they have the most on the line or test your 16, being stable, reliable and economical. implementers of JavaScript tools within implementers JavaScript, tools being active. so tools, write it down JavaScript for that includes, includes. Static analysis. Analysis compilers. And, and I think that there's two primary stakeholders with regard to Consumers. I think there is a group of stakeholders that are is important to that are sort of ephemeral or maybe not. I'm thinking in terms of like proposal authors and the spec editors are also stakeholders. It's important that test 262 to actually reflect what the spec authors. And when I say spec authors that also from now on that includes proposal authors, because they're writing spec too. It's the most important thing that test262 to be representative of what's right now. Like from, So, I would say that. you know, a 10,000-foot view. Those are the largest groups. And there are, of course, smaller subsets, and then like smaller sort of Fringe groups there. But those are definitely the larger groups. Frankly, I don't think we have that written down somewhere. There should be a markdown file. That says all of that. It is definitely important. Would be nice to be able to point to that when people are saying. can't we just use mocha. No. you know? Why? because how does that serve these groups and point to document? That would be fantastic. Yeah. Again, I think a lot of this, there's a like, a lot of domain knowledge. knowledge, and I think SHO is right the money that needs to be like pen to paper. @@ -388,7 +387,7 @@ SYG: in my experience, many tests have not been adequately reviewed. I disagree RW: Okay, so either one, either. What I'm hearing is that you're saying during the tenure and which myself LEO and probably was not about. -SYG: Sorry, Rick, this feel like it's getting heated for some reason that I don't understand why, I am not blaming you. +SYG: Sorry, Rick, this feel like it's getting heated for some reason that I don't understand why, I am not blaming you. RW:I'm just trying to understand. What you're like, what you're trying to say. @@ -436,15 +435,15 @@ Presenter: Frank Tang (FYT) FYT: Thank you for coming. My name is Frank. Tanya phones, my man and Chinese. And today I'm going to talk to about Intl.Segmenter API version 2 to proposal for stage 2. So the motivation is that we have already a intersect B under API at 142 in stage for a couple months ago. And in this version of to proposal we were at Intl library, Grant analogy to search for high-end agnostic rib, that can handle for figure out how to do a logical linebreak point. Whenever in those contacts, they already have a way measure text with. And so, therefore they can do line layout in those conditions. -FYT: So, the problem statement, for this particular proposal, is that currently segment already support preferring break word break and sentence break. But in particular if you look at the history with web HTML5, we have browser have gone in CSS, which overcomes high-level aspect, right? So you have a tan API CSS JavaScript can have a high level line layout just construction, though, and the div will do the layout for you. 
But in the last 20 years, HTML5, canvas adding sund support API that you would do a low-level construct. They have fuel tax and scroll tags and to render piece of text on the canvas and also have major tags to measure the width. And so there is a need for those thing and there's there's other cop places like, SVG PDF, or some other condition that charged groups, the handle itself, not in a DOM context lines on the rendering condition that requires some low-level API to support, to figure out where is the opportunity of line break. So they can do that wrapping correctly. I think Richard Gibson naturally mentioned some of the usage here. And also we Find this in a lot of other places like GS P.DF. They do generate PDF file They are try to use him JavaScript. in those contacts. They don't have palm tree. They will generate the line break a PDF and this will be a very good usage for that. Currently. They have some bottle 19, which can only handle western-style routine. But a line break opportunity will help them to figure out how to do this thing. We also have a lot of requests from different places. I think one of the user mention about what webXR, and webgl, and some other 3D engine have that walk. Interesting happened. Was that we doubt that a lot of time and another motivation for doing that is V8 in before the ecma 4 at my for transforming in about 2010 ship of V8 break iterator, which have the word break, preferring break sin has break and also line break. and we really tried to obsolete. I think, but we cannot do so on until we have a standard normal implementation that can bring the parity with that and we can encourage our current user for that to migrate to the new standard API. +FYT: So, the problem statement, for this particular proposal, is that currently segment already support preferring break word break and sentence break. But in particular if you look at the history with web HTML5, we have browser have gone in CSS, which overcomes high-level aspect, right? So you have a tan API CSS JavaScript can have a high level line layout just construction, though, and the div will do the layout for you. But in the last 20 years, HTML5, canvas adding sund support API that you would do a low-level construct. They have fuel tax and scroll tags and to render piece of text on the canvas and also have major tags to measure the width. And so there is a need for those thing and there's there's other cop places like, SVG PDF, or some other condition that charged groups, the handle itself, not in a DOM context lines on the rendering condition that requires some low-level API to support, to figure out where is the opportunity of line break. So they can do that wrapping correctly. I think Richard Gibson naturally mentioned some of the usage here. And also we Find this in a lot of other places like GS P.DF. They do generate PDF file They are try to use him JavaScript. in those contacts. They don't have palm tree. They will generate the line break a PDF and this will be a very good usage for that. Currently. They have some bottle 19, which can only handle western-style routine. But a line break opportunity will help them to figure out how to do this thing. We also have a lot of requests from different places. I think one of the user mention about what webXR, and webgl, and some other 3D engine have that walk. Interesting happened. 
We got that request a lot. And another motivation for doing this is that V8, before the ECMA-402 effort, in about 2010, shipped v8BreakIterator, which has word break, grapheme break, sentence break and also line break. We have really tried to obsolete it, but we cannot do so until we have a standard implementation that brings parity with it, so that we can encourage its current users to migrate to the new standard API.

FYT: The API shape I'm trying to propose here is like this, right? Basically, you have a segmenter, and we just have a new value for the granularity, "line". You pass in, for example here, Japanese text and create it with the same API that you've used for word break, but now it will break at the line break opportunities, and the caller can call whatever routine to figure out the width of that in their particular context — usually not in an HTML DOM environment, because in that kind of environment you should just use CSS and the DOM to do that. But in usages such as SVG, WebGL, or canvas, where you don't have a DOM tree, you have to do that kind of thing yourself, because this works in a lot of contexts. There will also be an additional attribute, the line-break style, comparable with CSS, which has three different line-break styles. So this is the basic high-level definition. We have a draft spec you can take a look at, and also a V8 prototype; you can click on the link to look at that. So this is already a stage one proposal, but I want to repeat here that one of the stage criteria is to think out potential cross-cutting concerns, and one cross-cutting concern is that a line break opportunity cannot work alone without a text measurement API. As I mentioned, SVG and canvas both already have a way to measure text. Here I show the Safari HTML5 canvas guide — I think the APIs are supported by all browsers already, quoting Apple's documentation here — they have measureText, and the W3C SVG documentation also shows getComputedTextLength to measure text in SVG contexts. So the text measurement APIs are already available in those environments.

FYT: Because this will be added to ECMA-402, and ECMA-402 has additional discussion guidelines for stage 2 advancement, we have to figure out whether it has prior art, whether it is difficult to implement in userland, and whether it has broad appeal; I want to address those here. So first, for prior art: one of the prior arts is the v8BreakIterator I mentioned, which shipped about 10 years ago. About one or two and a half years ago I started to instrument what kind of usage is out there, and you can see there is certain usage of the line break granularity — there are still a lot of people using it. Here are some example sites showing that usage. The interesting thing we see is that most of them are showing visualizations, some pie chart or something, based on SVG or canvas — because otherwise they would just use HTML and CSS and the DOM to do that. Usually it comes up if you are doing your own rendering for some graph that's hard to do within HTML; they may be in some environment where they have to do this kind of thing, and you can look at more examples on those pages.
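A minimal sketch of the API shape FYT describes above, assuming the draft's `granularity: "line"` value and a `lineBreakStyle` option (both proposal details that could change before advancement); the wrapping helper is only an illustrative consumer, not part of the proposal:

```js
// Hypothetical usage of the proposed Intl.Segmenter "line" granularity.
// Each segment boundary is a line-break *opportunity*; the caller decides
// where to actually wrap by measuring widths with its own routine, e.g.
// canvas measureText() or SVG getComputedTextLength(), outside DOM layout.
const segmenter = new Intl.Segmenter("ja", {
  granularity: "line",      // new value proposed by Segmenter V2
  lineBreakStyle: "strict", // assumed option mirroring CSS line-break styles
});

function wrapText(text, maxWidth, measure) {
  const lines = [];
  let current = "";
  for (const { segment } of segmenter.segment(text)) {
    if (current !== "" && measure(current + segment) > maxWidth) {
      lines.push(current); // wrap before this segment
      current = segment;
    } else {
      current += segment;
    }
  }
  if (current !== "") lines.push(current);
  return lines;
}

// e.g. wrapText("本日は晴天なり。", 120, s => ctx.measureText(s).width)
```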
-FYT: Another part I think in some other programming environment. You'll see those kind things. APIs is Mac OS have to version of that and the low-level construct this here and Java have this API since MacOS 8 1998. I see you. I see you for Jay and have those things to summon once for more than 20 years. I your Forex, currently the rust. think one of my teammates working on that, too. So there are a lot of Prior Arts. Also, the algorithm also pretty well-known. So a Unicode standard 14 have be published for years. They using the material based on chicks 4051, which also will jointly published. I believe, in 1998. I think this is a 2004 is the second version that they also can Lundy's books 1999, already mention of loud, basic version of the algorithm. And so we also using some reference. Type of defined, by the number W3C CSS, standard Level 3. And so a lot of things, this thing LW H. PR pretty well defined spec. We are referring to that. The second thing 40G to concentration weather is difficult to incoming visualize, right? You've got something very easy to implement in userland. No need to put it here. So one big comparison, is that one of our canvas is working our canvas kit, which is basic that you want. SDK toolkit to from CSS into C++ into JavaScript and they figure out if they pay me to not have to say PID to increment it because they were needed for complex language, how to have tie or Burmese for the dictionaries is pretty big. So such compilation, if we including the whole thing while about what makeup bytes data. Tai,. Korean lat c'mere and a Burmese. And so this is a very difficult to you implement the usual and because a huge need a huge amount of data. The other way to do it is using the V8 break iterator pipeline in Chrome, but used the other way among Chrome. Chromium browser, the problems that will be harder for developer to manage but also costing a lot of page load latency for other non-Chromium browser, which segregate the web, which we don't think it's a good idea. So, it's show there's difficulty to implement in Google and because complex language breaking a lot of data. So here are the third icon is brought appear, whether this is really needed by developer. I show sound the sound hole and you get click on the link on top. Does you see? They are reply. One is from just JS-PDF and they mentioned they need a thing for that. And they mentioned emails or mention the magic packs and get computed text length in the canvas context as an EEG can be used on other. Remember Opera web also? Incheon their design, they mentioned the you can click on that, it will show you their architecture. This will fit a very important need will it's a low-level API, which a high level API cannot fulfill\l. We also have someone from figma which is a webgl, application asking for this particular the asking to us to expose unique online breaking hours at which this particular APIs design for a low-level Construct. And they believe the line-break API in this will eliminate the external library dependency and reduce their bundle size. The bundle size is important. Thing. I mentioned is our previous level usual and so we see would leave the all three criteria for fuel will have prior Arts. Arts. We have difficulty incrementally in userland, and, and have broad appeal. +FYT: Another part I think in some other programming environment. You'll see those kind things. APIs is Mac OS have to version of that and the low-level construct this here and Java have this API since MacOS 8 1998. I see you. 
I see you for Jay and have those things to summon once for more than 20 years. I your Forex, currently the rust. think one of my teammates working on that, too. So there are a lot of Prior Arts. Also, the algorithm also pretty well-known. So a Unicode standard 14 have be published for years. They using the material based on chicks 4051, which also will jointly published. I believe, in 1998. I think this is a 2004 is the second version that they also can Lundy's books 1999, already mention of loud, basic version of the algorithm. And so we also using some reference. Type of defined, by the number W3C CSS, standard Level 3. And so a lot of things, this thing LW H. PR pretty well defined spec. We are referring to that. The second thing 40G to concentration weather is difficult to incoming visualize, right? You've got something very easy to implement in userland. No need to put it here. So one big comparison, is that one of our canvas is working our canvas kit, which is basic that you want. SDK toolkit to from CSS into C++ into JavaScript and they figure out if they pay me to not have to say PID to increment it because they were needed for complex language, how to have tie or Burmese for the dictionaries is pretty big. So such compilation, if we including the whole thing while about what makeup bytes data. Tai,. Korean lat c'mere and a Burmese. And so this is a very difficult to you implement the usual and because a huge need a huge amount of data. The other way to do it is using the V8 break iterator pipeline in Chrome, but used the other way among Chrome. Chromium browser, the problems that will be harder for developer to manage but also costing a lot of page load latency for other non-Chromium browser, which segregate the web, which we don't think it's a good idea. So, it's show there's difficulty to implement in Google and because complex language breaking a lot of data. So here are the third icon is brought appear, whether this is really needed by developer. I show sound the sound hole and you get click on the link on top. Does you see? They are reply. One is from just JS-PDF and they mentioned they need a thing for that. And they mentioned emails or mention the magic packs and get computed text length in the canvas context as an EEG can be used on other. Remember Opera web also? Incheon their design, they mentioned the you can click on that, it will show you their architecture. This will fit a very important need will it's a low-level API, which a high level API cannot fulfill\l. We also have someone from figma which is a webgl, application asking for this particular the asking to us to expose unique online breaking hours at which this particular APIs design for a low-level Construct. And they believe the line-break API in this will eliminate the external library dependency and reduce their bundle size. The bundle size is important. Thing. I mentioned is our previous level usual and so we see would leave the all three criteria for fuel will have prior Arts. Arts. We have difficulty incrementally in userland, and, and have broad appeal. -FYT: During the TC39 by 2021 December meeting when we discuss your vendor stage 1. The question being asked is why not add this functionality to higher level API Like Houdini. This is an interesting question. So I did my homework before I come back ask for stage, two File request, December 16. No one responds. Actually, it is ideal with bring up in November, 2018, when the line granularity was originally, That the ones that you see. 
I and in November 2018. I think we have some discussion see whether they can bring in the houdini and that time we said we agree. Okay. Well just like, take out the lineup, maybe houdini do you even do that? Unfortunately, after 38 months, not days. Now, 38 weeks ago, thirty three months, the community who need shoulder. No interests, Don't know, concept there. No commitment. There are no research to work on this thing. So that's that's one thing here, but that's not only reason. one of our contributors from Flutter. All This is not a good idea to put that thing into houdinii and And one of the particular reason, this should be in the low level API. This is will be because it will be agnostic to rendering technology, right? And there are a lot of contacts or line breaking usage is not for rendering of for computation Houdini's. Exclusive a CSS feature. So whenever you need it in a non-CSS context, it won't work. So there's a very calm Concrete, answer from someone more working in this space to say why this is not the idea of a Houdini, but we don't stop here, right? +FYT: During the TC39 by 2021 December meeting when we discuss your vendor stage 1. The question being asked is why not add this functionality to higher level API Like Houdini. This is an interesting question. So I did my homework before I come back ask for stage, two File request, December 16. No one responds. Actually, it is ideal with bring up in November, 2018, when the line granularity was originally, That the ones that you see. I and in November 2018. I think we have some discussion see whether they can bring in the houdini and that time we said we agree. Okay. Well just like, take out the lineup, maybe houdini do you even do that? Unfortunately, after 38 months, not days. Now, 38 weeks ago, thirty three months, the community who need shoulder. No interests, Don't know, concept there. No commitment. There are no research to work on this thing. So that's that's one thing here, but that's not only reason. one of our contributors from Flutter. All This is not a good idea to put that thing into houdinii and And one of the particular reason, this should be in the low level API. This is will be because it will be agnostic to rendering technology, right? And there are a lot of contacts or line breaking usage is not for rendering of for computation Houdini's. Exclusive a CSS feature. So whenever you need it in a non-CSS context, it won't work. So there's a very calm Concrete, answer from someone more working in this space to say why this is not the idea of a Houdini, but we don't stop here, right? FYT: We have asked another question. Well, if that Houdini, whether they have a better outer place to begin, its I searched around, I feel my homework. Well, we actually find a place that rendering API maybe handle multi textile multiform by diameter line staff, staff, but there's one place called canvas format attacks. attacks. I think always remain composed by. Microsoft, they have a GitHub post there. and interestingly is after no activity after September 2021 and I post a question there. there. There's no response. that one place that could be a solution for canvas, but then we have the looking to whether that will really be the answer for what we intend we intend to propose. So, put down the thing in sound analysis. 
The First Column is the name of the thing to consider about, the second column is the V2 API, the third Columns of Houdini, the forth Collins canvas form, its text, which I just mention the valve, as you understand the Houdini a canvas format packs are high level API and they have a very particular rendering technology, depending So, one of the difficulty the Intl segment of the V2 API we’re proposing here. Intend to be in low-level API. It's like you have a tire, you can put on the Toyota, I could put on the home that you can put on the machine events, right? But if you're Houdini, or a canvas format, the API their high level constructs, they have a higher dependency. So, one of the difficulties that you put on high level API, it will not be useful for low level constructs in other places in particular as SVG as JS PDF will not be able to address. @@ -460,15 +459,15 @@ FYT: I think the issue I would like to defer to you. Is that a procedural violat SFC: TG2 to does not have does not have power over stage advancement. This meeting right here is procedurally the stage advancement. Meaning the meeting we had two weeks ago, TG2, was an opportunity to get input from some folks in TG2. I think that Frank has, you know, revised this presentation a bit since that time, so I don't see anything procedurally wrong with having his presentation here. -TAB: Yep, this is slowly falling upon MCM’s from what I could tell from this presentation. There hasn't been any work to address MCM’s concerns. You did instead go through and find it where previous efforts that MCM had cited as attempts to solve. This properly. Hadn't been significantly worked on in particular, the Houdini spec for text layout API. Which was originally on, WE Google were on the hook for that. And so we Google complaining that we Google didn't prioritize a better solution to this and so we should use a less good solution to this. Is it a great look? Ultimately, it's a prioritization problem within the Chrome team and we can solve it within Chrome team. If necessary. I don't think we should go from this hasn't been worked on because the person working on it passed away. We should do this This. Other thing instead is necessarily a jump is wanted to make here. +TAB: Yep, this is slowly falling upon MCM’s from what I could tell from this presentation. There hasn't been any work to address MCM’s concerns. You did instead go through and find it where previous efforts that MCM had cited as attempts to solve. This properly. Hadn't been significantly worked on in particular, the Houdini spec for text layout API. Which was originally on, WE Google were on the hook for that. And so we Google complaining that we Google didn't prioritize a better solution to this and so we should use a less good solution to this. Is it a great look? Ultimately, it's a prioritization problem within the Chrome team and we can solve it within Chrome team. If necessary. I don't think we should go from this hasn't been worked on because the person working on it passed away. We should do this This. Other thing instead is necessarily a jump is wanted to make here. -FYT: I believe the Houdini will not be able to possible to address the issue with endless. +FYT: I believe the Houdini will not be able to possible to address the issue with endless. -TAB: So that's my next issue and we just jump over into that. I think you are wrong about that. 
The point of the Houdini text layout API is you give a chunk of markup along with a few constraints, like what width and height, you're laying it out into whatever, whatever other information you need and it pops out a bunch of information about each text fragment giving the width height and the position that it would be positioned at. That does have a slight connection to HTML. You need to give it a fragment of document. It can definitely work with SVG in theory or you can just feed it. Well, like raw text and then position that manually with SVG if you're going in a completely DOM-less environment. Like you're just using NodeJS, I agree. That wouldn't do enough but my argument there is raw text without a DOM is not enough to format text across the world, the, the big problem here is a number of languages are hard to segment. Expensive to segment. They require dictionary support to know where you can split words in half. And so that's an expensive library to ship and having it built-in would be useful at. So, the main argument here, but knowing where to segment things is not nearly enough to actually Implement text layout in an international sense. In the simplest example, if you're mixing English and Hebrew the bidary algorithm will put things in places. You would not expect. and if you just look at segmenting information, you don't get that. You need significantly more information about actual text layout, and all of that is part and parcel of rendering text well, regardless of what your output surface is going to be whether it's going to canvass going to JS-PDF, whatever you need to know positions of text fragments as well as just break points. And that is the precise information that we had planned to offer via the Houdini text layout API, which is not explicitly CSS tied. but which works with it. But is explicitly layout tied and gives you all the necessary information to do text layout. On your own side. If you need by actually running text out as the browser would. Anything less than more it, while it help binary sizes in some cases will still not address the need that you are asking for and that's a problem because break iteration like this is purely a text layout concern, unlike the other iterator types which are semantic in nature and can be used for things other than layout. If we're going to do layout, we should do layout, right. Houdini's. One way to do it. Maybe there's other ways we could do it instead, but just addressing this won't do enough and will be an attractive nuisance for people trying to do internationalize text in the future. If they try to use this, they'll still need to use significant additional work, which requires fairly expert guidance to write, which has already been done by the browser anyway, +TAB: So that's my next issue and we just jump over into that. I think you are wrong about that. The point of the Houdini text layout API is you give a chunk of markup along with a few constraints, like what width and height, you're laying it out into whatever, whatever other information you need and it pops out a bunch of information about each text fragment giving the width height and the position that it would be positioned at. That does have a slight connection to HTML. You need to give it a fragment of document. It can definitely work with SVG in theory or you can just feed it. Well, like raw text and then position that manually with SVG if you're going in a completely DOM-less environment. Like you're just using NodeJS, I agree. 
That wouldn't do enough but my argument there is raw text without a DOM is not enough to format text across the world, the, the big problem here is a number of languages are hard to segment. Expensive to segment. They require dictionary support to know where you can split words in half. And so that's an expensive library to ship and having it built-in would be useful at. So, the main argument here, but knowing where to segment things is not nearly enough to actually Implement text layout in an international sense. In the simplest example, if you're mixing English and Hebrew the bidary algorithm will put things in places. You would not expect. and if you just look at segmenting information, you don't get that. You need significantly more information about actual text layout, and all of that is part and parcel of rendering text well, regardless of what your output surface is going to be whether it's going to canvass going to JS-PDF, whatever you need to know positions of text fragments as well as just break points. And that is the precise information that we had planned to offer via the Houdini text layout API, which is not explicitly CSS tied. but which works with it. But is explicitly layout tied and gives you all the necessary information to do text layout. On your own side. If you need by actually running text out as the browser would. Anything less than more it, while it help binary sizes in some cases will still not address the need that you are asking for and that's a problem because break iteration like this is purely a text layout concern, unlike the other iterator types which are semantic in nature and can be used for things other than layout. If we're going to do layout, we should do layout, right. Houdini's. One way to do it. Maybe there's other ways we could do it instead, but just addressing this won't do enough and will be an attractive nuisance for people trying to do internationalize text in the future. If they try to use this, they'll still need to use significant additional work, which requires fairly expert guidance to write, which has already been done by the browser anyway, -FYT: I don't want to point out that when you say you need to buy. I died too complex scripts, support supported rash is correct? The partially not biders algorithm and actually are published for 30 years, and there's already jar screwed Library handle that and I think one the fluttered Mansion also they mentioned they had show that about hop to underline the curl, which they can handle the bidi there. I was the tech lead in Mozilla in 2002 to increment by Dy was held from IBM Israel. So there are open source reference how to do bidi and complex script in Mozilla open source for more than years, years and it's nothing Right? And the bidi algorithm implementation is already there. So, you're right. You need to have a biidi for complex script. Actually are not needed. So here's one example, the left hand side is showing the V8 break iterator showing the this show in the Hebrew and Arabic. And I think the second the fourth line, Fifth Line, the right hand side is showing demonstrate and Safari Mozilla and you can see the Chinese and Japanese cannot be segmented so they have to just go over the edge. So when you say there's a complex screen issue, it is true. If you have to do is acquire frame rate, but not the line break Library internally in the library are reason. it will break that incorrectly the unit. That will not have the complex script issue. I understand your concern. 
I think that were more tied to with what's are called grapheme cluster breaking but now with language +FYT: I don't want to point out that when you say you need to buy. I died too complex scripts, support supported rash is correct? The partially not biders algorithm and actually are published for 30 years, and there's already jar screwed Library handle that and I think one the fluttered Mansion also they mentioned they had show that about hop to underline the curl, which they can handle the bidi there. I was the tech lead in Mozilla in 2002 to increment by Dy was held from IBM Israel. So there are open source reference how to do bidi and complex script in Mozilla open source for more than years, years and it's nothing Right? And the bidi algorithm implementation is already there. So, you're right. You need to have a biidi for complex script. Actually are not needed. So here's one example, the left hand side is showing the V8 break iterator showing the this show in the Hebrew and Arabic. And I think the second the fourth line, Fifth Line, the right hand side is showing demonstrate and Safari Mozilla and you can see the Chinese and Japanese cannot be segmented so they have to just go over the edge. So when you say there's a complex screen issue, it is true. If you have to do is acquire frame rate, but not the line break Library internally in the library are reason. it will break that incorrectly the unit. That will not have the complex script issue. I understand your concern. I think that were more tied to with what's are called grapheme cluster breaking but now with language -SFC: Yeah, I'll just be fairly quick speaking as a member of the Google internationalization team. We receive requests very, very often for solving this specific problem of the payloads requirement for a proper line break segmentation, because it does require the dictionary or LSTM data, etc. And it's definitely the biggest chunk of the [text layout] engine. There are other pieces of the layout engine—that is absolutely true—but the piece that is the most problematic for our clients is the segmentation piece. +SFC: Yeah, I'll just be fairly quick speaking as a member of the Google internationalization team. We receive requests very, very often for solving this specific problem of the payloads requirement for a proper line break segmentation, because it does require the dictionary or LSTM data, etc. And it's definitely the biggest chunk of the [text layout] engine. There are other pieces of the layout engine—that is absolutely true—but the piece that is the most problematic for our clients is the segmentation piece. `[call interrupted; waiting for everybody to rejoin]` @@ -498,13 +497,13 @@ SYG: You just sent a way that Node can actually implement it. It would be possib TAB: I am not gonna count on anything. I don't have a strong opinion between any specific one here. But yes, all of these, even if we make sure that using it with HTML is very convenient. It's absolutely on the cards to make sure that it works Beyond HTML. - Okay, so we are well over time for this. there is, there are a few comments on the cube, but I find it difficult to that, that we'd be able to get through those in a tiny amount of time. So, Frank, would you like to ask for consensus? Another possibility would be to have this on some sort of offline call may be incubated call. I would like to request to come either prove or the banks to stage 2. Okay, there's somebody Objective C H2 for a segmenter V2. +Okay, so we are well over time for this. 
there is, there are a few comments on the cube, but I find it difficult to that, that we'd be able to get through those in a tiny amount of time. So, Frank, would you like to ask for consensus? Another possibility would be to have this on some sort of offline call may be incubated call. I would like to request to come either prove or the banks to stage 2. Okay, there's somebody Objective C H2 for a segmenter V2. MCM: Yes. [blocking] JHD: I explicitly support, but the objection obviously holds. -USA: Okay, let's have the have a clear statement of the objection and the and the reason behind it so that the Champions can work on that. +USA: Okay, let's have the have a clear statement of the objection and the and the reason behind it so that the Champions can work on that. MCM: Sure, I have written this up in a GitHub issue. Can I just linked to that issue? Sure. @@ -522,7 +521,7 @@ Presenter: Robin Ricard (RRD) MM:. Yes. Um, so when we talk about it before, I had this rude surprise of a recently discovered security concern discussions offline and especially SYG’s point online before the offline discussions made me realize that I was thinking about the whole issue completely backward. On confining information, our whole approach, which just the sound is to deny, The By the authority to read. channels not try to suppress the writing of covert channels. And in particular to hardened JavaScript already denies access to the WeakRef Constructor the finalization registry default what denies them by default, because being able to sense garbage collection at all being able to observe it already opens a side channel. And with regard to the new WeakRef interaction we that Doubles would provide. It does not threaten the solution that we have and the the property that it does threaten was already long lost on a and that's that's all I feel like I need to say. -YSV: I want to make a clarification about what I said yesterday about SpiderMonkey. It was incorrect. We do garbage collect registered symbols as an optimization. And the way that it was described wasn't quite right. It was described as being garbage collected when the reference to the string is garbage collected, but rather, we count the references to the symbol itself. And if there are no references to it, we garbage collect it. That said we do have concerns - we ultimately do have concerns about using registered symbols as weakmap keys, because it complicates our reachability rules. At the moment using something as a key in a WeakMap or a weak set, or weakref, does not make it reachable. Like that property. Does not make reachable, but if registered symbols Were treated as valid Keys than the spec for all of these would need to be updated to make any key Target reachable. If it is Affordable. So, conversely, the description of whether something is reachable becomes needlessly complex. Registered symbols are reachable by the, by the usual rules. But if they are used as key Target in a WeakMap, or any of the weak data structures, yeah, so they will also be reachable by these rules As well as but the normal rules. So again, this is specific to SpiderMonkey because we are garbage collecting these registered symbols, but it may be interesting for others to reflect on. So there's a weird period of time after something's been removed as a weakref Target, but the JavaScript hasn't run to completion or like, you know, the micro tasks queue hasn't emptied or whatever. 
It probably won't be a problem for implementations, but it feels like it's asking for trouble in semantics if an implementation wanted to be more precise in collecting registered symbols. And that's it.
+YSV: I want to make a clarification about what I said yesterday about SpiderMonkey. It was incorrect. We do garbage collect registered symbols as an optimization. And the way that it was described wasn't quite right. It was described as being garbage collected when the reference to the string is garbage collected, but rather, we count the references to the symbol itself. And if there are no references to it, we garbage collect it. That said, we ultimately do have concerns about using registered symbols as WeakMap keys, because it complicates our reachability rules. At the moment, using something as a key in a WeakMap or a WeakSet or a WeakRef does not make it reachable. That property does not make it reachable. But if registered symbols were treated as valid keys, then the spec for all of these would need to be updated to make any key target reachable if it is forgeable. So, conversely, the description of whether something is reachable becomes needlessly complex. Registered symbols are reachable by the usual rules, but if they are used as a key target in a WeakMap, or any of the weak data structures, they will also be reachable by these rules as well as the normal rules. So again, this is specific to SpiderMonkey because we are garbage collecting these registered symbols, but it may be interesting for others to reflect on. So there's a weird period of time after something's been removed as a WeakRef target, but the JavaScript hasn't run to completion or, you know, the microtask queue hasn't emptied or whatever. It probably won't be a problem for implementations, but it feels like it's asking for trouble in semantics if an implementation wanted to be more precise in collecting registered symbols. And that's it.

SYG: I just want to +1 that there is a bit of implementation complexity for allowing registered symbols. Because the weak collection marking in all engines is one of the most complicated parts of the GC - it's ephemerons and fixed points, and it's harder to make concurrent with the mutator and to make parallel. So complicating that is undesirable. I wouldn't say it's impossible or a hard blocker, but it is definitely undesirable.

@@ -560,7 +559,7 @@ SYG: Yes, specifically. The typeof consistency versus - I think we're talking ab

JHD: Related to YSV's comment regarding usage, search on GitHub is tricky and searching codebases can easily get narrow perspectives. My experience is that it's often used in polyfills. Node core uses registered Symbols, for example for `util.inspect`; the protocol is a registered Symbol. All the browser polyfills for node core modules that almost every bundler grabs by default will use a registered Symbol for that as well. I also have a library that's used heavily on airbnb.com that's intended to mitigate bundle splitting issues, which attaches registered Symbols as properties on the global object to cache things across bundles. I don't think it really changes the discussion; I just wanted to point out there's lots of other examples beyond the ones you cited. But I agree it's a niche feature more often used by library authors than by the average developer. 
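A minimal sketch of the registered/unregistered distinction at issue, assuming the direction of allowing only unregistered symbols as weak keys and an engine that implements that behavior (the `TypeError` below is the assumed outcome for registered symbols, not settled at this point in the discussion):

```js
// Registered symbols come from the global symbol registry and can always be
// re-created with the same key, so they are effectively forgeable and never
// become unreachable:
const registered = Symbol.for("app.cache");
console.log(registered === Symbol.for("app.cache")); // true — same symbol every time

// Unregistered (unique) symbols cannot be re-created once all references are gone:
const unique = Symbol("app.cache");
console.log(unique === Symbol("app.cache")); // false

const wm = new WeakMap();
wm.set(unique, { data: 1 }); // ok under this direction — the key can become unreachable

try {
  wm.set(Symbol.for("app.cache"), { data: 2 });
} catch (e) {
  // Assumed outcome if registered symbols are rejected as weak keys.
  console.log(e instanceof TypeError); // true
}
```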
-JHD: My next item was I just want to make sure for the notes and for myself - It sounds like the three options are either, “no progress”, “all symbols are allowed”, or “only unregistered symbols are allowed”, meaning a well-known symbol and a unique symbol are both unregistered. Does that match everyone's understanding of the three paths forward here? Separate from blocks. +JHD: My next item was I just want to make sure for the notes and for myself - It sounds like the three options are either, “no progress”, “all symbols are allowed”, or “only unregistered symbols are allowed”, meaning a well-known symbol and a unique symbol are both unregistered. Does that match everyone's understanding of the three paths forward here? Separate from blocks. WH: A lot of us don't care between the options that were listed on the poll of allowing only unregistered symbols or only symbols which are both unregistered and not well-known. We haven't had that discussion between those two fully, so some people here may care. I'm not one of them, the key part for me is that registered symbols are not allowed as keys. @@ -588,7 +587,7 @@ JHD: Aren't you doing both? SYG: Well, yeah, but it's the false to True is the one that would be web breaking. I guess you to go down a different path in your code, right? It would suddenly change the code that you shipped. It will take a different path right? -JHD: So that's what I would like to hear more from you two, or anyone else about. Can you contrive me some code, that is reasonable/likely to exist, that would use the predicate that would actually break? The use case I’m envisioning is if you currently pass a Symbol it goes into the strong Map instead of the WeakMap. And with this change, it would start going into the WeakMap and there'd be no difference because all of the logic would be using this predicate for the correct path. +JHD: So that's what I would like to hear more from you two, or anyone else about. Can you contrive me some code, that is reasonable/likely to exist, that would use the predicate that would actually break? The use case I’m envisioning is if you currently pass a Symbol it goes into the strong Map instead of the WeakMap. And with this change, it would start going into the WeakMap and there'd be no difference because all of the logic would be using this predicate for the correct path. MM: Oonce the predicate exists people will put will make other logic conditional. You can't provide people a true/false predicate and assume that, you know, all the uses that people will make it. diff --git a/meetings/2022-03/mar-28.md b/meetings/2022-03/mar-28.md index abef3ca8..bc3d5900 100644 --- a/meetings/2022-03/mar-28.md +++ b/meetings/2022-03/mar-28.md @@ -35,11 +35,11 @@ YSV: Hi everybody. If you don't know me already. My name is Yulia Startsev from YSV: So that's what we discussed and at the end of this meeting, we voted on allowing the alternative license, and this alternative license because it is coming from TC39 and it was proposed by Mozilla specifically for TC39, we were explicitly talking about ECMA262 as a potential adopter of this license. -YSV: So what is this license? This document is available as an Ecma document. So, if you haven't had access to the ECMA documents, please get in touch with Isabel or talk to me, and I'll send you the updated license document. Here it is. What is the license? It's called the alternative copyright notice and copyright license. I'll just read it very quickly: [reads license] That is full license. 
+YSV: So what is this license? This document is available as an Ecma document. So, if you haven't had access to the ECMA documents, please get in touch with Isabel or talk to me, and I'll send you the updated license document. Here it is. What is the license? It's called the alternative copyright notice and copyright license. I'll just read it very quickly: [reads license] That is full license. YSV: All right, in addition we made some changes to the FAQ, the one that I considered to be more most significant to was regarding the main objectives of the ECMA copyright policy. Here's how it changed. [Showing Slide # ] So the original modifications that Mozilla proposed are in blue and they have been updated with the red text after a review by the IPR policy committee. In particular the last sentence is important, ergonomic a also choose to publish standards under alternative copyright license and copyright notice and license, which is what we're discussing here. Where for example, Example, doing so would facilitate alignment with the policies, governing a related third-party standards. -YSV: Okay, so the last thing discussed were the ISO comments. So there is a project at ISO called ISO smart. It's under development which will test different ways of making their standards available. So this will allow for different ways for them to be used, including derivative works. This currently doesn't exist. It's an upcoming project. For now ISO cannot make use of the alternative ergonomic copyright notice, instead what will happen is ECMA may publish its documents with its copyright license, which may be the alternative license and I so will you continue to use their current copyright notice. But this is of course may change in future when ISO smart is finished and becomes available. There's a precedent for this kind of setup which we've seen with W3C submissions. So this shouldn't be an issue having two different documents. +YSV: Okay, so the last thing discussed were the ISO comments. So there is a project at ISO called ISO smart. It's under development which will test different ways of making their standards available. So this will allow for different ways for them to be used, including derivative works. This currently doesn't exist. It's an upcoming project. For now ISO cannot make use of the alternative ergonomic copyright notice, instead what will happen is ECMA may publish its documents with its copyright license, which may be the alternative license and I so will you continue to use their current copyright notice. But this is of course may change in future when ISO smart is finished and becomes available. There's a precedent for this kind of setup which we've seen with W3C submissions. So this shouldn't be an issue having two different documents. YSV: Finally, the process for adopting the alternative license. Let's say that we as a committee decide that, yes, we are going ahead with ECMA262 becoming available under the alternative license. The process looks like this: for alternative policy. The TC asks for permission just before the GA final vote on the standard. And I understood from the EcmaSecGen - Now. I was also sick at the time when we had this meeting, so other people who are in the room right now may be able to correct me if I'm wrong - I understand that this is done by adding an extra line, when submitting the standard saying that we intend to use the alternative license and updating the spec in kind. 
For any other standards that we may produce or may be produced by other standards bodies. Such as ones that may be produced in the future or that are currently ongoing. The IPR committee must meet and discuss whether or not it is safe to move to the alternative license for that specific standard, so there is more of a process than what will happen for ECMA262. @@ -67,11 +67,11 @@ WH: Do we want to adopt the alternative copyright policy for anything besides EC MM: I certainly do, but I figured we can just do it for 262 first and then we have a precedent. -YSV: It makes sense since we've only got a week to do as MM suggested but SYG has a response. +YSV: It makes sense since we've only got a week to do as MM suggested but SYG has a response. -SYG: No need to speak. [on tcq: At least 402 as well?] +SYG: No need to speak. [on tcq: At least 402 as well?] -BT: Yulia or someone why why we wouldn't do 402 at this time as well. At least get consensus for at whether it's practical or not. +BT: Yulia or someone why why we wouldn't do 402 at this time as well. At least get consensus for at whether it's practical or not. YSV: So the reason why is that any I believe any spec aside from Ecma 262 will need to go through the IPR committee because that's - like we were very explicit about saying like we intend to use this for the ecmascript specification, but we didn't mention other specs. So the IPR Committee has not reviewed other specs for adopting it. So this would kick off like an entire process with that. That's not a problem. It will just take more time and may complicate the process. @@ -117,13 +117,13 @@ Presenter: Istvan Sebestyen (IS) - [slides](https://github.com/tc39/agendas/blob/main/2022/tc39-2022-12.pdf) -IS: I think some of the points we have already discussed in this meeting i.e. the copyright parts, so I will be jumping over those slides. So again, I just show you the list of the relevant TC39 and ECMA GA documents that you normally do not see over the GitHub, so that will be a very quick one. And then there are two issues, one of them we have already dealt with the alternative copyright license. And the other one is just a recap for ES2022 approval related items. I will be very, very quick on that because that will be also part of this meeting anyway, so it is just to make it complete and then status of TC39 meeting participation that just one slide, and the TC39 standards download statistics. There is also not really anything new. I just point to those slides and that's it. So, actually, I hope - you know - that I can be very, very quick on the whole presentation. +IS: I think some of the points we have already discussed in this meeting i.e. the copyright parts, so I will be jumping over those slides. So again, I just show you the list of the relevant TC39 and ECMA GA documents that you normally do not see over the GitHub, so that will be a very quick one. And then there are two issues, one of them we have already dealt with the alternative copyright license. And the other one is just a recap for ES2022 approval related items. I will be very, very quick on that because that will be also part of this meeting anyway, so it is just to make it complete and then status of TC39 meeting participation that just one slide, and the TC39 standards download statistics. There is also not really anything new. I just point to those slides and that's it. So, actually, I hope - you know - that I can be very, very quick on the whole presentation. 
So this slide is the list of the new formal Ecma TC39 documentation. Some of them, of course, you already know. One document, which we already discussed five minutes ago is also listed there. IS: So this slides here document is ES2022 approval process. So, this document is the report of the chair group to the Ecma Execome Committee next week. By the way, my understanding is that with this new decision of approving the alternative copyright policy and license that TC39 would also like to apply the procedure with this alternative copyright. Therefore - I was told by the Ecma SG - we have to update that chair group document. This is what we have to do. -The next 2 slides contain the list of to TC39 relevant GA documents - I have again taken out from all GA documents. If people are interested in some of those documents, then please go to your GA representatives, as only he has access to those. +The next 2 slides contain the list of to TC39 relevant GA documents - I have again taken out from all GA documents. If people are interested in some of those documents, then please go to your GA representatives, as only he has access to those. So, for instance, here, this contains the alternative copyright policy document. So this is what also came from, Mozilla, as a proposal, etc. And this document is the so called “voting intention” to the GA meeting which he had last week, etc. @@ -188,9 +188,9 @@ MF: Yeah, so this was a really big feature for us. I think our first commit was MM: What is an AO? -MF: An abstract operation, these are like spec functions. Like `ToBoolean` is an AO. So AOs that cannot return abruptly no longer return completion records. They actually return the value that they return. There's no implicitness there. That means that you shouldn't use an exclamation point when calling those AOs and they shouldn't have ReturnIfAbrupt or question mark or throw in any of their steps. Next thing is for AOs that don't really produce anything, they’re just used for their effects. These are procedure like AOs. They should note that by using this spec enum `~unused~` as the return type or if they can return abruptly, but otherwise have no meaningful return value, you use completion records containing `~unused~`. And at every call site to these procedure-like AOs you should use “perform” to invoke these. +MF: An abstract operation, these are like spec functions. Like `ToBoolean` is an AO. So AOs that cannot return abruptly no longer return completion records. They actually return the value that they return. There's no implicitness there. That means that you shouldn't use an exclamation point when calling those AOs and they shouldn't have ReturnIfAbrupt or question mark or throw in any of their steps. Next thing is for AOs that don't really produce anything, they’re just used for their effects. These are procedure like AOs. They should note that by using this spec enum `~unused~` as the return type or if they can return abruptly, but otherwise have no meaningful return value, you use completion records containing `~unused~`. And at every call site to these procedure-like AOs you should use “perform” to invoke these. 
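As a rough analogy in JavaScript only (not spec text, and not how engines implement it), the editorial convention being described can be modeled with plain records, using hypothetical `NormalCompletion`/`ThrowCompletion` helpers to stand in for Completion Records and an explicit helper to stand in for `?`/ReturnIfAbrupt:

```js
// Rough model of a Completion Record: either a normal completion carrying a
// value, or an abrupt (throw) completion carrying the thrown value.
const NormalCompletion = (value) => ({ type: "normal", value });
const ThrowCompletion = (value) => ({ type: "throw", value });

// An AO that can never return abruptly just returns its value directly —
// no Completion Record, and no "!" needed at call sites in this model.
function toBooleanLike(v) {
  return Boolean(v);
}

// An AO that can return abruptly returns a Completion Record explicitly.
function getOwn(obj, key) {
  if (obj === null || obj === undefined) {
    return ThrowCompletion(new TypeError("not an object"));
  }
  return NormalCompletion(obj[key]);
}

// "?" / ReturnIfAbrupt in this model: unwrap normal completions and
// propagate abrupt ones to the caller.
function returnIfAbrupt(completion) {
  if (completion.type !== "normal") throw completion.value;
  return completion.value;
}

console.log(toBooleanLike(0));                        // false
console.log(returnIfAbrupt(getOwn({ a: 1 }, "a")));   // 1
```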
-MF: One thing to note about our design philosophy here is that whenever a completion record as a value is held, it should be obvious from within that algorithm that you are holding a completion record, so there should be no way for a completion record to enter an algorithm from some opaque source, like a call to another AO where you have to look at its return type to understand whether you're holding a completion record or not, or a field access. There should be no way to enter that algorithm without it being obvious from the algorithm that you're looking at. So when it's the case that you are trying to create an alias to a completion record, we annotate that with this `Completion` AO, which asserts that its argument is a Completion Record but is otherwise just the identity function and returns what it's given. So this is used as this annotation. +MF: One thing to note about our design philosophy here is that whenever a completion record as a value is held, it should be obvious from within that algorithm that you are holding a completion record, so there should be no way for a completion record to enter an algorithm from some opaque source, like a call to another AO where you have to look at its return type to understand whether you're holding a completion record or not, or a field access. There should be no way to enter that algorithm without it being obvious from the algorithm that you're looking at. So when it's the case that you are trying to create an alias to a completion record, we annotate that with this `Completion` AO, which asserts that its argument is a Completion Record but is otherwise just the identity function and returns what it's given. So this is used as this annotation. MF: And we do have a one minor convenience that we've added, because we noticed some repetitiveness to the guidelines we had and that is that when an AO is declared to return a completion record we don't need to wrap every return site with `NormalCompletion`. We have this concept that is outlined more in the notational conventions called, “clearly marked”. And in those cases, if the value is not clearly marked as a completion record, it is implicitly wrapped in a normal completion. So a function that has like 15 exits with like true or false doesn't need to say NormalCompletion(true); the idea is that each of these return sites can just return true or false. So, make sure you annotate all of the return types, using the new structured headers we've added in the last year or so, which all AOs should have in 262. And ecmarkup, which builds ECMA 262 should have a tons of lint rules now, that should catch basically any of these kinds of errors. @@ -258,7 +258,7 @@ Presenter: Michael Ficarra (MF) - [proposal](https://github.com/tc39/ecma262/pull/2649) -MF: So this will be quick. So at a previous meeting – I think it was two meetings ago – I had talked about the Ecma262 editor maintenance burden of Unicode releases. When a Unicode release happens, we have to update these tables - we have 4 unicode Related tables used for regex stuff in the spec. And one of the options I listed was that we actually go to the Unicode Consortium with a proposal that they keep their names stable. And then we wouldn't need to keep two of those tables around. We could eliminate them from the spec. So I did write that proposal. You can see here. And it was accepted by the Unicode Consortium. So as of just a few days ago, they've actually updated the stability policy. I think MWS has a link to it here. 
But anyway, these values are now stable. So we don't need to keep these tables in spec, in my opinion. And I believe the opinion of the rest of the editor group. So, I am looking for consensus to remove those two tables. The two tables are the tables outlining the property value aliases that are allowed. That's all I have.
+MF: So this will be quick. At a previous meeting – I think it was two meetings ago – I had talked about the Ecma262 editor maintenance burden of Unicode releases. When a Unicode release happens, we have to update these tables - we have 4 Unicode-related tables used for regex stuff in the spec. And one of the options I listed was that we actually go to the Unicode Consortium with a proposal that they keep their names stable, and then we wouldn't need to keep two of those tables around; we could eliminate them from the spec. So I did write that proposal, which you can see here, and it was accepted by the Unicode Consortium. So as of just a few days ago, they've actually updated the stability policy. I think MWS has a link to it here. But anyway, these values are now stable, so we don't need to keep these tables in the spec, in my opinion, and I believe in the opinion of the rest of the editor group. So, I am looking for consensus to remove those two tables. The two tables are the tables outlining the property value aliases that are allowed. That's all I have.

SFC: I just had one question, which is, should we wait for the official release of this version of the Unicode standard? I need to check exactly what the timetable is for that.

@@ -304,7 +304,7 @@ KG: That's correct. It is the scope in which the source text, the literal "eval"

MM: It's okay. So in that case, I have no objection - I have no objection to the normative content of what you want to do. However, I would appreciate a new non-normative note clarifying that in the direct eval case, the realm that's referred to, the callee realm in the spec, must be identical to the realm of the executing code. Check.

-KG: I'm happy to have that note. 
+KG: I'm happy to have that note.

BT: Thank you. Okay, we have LEO on the queue. That will be our time box. We are just about at time now. So, LEO, please be quick. Go ahead.

@@ -316,8 +316,7 @@ BT: Okay, sounds like no objections to removing this. Okay.

### Conclusion/Resolution

-callerRealm parameter to be removed
-Note to be added pointing out that caller realm is the same as the callee realm in the case of a direct eval
+callerRealm parameter to be removed. Note to be added pointing out that caller realm is the same as the callee realm in the case of a direct eval.

## Can we try to remove gross use of @@species in the TypedArray constructor

@@ -325,7 +324,7 @@ Presenter: Shu-yu Guo (SYG)

- [proposal](https://github.com/tc39/ecma262/issues/2677)

-SYG: Okay, so no slides. Basically, this is something that we found when implementing resizableable array buffers, which kind of got us to relook at a bunch of existing array, buffer, stuff and found things that were super weird. This is one of those things, wonder if we can remove it basically, when you create a new type of direct, one of the overloads of the typed array Constructor is that you can pass in an array like or another typed array. If you pass in another typed array. There's this abstract operation that's called called initialize typed array from typed array, that tries to initialize the newly allocated typed array with the typeed array that you passed in. When this happens. 
A species look up is done on symbol dot species to get the species Constructor for the source array, the typed array that you passed in. Except this species lookup is only done to get the Prototype. You never actually call the species constructor. This seems really weird. So, you get a species prototype. You hook up the Prototype, but you never actually call the subclasses Constructor.
+SYG: Okay, so no slides. Basically, this is something that we found when implementing resizable array buffers, which kind of got us to relook at a bunch of existing array buffer stuff, and we found things that were super weird. This is one of those things - wondering if we can remove it. Basically, when you create a new typed array, one of the overloads of the typed array constructor is that you can pass in an array-like or another typed array. If you pass in another typed array, there's this abstract operation called InitializeTypedArrayFromTypedArray that tries to initialize the newly allocated typed array with the typed array that you passed in. When this happens, a species lookup is done on Symbol.species to get the species constructor for the source array, the typed array that you passed in. Except this species lookup is only done to get the prototype. You never actually call the species constructor. This seems really weird. So, you get a species prototype, you hook up the prototype, but you never actually call the subclass's constructor.

SYG: So concretely, here is a test case that shows this behavior. So I subclass ArrayBuffer into something called GrossBuffer. And this thing is basically a pass-through; it has this getter that lets you know whether you in fact actually have a GrossBuffer, and the constructor can be constructed exactly one time and after that it will just throw. This is to kind of showcase that it's never actually called. And then you have this species getter to actually trigger the species path. So what currently happens in the spec is that, say, I create a new GrossBuffer and then I create a new typed array that is backed by this gross buffer. And then I create another typed array and I pass into it the gross typed array I just made. So what actually happens is that this new array, this mystery TA, gets the exact same prototype as the subclass via species, but it never actually called the subclass constructor - because otherwise this snippet would have thrown, because of this code here. So when you create a new typed array via this path, you get the species prototype, but you never actually call the species constructor.

@@ -379,9 +378,9 @@ JHD: They're both release candidates, not official until the June GA meeting app

IS: Yeah, that's okay thank you.

-JHD: One more quick thing is, if you have any concerns with the PDF generation of either one, please approve the budget request we made a year or two ago because that's the only way that we'll be able to make further improvements. 
+JHD: One more quick thing is, if you have any concerns with the PDF generation of either one, please approve the budget request we made a year or two ago because that's the only way that we'll be able to make further improvements.

-IS: Okay. So regarding the 402, I have already contacted, Isabelle, was the who did last year for the 402 that we would have a similar exercise. Also also this year. 
So, least 402 part it would be taken care of the 262 part with the 917 pages of I don't know how many exactly this is my more complex and more challenging here, but you are right. +IS: Okay. So regarding the 402, I have already contacted, Isabelle, was the who did last year for the 402 that we would have a similar exercise. Also also this year. So, least 402 part it would be taken care of the 262 part with the 917 pages of I don't know how many exactly this is my more complex and more challenging here, but you are right. JHD: Thanks. @@ -515,7 +514,7 @@ MM: it's an expression, I missed that. TAB: Yeah. Sorry. I said I would explain it and then I kind of forgot about it when I got to the point talking about bindings. Yeah inside of a template literal, interpolations are just treated like normal template literals that they see the Bindings. That anyone else would see, as I explained in the final slide, and they're just interpolated and produces a string. - And in that string is checked the same way as any literal string. +And in that string is checked the same way as any literal string. MM: I see and you also mentioned that you were going to mention tag. Template literals is extension points. @@ -651,7 +650,7 @@ JHX: Thank you. MM: [signals he prefers const bindings] -SHO: Hey everybody, so this is come up in sort of pre-plenary discussions as well. And so I wanted to bring it up here, which is that this proposal is very big and we are doing a lot of things in one proposal and I'm sort of wondering if there would be an advantage here to ship an MVP first and then some extensions, so that we can be sure we don't sort of paint ourselves into unforeseeable Corners. I hear a lot when we talk. About what we want to do. Cases are were like, oh, we shouldn't have done that thing before but now we have to live with it. We should have done that thing before we now have to live with it. I'm wondering if this is a case where maybe we can save ourselves from future regret by breaking the proposal up into smaller pieces and see those Advance which would sort of let us identify problematic corners or behaviors. that we don't expect by looking at this way. +SHO: Hey everybody, so this is come up in sort of pre-plenary discussions as well. And so I wanted to bring it up here, which is that this proposal is very big and we are doing a lot of things in one proposal and I'm sort of wondering if there would be an advantage here to ship an MVP first and then some extensions, so that we can be sure we don't sort of paint ourselves into unforeseeable Corners. I hear a lot when we talk. About what we want to do. Cases are were like, oh, we shouldn't have done that thing before but now we have to live with it. We should have done that thing before we now have to live with it. I'm wondering if this is a case where maybe we can save ourselves from future regret by breaking the proposal up into smaller pieces and see those Advance which would sort of let us identify problematic corners or behaviors. that we don't expect by looking at this way. TAB: Well, I 100% agree with you in general. Yes, this is we want to take this in as small chunk as possible. 
And that's why we have a lot of potential future extensions that we are not including the proposal right now because I haven’t listed them out because they're a lot more questionable and not as necessary for any core thing that said I believe that more or less what's in there right now is a minimal feature set, if we cut anything out of it It would be either feel extremely weird or would constrain future evolution in particular. The Primitive, ident, array, and object matchers, I think are all of a set. There's only one reasonable behavior for all those. Any way to match up with how they work in any other similar pattern matching proposal and if you’re missing any of them, it feels weird. And for custom matchers, you absolutely need the value to match against variables. The construct becomes almost unusable if you can't put variables in, at least for the simple version interpolation, and if we Have custom matching it may well, I suppose theoretically we could cleave off the custom matcher entirely right now. It would be slightly unfortunate and I want to make sure that we don't like to lose it because like regexes are similarly heavily magical and if they can be done in pattern matching the arbitrary things should be able to be done. And like the example. I showed of constructing custom matches on the fly. I think this is extremely valuable, But in theory there is a cliff point there. We could do because the symbol keyed method at least would allow us to still distinguish in the future, you know, if we didn't ship it in the first part, do you have any suggestions for any more finer cleaving? @@ -742,7 +741,7 @@ SYG: I mean, not having any metadata at all. This also touches on yulia's matter KHG: So okay, then that might where they lack of understanding is coming from without this API. It is not possible to do metadata with without some API. It is not possible to add. metadata at all, because of all the other constraints that have been placed on decorators. The way that metadata was accomplished previously was that the class itself was passed into the decorator, but that was given as an explicit condition of we cannot do that because that allows decorators to change the shape the class. So without that there is just no way for us to be able to associate any kind of information from the decorator that can then be read externally at all, especially for things like class Fields. So we need some way to pass that information along in order to support very basic, use cases, dependency injection. Without some metadata, API dependency injection is simply impossible, and dependency injection one of the most common use cases for decorators. It is has millions of weekly downloads collectively across just just one Greater library for dependency injection. Has 500,000 weekly downloads and you know, if we include like angular, if we include other libraries that are doing this, it is, it is very, very common use case. So we just need something. And you know, this was this was the more complicated approach we could go with a much simpler approach sure, but we need some way to side channel that information. I see I want to talk. I -SYG: have two responses to that one. Thank you for the explanation again that without like, you need something to key off of and because we've explicitly kind of gotten the thing to key off of without some kind of support here to implicitly carry along the key without exposing it directly. That's you need something. So, yes, a simpler approach might work here. 
I haven't had the time to review the proposed simpler approach yet. So can't really respond to that as for the. And I also want to respond to the dependency injection thing. I'm not sure there's Universal agreement that that's a good thing to enable first of all, in the language and to the Angular thing in particular. I'm not sure how strongly I would take that as evidence that we need this in the language, even if we were to were to have standardized decorators. It's my understanding that angular is not going to be adopting this. If that is the main use case for dependency injection, or at least the main part of the argument for why? It's, it's such a crucial use case. I personally wouldn't put too much weight on that. +SYG: have two responses to that one. Thank you for the explanation again that without like, you need something to key off of and because we've explicitly kind of gotten the thing to key off of without some kind of support here to implicitly carry along the key without exposing it directly. That's you need something. So, yes, a simpler approach might work here. I haven't had the time to review the proposed simpler approach yet. So can't really respond to that as for the. And I also want to respond to the dependency injection thing. I'm not sure there's Universal agreement that that's a good thing to enable first of all, in the language and to the Angular thing in particular. I'm not sure how strongly I would take that as evidence that we need this in the language, even if we were to were to have standardized decorators. It's my understanding that angular is not going to be adopting this. If that is the main use case for dependency injection, or at least the main part of the argument for why? It's, it's such a crucial use case. I personally wouldn't put too much weight on that. KHG: But yeah, if mean there are there are many other use cases as well. I just thought it was the most compelling, glad given like I said, there are many Frameworks that have adopted it and it has many millions. Downloads per week if we want to get into like entire Frameworks. The angular is a good example because they don't just dependency injection via metadata. They do everything the event in that it like literally every angular decorator, just uses metadata. It's basically annotation oriented. So it's like reactivity and many other things also use metadata. But yeah, I happy that you go into more detail of the specific like use cases for sure. And and I do think like we could like I said, ship something simpler here or potentially break this out into another proposal. Although if we were do that, I would really, we would need to discuss like how we, I think YSV’s previous points. in the last presentation kind of comes up that these are two very highly related things and they would need especially if we wanted to prevent the ecosystem like from, you know, fragmenting a bit. We would need some way to make sure that those landed relatively close to each other or Advanced relatively close together. @@ -750,11 +749,11 @@ SYG: Yeah. Before I cede the time to Yulia, I think there is a different. like, KHG: Yeah, I don't. Yeah. I don't want to over rotate on angular. I mean, there's also there's a lot of Frameworks that use decorators right now, and I think that's really what I mean. There is just decorators is a unique in a unique spot as proposal. It is probably one of the most adopted pre-staged 2 syntactic features - for a variety of reasons, historically. 
Hopefully, we never have a proposal quite like it again. -SYG: but you may about fragmentation. Is it the case today? That there is there exists fragmentation already do to, for example, the decorators, the typescript of chipped. +SYG: but you may about fragmentation. Is it the case today? That there is there exists fragmentation already do to, for example, the decorators, the typescript of chipped. KHG: I would say the fragmentation is very minimal because most libraries that use decorators either, use TypeScript’s implementation, or they use Babel’s implementation, which is quite similar to typescript implementation in capabilities and actual like semantics. This is a so in this current form of the whip, without with the metadata intact in the current form. It is so very different, very different right from from, yeah. -SYG: Okay, absolutely. And so so we're buying into fragmentation no matter what, but hopefully what we'll just have like a before standard decorator and after Santa decorated instead of more fragmentation. What's your point, +SYG: Okay, absolutely. And so so we're buying into fragmentation no matter what, but hopefully what we'll just have like a before standard decorator and after Santa decorated instead of more fragmentation. What's your point, KHG: right. And and more. So like I guess we're what ideally we would just make a decision on whether something like metadata for instance is going to be included or not And that would allow these libraries to either. you know, adopt the new expect that doesn't support their use cases anymore, but they'll just have to figure out some way around it or, you know, continue on I guess using the previous leadership iterations. where where, I think it could be really - is if we ended up, you know, moving decorators without metadata for instance stage 4 and metadata was still in stage 1, Let's say and ended up waffling around. There for quite some time. We could see a lot of libraries that are in this kind of in between a rock and a hard place, right? Like, they want to adopt the language. They want to be spec compliant. But at the same time like metadata is coming. So do they be spec compliant, but then adopt a stage one feature or do they just hold out and hope that you know, it gets through the process. Like I think it would be ideal if this could be done progressively. And, and I guess the way I would approach this is I would probably as the champion who's primarily working on this. would probably create the separate proposals and try to advance all of them to the same stage as decorators as it is right now. And then, you know, move each one to get to stage three before proposing for stage four and so on. So they would all always be within one stage of each other. I think that would be a pretty reasonable, you know, way to approach that problem and to prevent further fragmentation and confusion in the ecosystem. diff --git a/meetings/2022-03/mar-29.md b/meetings/2022-03/mar-29.md index 74d76c85..aa75461e 100644 --- a/meetings/2022-03/mar-29.md +++ b/meetings/2022-03/mar-29.md @@ -108,7 +108,7 @@ LEO: The first one was actually just fixing a normative mistake that we made in LCA: LEO, we can't see the pull request, we can only see the slides. -LEO: I'll because it was sharing a slide only and it opened in a new screen. I'm sorry about that. Yeah so this is the change being proposed. We need consensus, of course. I think I also edited this one day after the deadline, but this is the change. 
If someone is not comfortable with accepting this change right now, that's okay I understand that. I will just bring back and present this again, in the next meeting. But how big is it? A change. And going back to the other pull request to finish the summarization. We have a very small change where getFunctionRealm abstraction may throw on revoked proxies. This pull request as I'm going to show sets proper exceptions from revoked proxies coming from a different realm and this pull request, improves the API to avoid any leaking, coming cross boundaries. I'm going to this. Once again, I need to share. Okay, this tab. It's merged but it can be reverted if we don't find consensus here. This change looks subtle, but it's normative. I consider this as effects so as like for the fundamentals of the `ShadowRealm`s, it preserves any leaking cross boundaries. But yeah, it's a normative change. So I'm communicating this to TC39 to make sure we have consensus and we don't have any specific objections here. This pull request was suggested just from Legend?? who is working on the Chromium implementation and that's I think it makes sense, but we can go deeper if we have more questions about this. +LEO: I'll because it was sharing a slide only and it opened in a new screen. I'm sorry about that. Yeah so this is the change being proposed. We need consensus, of course. I think I also edited this one day after the deadline, but this is the change. If someone is not comfortable with accepting this change right now, that's okay I understand that. I will just bring back and present this again, in the next meeting. But how big is it? A change. And going back to the other pull request to finish the summarization. We have a very small change where getFunctionRealm abstraction may throw on revoked proxies. This pull request as I'm going to show sets proper exceptions from revoked proxies coming from a different realm and this pull request, improves the API to avoid any leaking, coming cross boundaries. I'm going to this. Once again, I need to share. Okay, this tab. It's merged but it can be reverted if we don't find consensus here. This change looks subtle, but it's normative. I consider this as effects so as like for the fundamentals of the `ShadowRealm`s, it preserves any leaking cross boundaries. But yeah, it's a normative change. So I'm communicating this to TC39 to make sure we have consensus and we don't have any specific objections here. This pull request was suggested just from Legend?? who is working on the Chromium implementation and that's I think it makes sense, but we can go deeper if we have more questions about this. LEO: So this is a summary and here, I am asking for consensus. We have some blog posts coming in. If we have time, we can talk about it, but it, yeah, I'm requesting consensus for these three pull requests. @@ -120,7 +120,7 @@ SYG: Is it is more of a question for the Apple folks? I heard LEO say that Safar MLS: so I think LEO said that its Safari technology preview, which means it's not chipping in a release version, but in the two weeks version, -SYG: I see, and and I want to further clarify that shipping in the released version would block on HTML integration, being done and the web API and an audit being done. +SYG: I see, and and I want to further clarify that shipping in the released version would block on HTML integration, being done and the web API and an audit being done. MLS: Yeah, which might be done nothing. Yeah. Sure. I don't know the details. 
But, yeah, we don't things and official releases, unless their standards. @@ -154,7 +154,7 @@ MM: The revoked proxy will throw an error that is where the error itself is in. LEO: Yeah. So like if you have if I had realm A and B and I have revoked proxy coming from B, without this pull request, I would observe the B realm inside realm A. -MM: I'm sorry. I don't understand Insight. We have a ShadowRealm, boundary. Is this ShadowRealm boundary or is there are we talking about direct realm to realm contract, +MM: I'm sorry. I don't understand Insight. We have a ShadowRealm, boundary. Is this ShadowRealm boundary or is there are we talking about direct realm to realm contract, LEO: Shadow realm boundary. @@ -216,17 +216,17 @@ SYG: There's no like control being done here. When you tried to get the realm of MM: Okay. So the reason why proxies are a special concern here is is that the only case where the function realm Itself can be caused to throw? -SYG: correct? +SYG: correct? MM:I see. Okay, that makes sense. That explains why they're not orthogonal. Okay. thank you. All right. -USA: That's the end of the queue +USA: That's the end of the queue LEO: okay, so, as I was just trying to show here, these example like this test262. I think it does show the intention here on life again. CZW: Okay. I just noticed that SYG said that all notes on the spec are editorial, so I just think we are missing some switching context in the current spec. It seems we are not switching the execution context for the errors. -SYG: During let's take it offline on that, please. You probably did flag CC. Me already on that thread to please re CC me. I guess it should work it out on the spec because the the high level behavior that MM wanted. I don't think it's actually possible to spec without significant like normative changes to changes to the entire spec. So um, so let's hash that out for what the realm should actually be on the thread. +SYG: During let's take it offline on that, please. You probably did flag CC. Me already on that thread to please re CC me. I guess it should work it out on the spec because the the high level behavior that MM wanted. I don't think it's actually possible to spec without significant like normative changes to changes to the entire spec. So um, so let's hash that out for what the realm should actually be on the thread. LEO: Okay, so let's go to thank you. Okay, so you're short on the time box here. I just want to make sure we advance a few things. So, right now, I would like to do separation here. I like to ask consensus for each one of them. So for the first one was to removed the name prefix for wrapped functions. That was a fixed that I intend like just matches a consensus of what is achieved in December 2021. Do we have any objections for this? I don't see anything on the queue. @@ -342,7 +342,7 @@ Presenter: J.S.Choi (JSC) - [proposal](https://github.com/tc39/proposal-call-this) - [slides](https://docs.google.com/presentation/d/1-MLGCibETPX8NiIvNJ1xOxiMS-NB8GCbDGNcB5patiU/edit?usp=sharing) -JSC: Hi everyone. This is JSC again. This is an update and a little time for bike shedding for the call-this operator formerly known as the bind-this operator. It's currently at stage one. There are four options to talk about. There's lots of cross-cutting concerns. I'll try to be quick with the slides. So just to review call-this: this is an operator that lets you change the receiver of a function call. Okay, there. There are four possible syntaxes for this. 
I will describe them in more detail after I talk about some other stuff. So call-this used to be called bind-this, it was a resurrection of the whole bind operator. It's also a rival proposal to JHX’s extensions, although now, you could think of it as a subset of extensions because we dropped functions from the operator so that it's valid with functions called with parentheses, but it's a syntax error without it. And so, we can decide what we want to do without parentheses later. A lot of this will plug into Redux and the holistic dataflow discussion later in this meet at the end of this plenary. But for this, I want to focus on bike shedding the syntax of call-this just this just to quickly review. Why might we want to add this? `someFunction.call` is really common, people use it for a lot of reasons whenever they have a method using a function in some variable for whatever reason, whether it was imported from a module, whether they extracted it from an object and need to conditionally switch between it, whether they need to reuse it on a monkey patched objects, whether they want to cache it to make sure it doesn't get bit by prototypes changing, whatever. It's really common. It's one of the most common methods in the entire language. In fact, we did a pretty robust corpus analysis. You can see our methodology and reproduce it if you want. It's on GitHub. There's a link there that includes transpiled code. A volunteer did a pretty thorough manual review. We it seen in our in the data set from Chi. From a GSM need. It was more common than `console.log`, more common than `.slice`. more common than `.push`, `.set` (whatever `.set` might be). It's really common, but it's really clunky. It's long, and it flips word order from what you would expect from a regular subject-verb argument order that you would have with regular dot calls. And there's just a lot of boilerplate. Every time you type `verb.call(receiver, …)`, it just separates the receiver from the verb. It flips the order. It makes nesting difficult. It's really clunky call. This operator would make this a lot less clunky. It would make the word order back to subject, verb, then arguments; it will put them near each other. Again, it just makes it a lot more or readable. .call() is very clunky, very common, and it's very clunky. So, very common times very clunky means worth improving with syntax or at least considering it. Big impact on the language. One might argue that “Doesn't the pipe operator solve this?”, I would argue that it does not. I would argue that if you try using the pipe operator to make the word order better - `receiver |> function.call(@, args)` - the result is actually less readable. So the word order is better, but there's just so much excessive visual noise that for such a common operation that I don't think the pipe operator addresses this problem adequately. The pipe group has been investigating for a long time whether it's possible to modify the pipe operator to address .call()’s clunkiness while still addressing pipe’s other use cases. And we still haven't found any, except like a separate operator, or modifying the pipe operator enough that it's essentially a separate operator. There's also concerns about ecosystem schism; I plan to get into this a lot more during the dataflow holistic discussion redux. I’d like to just briefly touch on it. 
I would argue that there's an ecosystem schism already happening right now between the non-this-based functions and this-based functions, and the most important thing is whether you can easily interoperate between the two paradigms. It's actually not so easy right now because they have such differences in whether they can have linear data flow and whether you can tree-shake them. And I would argue that with both the pipe operator and bind-this, the ecosystem schism actually gets bridged and interoperability becomes better. That schism is already happening right now and it would be improved by adding these operators. But let's get into that more in the data flow. I would like to focus on bike shedding four candidate syntaxes. That was the update. Now for the bikeshedding. There are four candidate syntaxes that different representatives have floated. Two of them you can group together, in that they are receiver-first and then the verb and then arguments, and that they're unbracketed, so they resemble dot-syntax. And you can have loose precedence, which means it resembles the pipe operator, or you can have tight precedence, which means it resembles the dot operator; that affects whether people will usually put white space around them, and it affects grouping, obviously, and it also affects how you conceptually think of the operator. Is this, basically, another pipe, or is this basically another dot? There's also a bracketed version of receiver-first syntax. This isn't too popular, but it's there and possible. There's function-first, which means you would have to use it with pipe if you want to have a “natural” word order, but it's still better than .call(). And then there's using `this` as an argument, annotated with a special `this` annotation. So those are four examples of syntaxes. There's an issue thread here. I just wanted to get some temperature checks from the committee on which ones the committee thinks might be the best to go with. Let's go to the queue. Looks like it's empty right now. If nobody has any comments to make, I will say that I am currently leaning towards tight receiver syntax. First I have heard from some representatives that they are concerned that there may be confusion, with beginners or whatever, on when should I use or should I use call-this? And I would just say that almost like you should usually use `.` and call-this when you know what you're doing. I don't think that's really a big deal. It's just a matter of a little education. I think that tight precedence makes more sense than and would be less surprising than loose precedence. And I think that, I share concerns with like some people like JHD, that other syntaxes like receiver first wouldn't bring enough benefit to warn syntax. We looks like hex yeah. +JSC: Hi everyone. This is JSC again. This is an update and a little time for bike shedding for the call-this operator formerly known as the bind-this operator. It's currently at stage one. There are four options to talk about. There's lots of cross-cutting concerns. I'll try to be quick with the slides. So just to review call-this: this is an operator that lets you change the receiver of a function call. Okay, there. There are four possible syntaxes for this. I will describe them in more detail after I talk about some other stuff. So call-this used to be called bind-this, it was a resurrection of the whole bind operator. 
It's also a rival proposal to JHX’s extensions, although now, you could think of it as a subset of extensions because we dropped functions from the operator so that it's valid with functions called with parentheses, but it's a syntax error without it. And so, we can decide what we want to do without parentheses later. A lot of this will plug into Redux and the holistic dataflow discussion later in this meet at the end of this plenary. But for this, I want to focus on bike shedding the syntax of call-this just this just to quickly review. Why might we want to add this? `someFunction.call` is really common, people use it for a lot of reasons whenever they have a method using a function in some variable for whatever reason, whether it was imported from a module, whether they extracted it from an object and need to conditionally switch between it, whether they need to reuse it on a monkey patched objects, whether they want to cache it to make sure it doesn't get bit by prototypes changing, whatever. It's really common. It's one of the most common methods in the entire language. In fact, we did a pretty robust corpus analysis. You can see our methodology and reproduce it if you want. It's on GitHub. There's a link there that includes transpiled code. A volunteer did a pretty thorough manual review. We it seen in our in the data set from Chi. From a GSM need. It was more common than `console.log`, more common than `.slice`. more common than `.push`, `.set` (whatever `.set` might be). It's really common, but it's really clunky. It's long, and it flips word order from what you would expect from a regular subject-verb argument order that you would have with regular dot calls. And there's just a lot of boilerplate. Every time you type `verb.call(receiver, …)`, it just separates the receiver from the verb. It flips the order. It makes nesting difficult. It's really clunky call. This operator would make this a lot less clunky. It would make the word order back to subject, verb, then arguments; it will put them near each other. Again, it just makes it a lot more or readable. .call() is very clunky, very common, and it's very clunky. So, very common times very clunky means worth improving with syntax or at least considering it. Big impact on the language. One might argue that “Doesn't the pipe operator solve this?”, I would argue that it does not. I would argue that if you try using the pipe operator to make the word order better - `receiver |> function.call(@, args)` - the result is actually less readable. So the word order is better, but there's just so much excessive visual noise that for such a common operation that I don't think the pipe operator addresses this problem adequately. The pipe group has been investigating for a long time whether it's possible to modify the pipe operator to address .call()’s clunkiness while still addressing pipe’s other use cases. And we still haven't found any, except like a separate operator, or modifying the pipe operator enough that it's essentially a separate operator. There's also concerns about ecosystem schism; I plan to get into this a lot more during the dataflow holistic discussion redux. I’d like to just briefly touch on it. I would argue that there's an ecosystem schism already happening right now between the non-this-based functions and this-based functions, and the most important thing is whether you can easily interoperate between the two paradigms. 
It's actually not so easy right now because they have such differences in whether they can have linear data flow and whether you can tree-shake them. And I would argue that with both the pipe operator and bind-this, the ecosystem schism actually gets bridged and interoperability becomes better. That schism is already happening right now and it would be improved by adding these operators. But let's get into that more in the data flow. I would like to focus on bike shedding four candidate syntaxes. That was the update. Now for the bikeshedding. There are four candidate syntaxes that different representatives have floated. Two of them you can group together, in that they are receiver-first and then the verb and then arguments, and that they're unbracketed, so they resemble dot-syntax. And you can have loose precedence, which means it resembles the pipe operator, or you can have tight precedence, which means it resembles the dot operator; that affects whether people will usually put white space around them, and it affects grouping, obviously, and it also affects how you conceptually think of the operator. Is this, basically, another pipe, or is this basically another dot? There's also a bracketed version of receiver-first syntax. This isn't too popular, but it's there and possible. There's function-first, which means you would have to use it with pipe if you want to have a “natural” word order, but it's still better than .call(). And then there's using `this` as an argument, annotated with a special `this` annotation. So those are four examples of syntaxes. There's an issue thread here. I just wanted to get some temperature checks from the committee on which ones the committee thinks might be the best to go with. Let's go to the queue. Looks like it's empty right now. If nobody has any comments to make, I will say that I am currently leaning towards tight receiver syntax. First I have heard from some representatives that they are concerned that there may be confusion, with beginners or whatever, on when should I use or should I use call-this? And I would just say that almost like you should usually use `.` and call-this when you know what you're doing. I don't think that's really a big deal. It's just a matter of a little education. I think that tight precedence makes more sense than and would be less surprising than loose precedence. And I think that, I share concerns with like some people like JHD, that other syntaxes like receiver first wouldn't bring enough benefit to warn syntax. We looks like hex yeah. JHX: discuss about the proceedings. This is the I think the most important thing we to consider personally I prefer the tight precedence like the operator. This problem is also discussed in the oh The barring operator Ripple. So if it's used in loose style it. It's looks more like pipeline. Operator. This is a problem of the old by bind operator because the as the old, bind operates are all current to call this, all the extension proposal, they all use the function at some methods. So, it used as a master cylinder, which which army methods that, it have the receiver, receiver, so generally people would would like to use it like a normal method. this this is why it's better to use the same precedence as the dot and i think it will be better for the mass of the changing. And another prominent of the operator precedence is the style of tight unbracketed. I'm practicing. It's use. Proceedings is hard to describe the precedent's the left side. 
It could be seen as having the same precedence as dot, but the right side is a little bit looser, so I believe it is technically possible, but I feel it is a little bit confusing.

@@ -389,7 +389,7 @@ Presenter: J.S. Choi (JSC)

JSC: This is not advancement; it's an update and bikeshedding. So, the long-awaited pipe operator is nearly ready for stage 3. Hopefully we have just two more big hurdles, and this is one of them: we're stuck on bikeshedding the spelling of the topic reference, a crucial piece of its syntax. Why a pipe operator, really quick: developers often transform stuff in data flows, and they don't want to use variables for every step. They're able to do this with regular prototype method chains using dot methods. You can see the numbers there: left to right, linear, easy to read. But if you mix in function calls or other expressions, the reading order becomes zigzagging between left-to-right and right-to-left. It's not linear anymore, so if you want to read from the beginning to the end of the data flow, it becomes a lot more difficult. Follow the numbers (on the slide) to see what I mean. So developers should be able to express data flows that use not only prototype method chains but other things like function calls, still as linear left-to-right chains, and a pipe operator would make this possible. The pipe operator would create a lexical context around its right-hand side, within which it would bind a topic reference to the result of its left-hand side, the topic reference being some sort of token. This is a reflection of a greater trend in the ecosystem towards using more tree-shakeable functions instead of things straight on the prototype chain, in order to be able to split modules. There's a prominent example with the new Firebase JavaScript SDK, which recently switched from prototype-based methods to individually importable functions, but a big complaint with that has been that you have to rewrite your code, and it becomes a lot harder to follow the code without introducing a lot of temporary variables. The pipe operator would help with that. It's been a long road to get here. You can read my history document if you want the gory details; this is maybe the fifth or sixth presentation of pipe to the committee since 2015. We've gone back and forth on a lot of syntaxes, in particular between two possibilities: F# style, with nested unary function calls, and Hack style, which is what we have now, with lexical topic references. Since last summer we've had consensus that Hack style is the way forward, due to concerns about F# style from the broader committee; the developer community itself remains split, but largely they just want some pipe operator in general. We've also been talking about data flow proposals in general and where things fit in, and there's going to be a long discussion during this plenary that continues that discussion. There have been two meetings so far and you can read the notes for those. I also have a long article, recently updated, that you can read about that. If you can't find it, just ping me.

-JSC: So, you know, we that one big hurdle is figuring out pipe, operator and where it fits in with this other data flow, proposals in particular.
There's a representative who has a hard requirement that pipe operator be packaged with call this but like that's one hurdle; the other hurdle is to bikeshed, the spelling of the topic reference. So I've got a couple of criteria here for choosing to token. Is simple doesn't make parsing more complex for computers or for humans, is it visually distinguishable. Is it does it blur into ASCII soup? With other common symbols? Does it make it excessively more verbose? Is it easy to type in common keyboard formats? And when we're thinking about these criteria we gotta away from them by expected by how frequently their cases are going to occur. For instance there is a candidate topic to number signs to up to octo-thorn[??], and I would argue that because I expect Tuple and record literals to be very common. I would use that very commonly and that their costume visual distinguishability Scott to be x + Large expected number of occurrences. So that's just an example of weighing these criteria by expected numbers of current of occurrences. We've four, we've got four slides with candidates, but those are all the candidates on the left of this slide. There are six candidates. You can look at the wiki page. You can even edit it at yourself. There's an issue that has maybe 400 comments on it accrued over the past four years and just some notes. We are excluding things that are currently binary operators. So that includes carret and percent. Some implementers have raised concerns about increasing coupling between the lexer and the rest of the parser. You know, like like / that having to be distinguish between division and regular expression literals. So like we have excluded those we have also excluded the single number octothorpe[??] because it would require because if Tuple literals go with with the number sign for to indicate literals, then we would have to parenthesize them for topic property access in. Definitely identifiers like the dollar sign cetera team too hazardous to refractor and refactoring. It would be. also. There also common identifiers who would be quite annoying if we could not use them within pipe bodies, and there is a representative who would hard block anything that involving that is an identifiers. So we won't talk about them. So we've got that @ sign only viable single character token, there is with regards to decorator, syntax allowing `@` in parentheses, and then an expression in there to indicate to let you decorate classes and functions with arbitrary expressions. There is maybe arguably an automatic semicolon insertion hazard, but I would argue it's not that big of a deal. One thing to remember is that we statically at parse time, have an early error that requires developers to put at least one topic in every topic body or else it will, it will not compile. So if there is, if the only, if they use an `@` and they, it's it's being used as decorator and not topic reference, then it will throw if they use an `@` elsewhere. And it and it's still that wouldn't catch it. We could put in an utterly are to prevent unprinted size decorated classes or functions as pipe bodies. I don't think this is a very big ASI Hazard and of course the committee generally thinks that people should be using semi colons anyway, it's only viable Single Character token. I am biased towards it, but here are the other possibilities, like double carret. Which is fine. 
It bitwise xor exclusive or is not that common and can set there was a concern about dead keys with a keyboard layouts, but it actually is usually easy to type with the exception of some keyboard layouts that require four keystrokes. There's double percent, Similar thing remainder more common, but still not bad-common, `@@` there's, there's a number, a number sign, and then underscore, which actually, I think probably should be disqualified because it will be ambiguous with private fields, and is already ambiguous within this and then there's double the number sign, which I is not ambiguous, but I'm not a big fan of because I think it will be quite Noisy with Tuple literals. Anyways, those are the six possibilities. +JSC: So, you know, we that one big hurdle is figuring out pipe, operator and where it fits in with this other data flow, proposals in particular. There's a representative who has a hard requirement that pipe operator be packaged with call this but like that's one hurdle; the other hurdle is to bikeshed, the spelling of the topic reference. So I've got a couple of criteria here for choosing to token. Is simple doesn't make parsing more complex for computers or for humans, is it visually distinguishable. Is it does it blur into ASCII soup? With other common symbols? Does it make it excessively more verbose? Is it easy to type in common keyboard formats? And when we're thinking about these criteria we gotta away from them by expected by how frequently their cases are going to occur. For instance there is a candidate topic to number signs to up to octo-thorn[??], and I would argue that because I expect Tuple and record literals to be very common. I would use that very commonly and that their costume visual distinguishability Scott to be x + Large expected number of occurrences. So that's just an example of weighing these criteria by expected numbers of current of occurrences. We've four, we've got four slides with candidates, but those are all the candidates on the left of this slide. There are six candidates. You can look at the wiki page. You can even edit it at yourself. There's an issue that has maybe 400 comments on it accrued over the past four years and just some notes. We are excluding things that are currently binary operators. So that includes carret and percent. Some implementers have raised concerns about increasing coupling between the lexer and the rest of the parser. You know, like like / that having to be distinguish between division and regular expression literals. So like we have excluded those we have also excluded the single number octothorpe[??] because it would require because if Tuple literals go with with the number sign for to indicate literals, then we would have to parenthesize them for topic property access in. Definitely identifiers like the dollar sign cetera team too hazardous to refractor and refactoring. It would be. also. There also common identifiers who would be quite annoying if we could not use them within pipe bodies, and there is a representative who would hard block anything that involving that is an identifiers. So we won't talk about them. So we've got that @ sign only viable single character token, there is with regards to decorator, syntax allowing `@` in parentheses, and then an expression in there to indicate to let you decorate classes and functions with arbitrary expressions. There is maybe arguably an automatic semicolon insertion hazard, but I would argue it's not that big of a deal. 
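For concreteness, here is roughly how the candidates would look in a pipe body (the runnable part shows today's equivalent; the pipe lines are proposal syntax and do not parse in any current engine):

```js
// Today's equivalent of the pipeline, written with a temporary variable:
const pages = [{ items: [1, 2] }, { items: [2, 3] }];
const getItems = (page) => page.items;
const flatten = (arrays) => arrays.flat();
const unique = (values) => [...new Set(values)];

const lists = pages.map(getItems);
console.log(JSON.stringify(unique(flatten(lists)))); // "[1,2,3]"

// The same data flow with the proposed pipe operator, using each candidate
// topic token (none of these is standard syntax):
//   pages.map(getItems) |> flatten(@)  |> unique(@)  |> JSON.stringify(@)
//   pages.map(getItems) |> flatten(^^) |> unique(^^) |> JSON.stringify(^^)
//   pages.map(getItems) |> flatten(%%) |> unique(%%) |> JSON.stringify(%%)
//   pages.map(getItems) |> flatten(@@) |> unique(@@) |> JSON.stringify(@@)
//   pages.map(getItems) |> flatten(#_) |> unique(#_) |> JSON.stringify(#_)
//   pages.map(getItems) |> flatten(##) |> unique(##) |> JSON.stringify(##)
```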
One thing to remember is that we statically at parse time, have an early error that requires developers to put at least one topic in every topic body or else it will, it will not compile. So if there is, if the only, if they use an `@` and they, it's it's being used as decorator and not topic reference, then it will throw if they use an `@` elsewhere. And it and it's still that wouldn't catch it. We could put in an utterly are to prevent unprinted size decorated classes or functions as pipe bodies. I don't think this is a very big ASI Hazard and of course the committee generally thinks that people should be using semi colons anyway, it's only viable Single Character token. I am biased towards it, but here are the other possibilities, like double carret. Which is fine. It bitwise xor exclusive or is not that common and can set there was a concern about dead keys with a keyboard layouts, but it actually is usually easy to type with the exception of some keyboard layouts that require four keystrokes. There's double percent, Similar thing remainder more common, but still not bad-common, `@@` there's, there's a number, a number sign, and then underscore, which actually, I think probably should be disqualified because it will be ambiguous with private fields, and is already ambiguous within this and then there's double the number sign, which I is not ambiguous, but I'm not a big fan of because I think it will be quite Noisy with Tuple literals. Anyways, those are the six possibilities. JSC: Let's look at the queue. The queue is empty. I will glance at matrix. Nothing. I am currently inclined towards that and I would, and although I'm not the only champion of the group. I, that's what I would push for within the champion group. So I'd like get temperature from the other committee and what they like, or don't like. There's a bunch of things on the queue now. Okay, MM. @@ -407,7 +407,7 @@ JSC: Yeah, I'm not sure what the answer is. It reached stage 3? So probably it's SHO: Yeah. I just wanted to voice that. I remain a huge fan of the double carret because I think my having the higher baseline, it's just really distinguishable while also still being light, right? You're not like being assaulted with all of these full height topics, and I think the win is such that on the few keyboards. That caret is harder to type remapping or other shortcuts could be made to make it work. And so in my opinion from the, from the dimensions, on which we are balancing, everything. I think this is still a clear win. Xor is not used very often and it's so distinguishable. So I just wanted to throw in my favor for that one. -JSC: All right. Yeah, I appreciate the typeographic baseline argument. I always have I really wish that single carrot. Could have been a thing. It, having said that, I think I personally feel that a single character having a single character is enough of a win over having raised typographic baseline, but I understand that this can be subjective. It looks like others in the community are generally supporting so, but I do appreciate your argument for distinguishability do to raise typographic baseline. +JSC: All right. Yeah, I appreciate the typeographic baseline argument. I always have I really wish that single carrot. Could have been a thing. It, having said that, I think I personally feel that a single character having a single character is enough of a win over having raised typographic baseline, but I understand that this can be subjective. 
It looks like others in the community are generally supportive, but I do appreciate your argument for distinguishability due to the raised typographic baseline.

JSC: JHX notes on the queue that it looks like five carets would be valid syntax, no need to talk. Actually, JHX, if we went with double caret, we would ban the topic token and the XOR operator from appearing right next to each other; they would have to be separated by parentheses or spaces.

@@ -567,7 +567,7 @@ Presents slide on popularity of TypeScript

[continues presenting slides]

-DRR: So, jump back into this. Looking at the top four spots here, and that's actually been sustained in the year in last year's Octoverse ranking as well. So this is astonishing because we really didn't see this happening back when we started typescript and well came out with it in 2012. It's been around for about a decade at this point, but, you know, you can look at this chart and you can see that you can really think of this as TypeScript as being a subset of the JavaScript Community, right? And what this chart is really showing is that this is a very popular pattern: using types in your JavaScript, using a typed version of JavaScript, is extremely prevalent, right? You should see. This is the percent, you know, the number of people using types in their JS today and it’s really proven the value proposition here, right? A lot of people use types in this room. It's at least the lower Bound by trip. Is that is so You know, it speaks volumes to see the usage. We've also seen huge efforts from other companies as well. There are also other type-checkers out. There. Flow is one that also adds its own syntax extensions that look very similar to TypeScript; Closure Compiler also did a sort of similar thing where they use a format called. JSDoc, a common format called JSDoc to type-check JavaScript as well. And so in many ways these have sort of convergence, right? Like the fact that you know, these type annotations and types, don't have any sort of emit generally, but they have different approaches and goals, and it's been good that they've been able to sort of experiment with that as well. So, you know, one of the things about the people using typed JavaScript, or rather these typed extensions to JavaScript, is that as soon as you use these extensions, you can't just run the code, right? You have to find a way to strip out the types. And so, you know, we knew that this was an issue because people want to be able to do lightweight projects without a build step. They want to be able to actually just restart node or whatever their sort of program is to run code without having to do some sort of intermediate step. However if you're going to use this format, this is really convenient, but it's not directly runnable. You're going to get an error as soon as you try to run it. We try to solve this problem with TypeScript by leveraging the JSDoc format, right? So this is one way that you could annotate a function to say that, it takes two numbers and returns a number. There's other ways that you could do this to, by mixing TypeScript specific syntax with JSDoc. So here's another way that you can today, you know, write JavaScript code that's understood by a tool like typescript that says, it takes two numbers and produces a number as well. Flow had a another similar approach where they used comments that look a little bit closer to the actual types and tax that touch been flow.
both use so So you can kind of hop into this comments syntax every time you have a binding and then say like this thing isn't going to be a number, right? So visually it looks a little bit more like something like typescript or flow, but you know, you have to kind of hop between these things. And so this this all these approaches are Usable, but they tend to be a little bit cumbersome, I would say. There are other approaches to get around the build step as well. Right? So for example, we're seeing this huge blob of build tools that are just meant to be as fast as possible. Right? So fast that it feels transparent to like have them around so you can just like run them immediately and within less than a second, many of these tools can produce output that you can run directly in your browser and node wherever and then some platforms either have hooks like node where you can you can use this sort of preprocessor like ts-node or other platforms? Like Deno just have TypeScript built-in so they can just run time to put directly correctly. And so These workarounds are okay, but often they have met some amount of configuration there, still issues with, you know, you run them here, but you can run them in other places as well. So that that tends to make things harder. So, +DRR: So, jump back into this. Looking at the top four spots here, and that's actually been sustained in the year in last year's Octoverse ranking as well. So this is astonishing because we really didn't see this happening back when we started typescript and well came out with it in 2012. It's been around for about a decade at this point, but, you know, you can look at this chart and you can see that you can really think of this as TypeScript as being a subset of the JavaScript Community, right? And what this chart is really showing is that this is a very popular pattern: using types in your JavaScript, using a typed version of JavaScript, is extremely prevalent, right? You should see. This is the percent, you know, the number of people using types in their JS today and it’s really proven the value proposition here, right? A lot of people use types in this room. It's at least the lower Bound by trip. Is that is so You know, it speaks volumes to see the usage. We've also seen huge efforts from other companies as well. There are also other type-checkers out. There. Flow is one that also adds its own syntax extensions that look very similar to TypeScript; Closure Compiler also did a sort of similar thing where they use a format called. JSDoc, a common format called JSDoc to type-check JavaScript as well. And so in many ways these have sort of convergence, right? Like the fact that you know, these type annotations and types, don't have any sort of emit generally, but they have different approaches and goals, and it's been good that they've been able to sort of experiment with that as well. So, you know, one of the things about the people using typed JavaScript, or rather these typed extensions to JavaScript, is that as soon as you use these extensions, you can't just run the code, right? You have to find a way to strip out the types. And so, you know, we knew that this was an issue because people want to be able to do lightweight projects without a build step. They want to be able to actually just restart node or whatever their sort of program is to run code without having to do some sort of intermediate step. However if you're going to use this format, this is really convenient, but it's not directly runnable. 
You're going to get an error as soon as you try to run it. We try to solve this problem with TypeScript by leveraging the JSDoc format, right? So this is one way that you could annotate a function to say that, it takes two numbers and returns a number. There's other ways that you could do this to, by mixing TypeScript specific syntax with JSDoc. So here's another way that you can today, you know, write JavaScript code that's understood by a tool like typescript that says, it takes two numbers and produces a number as well. Flow had a another similar approach where they used comments that look a little bit closer to the actual types and tax that touch been flow. both use so So you can kind of hop into this comments syntax every time you have a binding and then say like this thing isn't going to be a number, right? So visually it looks a little bit more like something like typescript or flow, but you know, you have to kind of hop between these things. And so this this all these approaches are Usable, but they tend to be a little bit cumbersome, I would say. There are other approaches to get around the build step as well. Right? So for example, we're seeing this huge blob of build tools that are just meant to be as fast as possible. Right? So fast that it feels transparent to like have them around so you can just like run them immediately and within less than a second, many of these tools can produce output that you can run directly in your browser and node wherever and then some platforms either have hooks like node where you can you can use this sort of preprocessor like ts-node or other platforms? Like Deno just have TypeScript built-in so they can just run time to put directly correctly. And so These workarounds are okay, but often they have met some amount of configuration there, still issues with, you know, you run them here, but you can run them in other places as well. So that that tends to make things harder. So, DRR: That brings us to what we're proposing today. So, the goal is to make static types more accessible, within JavaScript. So, a couple of the specifics of the goals, we want to make ergonomic type annotations in JavaScript, they should be easy to use. Should be optional. You shouldn't need to use them. We're not trying to change the way that every JavaScript developer has to write code. And the similar sort of duel of that: we don't want to change the semantics of the surrounding JavaScript. if you do choose to use types, so these types should not affect code that exists today and they should not affect code that exist with types either. So they shouldn't, you know, they shouldn't affect semantics but they should still be runnable, right? In other words: You should still be able to run the code that uses types. You shouldn't need some sort of preprocessor step anywhere that runs. JavaScript with types should run as well, but we need a space for type-checkers to actually evolve? Type-checkers are still innovating. They're still adding new constructs and we need room to grow while still having the syntactic space to do so in JavaScript, and on top of all of these things we want to meet community expectations from all JavaScript users here, especially users who have already invested interest in types. So we want to be able to serve those users users who already has and types of per flow. So that, you know, that we don't produce something that's going to be completely different from what they were already used to. 
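As a concrete illustration of the two styles described above, the JSDoc form runs today because the types live entirely in comments, while the inline form is the kind of annotation this proposal aims to let engines parse and ignore (the inline lines are a sketch of the proposal, not currently valid JavaScript outside of `.ts` files):

```js
// JSDoc style: understood by a checker like TypeScript, invisible to the engine.
/**
 * @param {number} a
 * @param {number} b
 * @returns {number}
 */
function addJsDoc(a, b) {
  return a + b;
}

console.log(addJsDoc(2, 3)); // 5

// Inline annotation style the proposal aims to make directly runnable
// (today this is a SyntaxError in plain JavaScript):
//
//   function add(a: number, b: number): number {
//     return a + b;
//   }
```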
@@ -616,7 +616,7 @@ RPR: Yes, if something has runtime implications then, by definition, that's outs YSV: Good. I think that topic is finished. KG continues our topic on syntax. -KG: So, with regards to WH's question about what the point of this proposal is, I am taking as the point of this proposal that people would like to write types in their code and be able to run that in the browser. And specifically, I am taking that as the goal in contrast to making TypeScript syntax specifically work. So if the goal is just to have some syntax where you can write types, there are simpler options, and I would like the focus to be on those, so that we reserved a much smaller amount of syntax space. So for example `:` is already reserved in several positions, just reserve it in more positions and give rules for what it means in those positions. Sorry, rather than reserving the other positions, we could make it a comment in some positions. And then you could write, know, `:interface` as your declaration or whatever. This wouldn't make a typescript work, but it would allow you to write type declarations in your code with very little syntactic overhead. So, I'm fine with this proposal going to Stage 1, I would just like to ensure that we are saying that we are open to exploring syntaxes which are very different from TypeScript specifically, because TypeScript has so much syntax. +KG: So, with regards to WH's question about what the point of this proposal is, I am taking as the point of this proposal that people would like to write types in their code and be able to run that in the browser. And specifically, I am taking that as the goal in contrast to making TypeScript syntax specifically work. So if the goal is just to have some syntax where you can write types, there are simpler options, and I would like the focus to be on those, so that we reserved a much smaller amount of syntax space. So for example `:` is already reserved in several positions, just reserve it in more positions and give rules for what it means in those positions. Sorry, rather than reserving the other positions, we could make it a comment in some positions. And then you could write, know, `:interface` as your declaration or whatever. This wouldn't make a typescript work, but it would allow you to write type declarations in your code with very little syntactic overhead. So, I'm fine with this proposal going to Stage 1, I would just like to ensure that we are saying that we are open to exploring syntaxes which are very different from TypeScript specifically, because TypeScript has so much syntax. DRR: I am, I guess there's a sort of balance there with our goals. So like I don't want to bikeshed it here. I hear what you're saying. I'm open to exploring in various ways, but two of the goals that we have our meeting Community expectations. So like what do people expect types of look like in JavaScript? and also being ergonomic and so, you know, some of that is subjective and we can talk about that alone at a later point, but I am open to like thinking about that in ways that, you know, we're trying enable independent evolution as well. Right? So like what is the carve out? Right? That's something we can talk about further. @@ -683,7 +683,7 @@ DRR: Okay, I would say it's as I mean, it's a very similar approach and that's y YSK: The next topic we have is from SYG. Please, go ahead. 
-SYG: Switching gears a little bit from from types to non-types, So, ignoring the stuff that I actually am not clear on if you are proposing right now with importing types of content types, ignoring the stuff that is not just type annotation comments and possibly type declaration comments. The mechanical way to look at what the proposal is doing, is just at unambiguous comment attribution, is my understanding. You have comments today, like C-style comments that maybe some toolchains— I think you showed an example with Flow— of annotating certain parse nodes at a finer granularity than line level to annotate that with a particular type using existing comments, but my understanding is that you like there is no standardized way to unambiguously attribute a comment to any particular parse node at a finer granularity than 'line'. So mechanically, I think the thing that this proposal ads is unambiguous comments attribution and I kind of agree with you that it opens a big space to explore. Maybe like well beyond types. Imagine that browsers decide to use performance hints Beyond types? Like think, performance guided optimization stuff. This makes that space open. Is that desirable? Have you thought about those consequences? What are your thoughts about those consequences? +SYG: Switching gears a little bit from from types to non-types, So, ignoring the stuff that I actually am not clear on if you are proposing right now with importing types of content types, ignoring the stuff that is not just type annotation comments and possibly type declaration comments. The mechanical way to look at what the proposal is doing, is just at unambiguous comment attribution, is my understanding. You have comments today, like C-style comments that maybe some toolchains— I think you showed an example with Flow— of annotating certain parse nodes at a finer granularity than line level to annotate that with a particular type using existing comments, but my understanding is that you like there is no standardized way to unambiguously attribute a comment to any particular parse node at a finer granularity than 'line'. So mechanically, I think the thing that this proposal ads is unambiguous comments attribution and I kind of agree with you that it opens a big space to explore. Maybe like well beyond types. Imagine that browsers decide to use performance hints Beyond types? Like think, performance guided optimization stuff. This makes that space open. Is that desirable? Have you thought about those consequences? What are your thoughts about those consequences? DRR: I think that there is some prior art with runtimes experimenting. like SoundScript or whatever where the idea was like, let's try to hit a little bit to the to the runtime system to do so-and-so optimizations ahead of time. That would be cool but it's not something that we think is the primary benefits that something like this would bring it would be more of the design time. Authoring process, the tooling that we can provide based on that. You know, I'm not an engine expert but based on what I know of how optimizations work in existing runtimes. There's just a lot of difficulties when it comes to something like structural type checking, right? And in large Parts these types of stands for JavaScript are structural and it would be difficult in some ways. I would imagine to optimize so if— @@ -830,7 +830,7 @@ WH: I am uncomfortable with the problem statement they presented. JWK: I am skeptical on this but am not going to block it hard. 
-DRR: Is there something that we can? Clarify in some way WH, Maybe any specific detail. +DRR: Is there something that we can? Clarify in some way WH, Maybe any specific detail. WH: The things that make me uncomfortable are that this appears to be very much tied to TypeScript, but only parts of TypeScript. I see putting parts of TypeScript to ECMAScript but only some parts as being actively harmful to the ecosystem because it will fork TypeScript. I see the argument for convenience of being able to just write TypeScript and run it in your browser. and I think that would be better solved by having an opt-in to TypeScript syntax. And that way, you could use all of TypeScript rather than just some parts of it with other parts not working or doing unexpected things. @@ -868,7 +868,7 @@ YSV: So folks, because we are 10 minutes over, should we schedule more time to d DRR: Maybe we can reconvene another day. -YSV: I thing that is going to be wise. Is everyone. All right with that outcome? For now, we will reconvene and discuss this later. I don't hear any opposition. So, thank you everyone, for the discussion. We are done for today, and we will be starting again tomorrow at the same time and I will speak with the chairs about finding time for short, extra time box for this discussion for this topic. Thanks everyone for the Champions. I recommend reviewing the chat. There are folks who would like more clarification on the problem statement and we are done. +YSV: I thing that is going to be wise. Is everyone. All right with that outcome? For now, we will reconvene and discuss this later. I don't hear any opposition. So, thank you everyone, for the discussion. We are done for today, and we will be starting again tomorrow at the same time and I will speak with the chairs about finding time for short, extra time box for this discussion for this topic. Thanks everyone for the Champions. I recommend reviewing the chat. There are folks who would like more clarification on the problem statement and we are done. ### Conclusion/Resolution diff --git a/meetings/2022-03/mar-30.md b/meetings/2022-03/mar-30.md index cec6c931..829717dc 100644 --- a/meetings/2022-03/mar-30.md +++ b/meetings/2022-03/mar-30.md @@ -36,7 +36,7 @@ WH: Well, shoe sizes are just — I'm not sure that's something that we should b YMD: So, the idea is, there is already like a library for like shoe sizes in ICU and lose information is already specified by see the art, for example, 4 meter, and usage and person height. I mean a new city and the Locale you can get the information. So the information is already in the CLDR and in and it is already mplemented in ICU. The idea is here we want to bring this API to be used by many user in ecmascript? And as you as I showed many users need this Library. -WH: I'm not familiar with that library. Do they have things like shoe sizes? +WH: I'm not familiar with that library. Do they have things like shoe sizes? YMD: Do you mean the library in ICU? Yeah, so in ICU, there is already number four, metal. And if you use number formatter and you say for usage or due to usage, should I (?) give units and usage and locale, it would convert for your the formatting or automatically and give you the correct format. as a formatting update. And we are now working on making - because some people wanted already and a way you are working to bring it on. Also in CLDR, there is something called units, preference. If you check it you are going to see all the information and evidence needed more information. 
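For reference, locale-aware unit formatting of this kind is already expressible with `Intl.NumberFormat`'s unit style; what is being described above is the additional usage-based selection and conversion (the commented `usage` option is a sketch of that idea, not part of ECMA-402 today):

```js
// Standard today: format a quantity with a unit, localized.
const meters = new Intl.NumberFormat('en-US', {
  style: 'unit',
  unit: 'meter',
  unitDisplay: 'long',
}).format(1.75);
console.log(meters); // "1.75 meters"

// Sketch of a usage-aware API: given a usage such as person-height, pick and
// convert to the unit preferred by the locale (e.g. feet and inches for
// en-US), based on CLDR unit preferences. The `usage` option shown here is
// hypothetical and only illustrates the capability being discussed.
//
//   new Intl.NumberFormat('en-US', {
//     style: 'unit',
//     unit: 'meter',
//     usage: 'person-height',
//   }).format(1.75); // might produce something like "5 ft 9 in"
```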
We sequentially adding like, for example, if people asking for altitude, we are now in a stage in working in adding it. @@ -80,7 +80,7 @@ MLS: Do you want to be inclusive of everything that's in CLDR to be exposed thro YMD: Everything related to units. -USA: Hi, I just missed part of this, but just to clarify, the units part in itself, doesn't need to come from CLDR. It in itself is quite minimal. The actual locale information is coming from CLDR. So Locale aware unit formatting is a CLDR thing, but converting from one unit to another's. Doesn't need to go via the CLDR. +USA: Hi, I just missed part of this, but just to clarify, the units part in itself, doesn't need to come from CLDR. It in itself is quite minimal. The actual locale information is coming from CLDR. So Locale aware unit formatting is a CLDR thing, but converting from one unit to another's. Doesn't need to go via the CLDR. YMD: You are correct. However, also the unit's information and the conversion conversion rates, all of this Right now Right now, also in CLDR. @@ -132,7 +132,7 @@ RPR: and I think SFC and SYG are kind of making the same thing Point here, which CM: Yes, there are so many issues there and I think there are questions. For example, might a particular feature expose some security vulnerability, some engineering hole that might mess up with software development? But the particular details of what units you have and how you factor in API and all of that, it's just not our concern. -RPR: I'll take the point. Could we? That discussion offline either to the the chat or the reflection. +RPR: I'll take the point. Could we? That discussion offline either to the the chat or the reflection. CM: That's fine. I just I just wanted to get that issue on the table, @@ -142,7 +142,7 @@ RPR: Yeah, I think I think guy for okay. Okay. She will SYG to you. YSV: Yeah. So as ZB actually, wrote a really informative comment about effectively what SFC mentioned regarding the the bar for proposal, of course, that can be applied to Stage 2. However, for stage 1, we do need a problem statement. And the problem statement that I have heard is that there is some unknown quantity of developers who need this, which makes it very difficult to scope. What this feature should be. And I do agree with ZB’s comment where he states that you know, if you look at what that sealed your data is doing there may be several different proposals contained in this one, that would be greatly benefited from having a more concrete. We focus a more concrete problem statement because I'm not, I'm not really convinced that like an API for shoe sizes, shirt sizes, or anything else, that gets added to CLDR later will necessarily be handled well by like a single API that will have. And what if Unicode decides to make certain arbitrary changes. Are we going to be always referencing them? We've had that discussion in the past. That's a larger discussion. That's a discussion. I think, for a much more concrete core language feature. That we had earlier regarding the tables that were pointing to in Unicode. So I think that for stage 1, we really need to have a concrete “What are we solving” statement? -RPR: We're basically at time. JSC, is your item blocking stage one? +RPR: We're basically at time. JSC, is your item blocking stage one? JSC:. I think it might help them. Give me 15 seconds. As point out possibly splitting APIs. I think that there might be, at least two APIs here. What about getting the unit system preferred by Locale and one about doing the actual conversion? 
conversion. I think that Problem statement might be refined to maybe focus on the former rather than both at the same time. time. That's all for me. @@ -160,7 +160,7 @@ WH: It's not a very well-defined thing. So I'm skeptical. I’m not going to blo RPR: All right, that's yes, right. -CM: I don't want to block work in this area, but I kind of object on the grounds that it’s really out of scope for what we do. +CM: I don't want to block work in this area, but I kind of object on the grounds that it’s really out of scope for what we do. YSV: From from my perspective. I understand that you need stage one in order to have like work time to implement this but I would like to see a sharpened problem statement. Like what are we? Because there is no problem statement at the moment. And think ZB outlined very clearly, especially in his final comment to you. Why we don't want to just everything and that we do want to think about what we're exposing. I would really like to see a concrete….. I feel uncomfortable giving stage 1 because there is effectively no problem statement. Here. It is. Just we have this. We have some unknown application that is going to make use of this. So we are solving something for them. But there is no actual problem statement about what we're solving or what the goal of this is. Which makes it difficult to judge at later points in the proposal process, whether or not, we're achieving those goals that we set out to solve. So that makes me uncomfortable. I understand that for whatever internal politics reasons you may need stage 1 to investigate this. So I'm not going to block, but I would really like to see a clear problem statement in an update soon. @@ -189,19 +189,19 @@ EAO: What we really want to do here is, is to build on and enhance existing work EAO: So this presentation is going to go through the couple of the parts of the proposed very tentative API for this and for this and through this, hopefully present the various different aspects for why we consider these to be important. And hopefully also Show. Some of the value of y, this ought to be in the spec, rather than, as a separate library itself. Now, message format itself in pulled Intl.MessageFormat is being proposed so not exactly this. But and in told up message format is he is he being proposed as the only new primordial brought by this library? And now, if we start from how Intl interfaces usually start out with: you have a Constructor that takes in locale then a basket of options and then on the instance, you get some methods such as a formatting method before taking a message and, you know, producing a string. Well, then you get the result of options worker past. So if we start looking at this table apart, I am going to present to you what all goes into this and how we can we can make this whole of message format, which is a rather wide scope really to have minimal impact on JavaScript itself. -EAO: If we start from the message part, if we have just a message, how do we format that? So really, let's identify that we have two different parts. We have the sort of data of the message, a data model that comes from a source syntax. “Hello { $ Place !}” then we have the data that we combining this with saying that the place is the world, so we could know end up with a formatted "hello world" message. So we need to split, or we ought to split that message into two different parts that are coming into this whole Enterprise, the message that. The active values that are being used there. 
And really, I mean, honestly, the message is independent of runtime stuff. So it ought to go into the constructor. +EAO: If we start from the message part, if we have just a message, how do we format that? So really, let's identify that we have two different parts. We have the sort of data of the message, a data model that comes from a source syntax. “Hello { $ Place !}” then we have the data that we combining this with saying that the place is the world, so we could know end up with a formatted "hello world" message. So we need to split, or we ought to split that message into two different parts that are coming into this whole Enterprise, the message that. The active values that are being used there. And really, I mean, honestly, the message is independent of runtime stuff. So it ought to go into the constructor. -EAO: Now, going deeper into this, we ought to recognize that the message data is almost always- it's not one message at a time that we ever had in our localization system. We have a bunch or a group or a resource of message. That we are working with. So, to recognize that and to also keep the API surface here minimal, rather than building the interface, the constructor on a single message. Let's build it around a resource of messages and this then does require that in the formatting we identify the message that we're bringing in. And here, this is an aspect that is different from practically all existing Intl APIs because here asking for the user effectively to bring that data rather than relying on data being provided by the environment and that brings in some concerns that we're going to get into a bit later. And here in particular, when we considering. What is this message resource thing? We ought to recognize one reason, but To bring this into JavaScript itself, is that we ought to be able to represent a resource with a string representation of it that is then parsed into a structure. And this is currently being done by various localization libraries so that they need to provide a JavaScript parser for whatever representation of messages that they use into the runtime, and then run that on the on the data and go from there. So it's could be made cheaper both in terms of byte-size as well as execution time. And this means that the the API that we're proposing here allows for the message resource to be provided as a, you know, readily constructed object but also as a string. The constructed form there will then be relying on or enabling existing localization systems to effectively parse their own format and to from that format provide a message resource structure and then the duress of the APIs, a unifying unifying runtime for what were they doing: message formatter. And now, as a detail here, that the message resource structure doesn't necessarily need to be flat, but it might be a hierarchical object,, so we ought to Rather than message-id have a message path parameter in the format to address that. +EAO: Now, going deeper into this, we ought to recognize that the message data is almost always- it's not one message at a time that we ever had in our localization system. We have a bunch or a group or a resource of message. That we are working with. So, to recognize that and to also keep the API surface here minimal, rather than building the interface, the constructor on a single message. Let's build it around a resource of messages and this then does require that in the formatting we identify the message that we're bringing in. 
And here, this is an aspect that is different from practically all existing Intl APIs because here asking for the user effectively to bring that data rather than relying on data being provided by the environment and that brings in some concerns that we're going to get into a bit later. And here in particular, when we considering. What is this message resource thing? We ought to recognize one reason, but To bring this into JavaScript itself, is that we ought to be able to represent a resource with a string representation of it that is then parsed into a structure. And this is currently being done by various localization libraries so that they need to provide a JavaScript parser for whatever representation of messages that they use into the runtime, and then run that on the on the data and go from there. So it's could be made cheaper both in terms of byte-size as well as execution time. And this means that the the API that we're proposing here allows for the message resource to be provided as a, you know, readily constructed object but also as a string. The constructed form there will then be relying on or enabling existing localization systems to effectively parse their own format and to from that format provide a message resource structure and then the duress of the APIs, a unifying unifying runtime for what were they doing: message formatter. And now, as a detail here, that the message resource structure doesn't necessarily need to be flat, but it might be a hierarchical object,, so we ought to Rather than message-id have a message path parameter in the format to address that. EAO: Now to consider another really important feature, Is that so far, a lot of the existing structures of That lead to the message. Formatting libraries have this expectation of formatting messages to be strings. And we honestly don't have this expectation in the in the web world where we aren't just taking a string and popping it into content elsewhere, but very, very often. We are plopping it within for example HTML or other structures that require some of the variables coming in to be already complex elements. So, the output to be complex into many of the INTL apis has already provided formatToParts type of method for this, but that doesn't actually map well to this structure, because the formatting is - the parts are not necessarily string content that they would have. So instead of a format method, what we proposing here is, I resolve message method that returns a resolve to message. We're going to get into what that means later. And now also another aspect in which We are differing from how Intl interfaces currently mostly do, is that because message format, doesn't really bring much data by itself to intl, to JavaScript, but rather, it could allows a new way of combining the power of different parts of Intl already, and because we have users that are bringing in their own data, there's way more places where mistakes can be made. And now this is usually handled of course by throwing errors. But in the case of localization, we do want to be able to provide the best possible experience for the user to not throw errors. Unless we really, really need to. And this means that rather than throwing errors. We always return value. Even some sort of a fallback representation of the message and provide an error Handler for this, for catching or doing dealing with the errors as necessary. 
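Putting the shape described so far together, a rough usage sketch might look like the following (everything here is hypothetical: the resource syntax, constructor signature, option names, and method names are still under discussion, and nothing is implemented in engines today):

```js
// A message resource as a source string. The MessageFormat 2 and resource
// syntaxes are still being defined in the Unicode working group, so the
// syntax below is only indicative.
const source = `
greeting = {Hello {$place}!}
`;

// Hypothetical construction and use, following the shape described above
// (argument order, option names, and method names are illustrative):
//
//   const mf = new Intl.MessageFormat('en', source, {
//     // user-provided formatter functions, an error handler, etc.
//   });
//
//   const msg = mf.resolveMessage('greeting', { place: 'world' });
//   String(msg);    // "Hello world!", or fallback text on partial failure
//   msg.toParts();  // parts representation, for rich (non-string) output
```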
Now, this is effectively the API that we're proposing for the INTL dot message format object and it's possible, course, to do also provide a format method there. The polyfill for that is effectively in its entirety there on the right. So whether that ought to be included or not, this is a seperate concern. EAO: Now to dive into a little bit. What is this? resolved message that is mentioned on? What is that really doing? And that's it's a message value or it's extends what we're calling a message value. And it contains an iteration value. That's an iteration of other message values, which then can themselves be stringified and contain data, for example, about the Locale and otherwise. Now the message values themselves would be represented by plain objects, so we would not need to - so the interface is here are just interfaces for plain objects rather than any new primordial. They would include a type o to identify these. of And they would be provided, for example, for the existing values supported by INTL such as numbers, dates, all sorts of things in order to each of those to be able to Incapacitate, the message value. That is the value. The options, as well as providing a two parts sort of, a method for getting the formatted Parts, representation, literal part of our message, other things that I explicitly, In the source message like the hello world example, I showed you, hello comma space. There is a message literal. That would be the presenter is the message literal would represent that in the in the formatted output. And here again, another possible thing that can show up as a message value is a message fallback which there to allow for this sort of partial failure of the message of the of the resolution and formatting of a message. And this allows us to have a way, if a part of a message formatting fails. We can represent that - we can provide the best possible representation of the message otherwise. -EAO: Now to briefly also touch on the message formatter functions, which is one of the things that ought to go into the message options at the start of the thing in the Constructor and these provide a way to handle different sorts of values. Some of these such as number, for instance, would be provided natively by the spec, but others could be defined by users themselves for custom inputs, and these allow for, the example here, You ran some meters for that. ‘Some meters’ be formatted according to the best possible representation for that, that we can provide the syntax examples here, of course are very much temporary and under development as we are finalizing that work in the message format v2 working group. +EAO: Now to briefly also touch on the message formatter functions, which is one of the things that ought to go into the message options at the start of the thing in the Constructor and these provide a way to handle different sorts of values. Some of these such as number, for instance, would be provided natively by the spec, but others could be defined by users themselves for custom inputs, and these allow for, the example here, You ran some meters for that. ‘Some meters’ be formatted according to the best possible representation for that, that we can provide the syntax examples here, of course are very much temporary and under development as we are finalizing that work in the message format v2 working group. EAO: Now, wrapping up. When you put all of the preceding together. This is how effectively it would kind look like you would have a resource a strength. 
So Intl.MessageFormat takes no opinion on how you get the string to be available in the scope of the constructor, but the resource would contain messages such as 'new_notifications', which in this case is a plural selector on some count of notifications and provides a different message for each plural case. After constructing the Intl.MessageFormat instance you would call the resolveMessage method on it, providing whatever data you need, and then you get back the resolved message, which you could then, for instance, stringify with toString.

-EAO: Now, there are a couple of constraints still, because we're not implementing something that is finished; we're continuing work that is ongoing in the Unicode Consortium and, hopefully, here in TC39. The biggest one is that MessageFormat 2.0, the very first specification, which ought to be coming out in some number of months, will likely only define a syntax for single messages; in parallel with that, we're working on getting the message resource syntax specified under Unicode, so Intl.MessageFormat will depend on both of these. There are also some details of the message specification that are still a little uncertain and may affect details of the JavaScript-level API, such as the message pattern elements: the literals, variable references, and so on. These might come to include something like explicitly representing display or markup elements, for instance saying that this part of the message ought to be a link and the link should point to this URL. And, as a slight extension of this, the exact extensibility points of the MessageFormat specification aren't completely settled yet, so that may still develop.
+EAO: Now, there are a couple of constraints still, because we're not implementing something that is finished; we're continuing work that is ongoing in the Unicode Consortium and, hopefully, here in TC39. The biggest one is that MessageFormat 2.0, the very first specification, which ought to be coming out in some number of months, will likely only define a syntax for single messages; in parallel with that, we're working on getting the message resource syntax specified under Unicode, so Intl.MessageFormat will depend on both of these. There are also some details of the message specification that are still a little uncertain and may affect details of the JavaScript-level API, such as the message pattern elements: the literals, variable references, and so on. These might come to include something like explicitly representing display or markup elements, for instance saying that this part of the message ought to be a link and the link should point to this URL. And, as a slight extension of this, the exact extensibility points of the MessageFormat specification aren't completely settled yet, so that may still develop.

EAO: And yeah, the agenda includes a link to the proposal. A lot of this work has been going on in the Unicode MessageFormat working group. There's a polyfill available for effectively almost this version of the spec, but without the syntax parsing, because that's not there yet. I've also collected here a couple of links to some of the prior discussions, in 2013 and 14.
On the the wiki ecmascript archive you can find information there and from 2016 to 2019, some of the discussion was happening under one of the issues in under the Ecma 402 and from there it was effectively spun off into the Unicode working group but that's my presentation and at the end of this, hopefully looking for stage 1, advancement for this happy to take any questions that you might have. @@ -333,7 +333,7 @@ SYG: That's not what I was saying either. I'm saying internal slots despite not JWK: Yeah, I won't do that. That's not the goal of the way one. -SYG: Okay, I think so. I think I've made myself clear on these points, we can move on. +SYG: Okay, I think so. I think I've made myself clear on these points, we can move on. MAH: Yeah, that means not really a question about it. It doesn't have to be like you can specify it. However want. You might be since simply setting a new stabilized flag in I know in a slot that is checked before any operation that we have before we have been mutating the other slots. It doesn't have to be freezing the content of slots obviously, @@ -353,7 +353,7 @@ MM: I can answer. JHX is correct. The stabilize addresses the needs of snapshot. JWK: read-only, [?], the snapshot is just like we used the second way that (add them) case by case. -MM: That's right. That's correct. Read-only collections as a proposal independent of this would just be an example of JWK's second way. And the point that SYG was making about premature generalization, the second way is exactly the option of not generalizing. Of just figuring out on a case-by-case basis. What are the stabilization API meaningful for collection? +MM: That's right. That's correct. Read-only collections as a proposal independent of this would just be an example of JWK's second way. And the point that SYG was making about premature generalization, the second way is exactly the option of not generalizing. Of just figuring out on a case-by-case basis. What are the stabilization API meaningful for collection? JHX: I see, thank you. @@ -363,7 +363,7 @@ RPR: Thank you. Yeah, I think I just point on Peter. MAH: I think from what I understand the difference between option one and option two is whether there is a language provided synchronization point on how an object should be asked to stabilize itself. So option one wouldn't forcibly stabilize the objects. It would just say that if you want to ask the object to stabilize it, it you use a well-known well-known symbol for that. -PH: right, but would not that - but in the example that was given the object is then - there's two things. I think there's, giving the object, the option to opt out, you know, to refuse to be stabilized. And I think we support that, I think there's there's good reasons for that. think the second part though, is giving the ability to perform the stabilization itself. And by doing that, you give the object the ability to change the semantics of what stabilized means for and that, that means. Then I think to SYG’s point. that means then that it's not predictable. What's going to happen and for something it's supposed to provide guarantees about immutability. T something that could be used, you know, for for to be able to safely pass an object around and know that it won't be modified. That's not useful like that breaks some of the core uses of that. +PH: right, but would not that - but in the example that was given the object is then - there's two things. 
I think there's, giving the object, the option to opt out, you know, to refuse to be stabilized. And I think we support that, I think there's there's good reasons for that. think the second part though, is giving the ability to perform the stabilization itself. And by doing that, you give the object the ability to change the semantics of what stabilized means for and that, that means. Then I think to SYG’s point. that means then that it's not predictable. What's going to happen and for something it's supposed to provide guarantees about immutability. T something that could be used, you know, for for to be able to safely pass an object around and know that it won't be modified. That's not useful like that breaks some of the core uses of that. MAH: Right, you have to trust that object. You asked to stabilize will in fact do that properly. @@ -411,7 +411,7 @@ Presenter: Kristen Hewell Garrett (KHG) KHG: So there's just a few issues that have been brought that people are concerned about. So the first one here is the naming of "isPrivate" and "isStatic" on the context object for decorators. These are just two booleans that allow people to understand whether or not the value being decorated, is a private value or is a static value or both. So, as DD notes here, there's precedent for using "is" for functions, and not using “is” prefixed for values. I don't have a strong opinion here. And as far as I understand, just based on all my context. There was not a particular reason to do this. Other than the fact that both static and private are keywords, but that wasn't a conscious design decision when we made this API decision. I think I just didn't realize that there was a naming precedent here. So I don't particularly have a strong opinion here. Does the committee, does anybody on the committee feel it should be one way or the other? -JSC: With t my developer hat on, I would strongly expect all boolean flags to have, no No, "is" prefix. would have been really surprised at properties named with is do not refer to predicate methods when using them from the options argument. I would prefer not to have Structuring like, have developers either. Not used. you know, `options.private`, or just rename the variables and destructuring like a `private: privateFlag`. The destructuring thing I think is way less important than matching developer expectations with regards to predicate naming. +JSC: With t my developer hat on, I would strongly expect all boolean flags to have, no No, "is" prefix. would have been really surprised at properties named with is do not refer to predicate methods when using them from the options argument. I would prefer not to have Structuring like, have developers either. Not used. you know, `options.private`, or just rename the variables and destructuring like a `private: privateFlag`. The destructuring thing I think is way less important than matching developer expectations with regards to predicate naming. KHG: Yeah, yeah. And for what it's worth, we have also always said that, you know, decorators usage patterns. Is they're authored by very few people and generally you only have a few of them overall and then they're used in many places. That's the whole point of meta programming in general, right? Is to, to be able to reduce amount of code. Anyways, so the point is the authoring experience can be a little bit less ideal, I think because it's just not as common to have to destructure these. And if you do, there are options for renaming them. 
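For illustration, this is what the renaming being discussed looks like if the context flags were exposed without the `is` prefix. The decorator itself and the exact set of context properties are illustrative, and running it requires decorator support (for example a transpiler); the point is only that `private` and `static` are reserved words in strict-mode code, so shorthand destructuring needs a rename, which is the minor cost JSC mentions.

```js
// A decorator receives (value, context). If the flags were exposed as
// `private` and `static` (no `is` prefix), destructuring them into local
// bindings needs a rename, because `private` and `static` are reserved
// words in strict-mode code (which includes modules):
function logged(value, context) {
  const { kind, name, private: isPrivate, static: isStatic } = context;
  console.log(`decorating ${kind} ${String(name)}`, { isPrivate, isStatic });
  return value; // no replacement; just observing
}

class C {
  @logged m() {}
  @logged static s() {}
}
```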
@@ -481,7 +481,7 @@ KG: and in particular, you can already put a private field there, just by wrappi MM: Okay, I'm satisfied. Thank you. Perfect. -WH: There's still, I believe, a conflict between using `@` for decorators and for pipeline operators for the cases of expressions in which we have infix non-reserved keywords. +WH: There's still, I believe, a conflict between using `@` for decorators and for pipeline operators for the cases of expressions in which we have infix non-reserved keywords. KHG: so you're talking about pipelines versus decorators. Okay. So, like an example that I guess would be like just this like `@foo`, there are some places in the where I @@ -530,7 +530,7 @@ SYG: This is some normative updates I am asking consensus for, for the resizable SYG: So first, we'll talk about making detached array checks, directionally consistent previous changes to these detached checks. the first method will be talking about is TypedArray.prototype.set. For those unfamiliar. It is a method that has two overloads. Basically, one of them you can pass in an array like and then it assigns to the receiver typed array, element wise, the elements from this array-like. The second form is you can pass in another typed array and then similarly, it assigns element-wise the values of the elements of from the other typed array into the receiver. This will be entirely talking about the first Forum where typed array.prototype does set takes an array like. So, because it's an array-like and array-likes are just objects with a length and maybe elements, it is possible to have shenanigans like this where you have an array index getter, which then detaches the buffer that you are setting into. Currently, what happens is that if you detach the buffer in any elements getter, or in any kind of evil way like this during the iteration of the array-like when you're assigning with the receiver that is a typed array. We throw via a per-iteration detached check, so per iteration of the array-like where we are trying to take an element out and assign it into the receiver type of the array. We check if the buffer is detached and if it is we throw. -SYG: My position here is that this is no longer consistent with a change we made a couple years ago. I don't quite remember when but this is a change from RKG here selling who did a great work in wrangling, the different implementation behaviors finding out real world Behavior. But part of that work in #2208. We changed that. So, if we were to type a normal assignment of, you know, `...[0] = something`, and that typed array's backing buffer is detached, we changed that so that those assignments no longer throw. So my proposal here is to get rid of this per-iteration detached check on TA.prototype.set to be directionally consistent PR #2208. So concretely what this means is to use the ‘set’ abstract operation. And this is a screenshot of the spec text. Basically currently in the spec text of TypedArray.prototype.set, there's like manual inlining of typedarray logic to assign into the receiver typed array. I don't know why this is inline. Like it's that it didn't inline all of the logic, only some of the logic. My archaeology on that is that before #2208 the Set AO calls some internal method on exotic integer-indexed objects. And that implementation of that internal method of .set had a detached check. This was copy pasted and inlined into this method for whatever reason, instead of going through the set a 0, which a normal assignment would do. 
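For illustration, a minimal sketch of the array-like "shenanigans" described above, where an index getter detaches the destination's buffer partway through the copy that `TypedArray.prototype.set` performs. The detach here uses `structuredClone` with a transfer list, which is one host-provided way to detach a buffer; only the ordering matters, not the mechanism.

```js
const target = new Uint8Array(4);
const buf = target.buffer;

// An array-like whose second element getter detaches the target's buffer
// while TypedArray.prototype.set is iterating it.
const evil = {
  length: 4,
  get 0() { return 1; },
  get 1() {
    structuredClone(buf, { transfer: [buf] }); // detaches `buf`
    return 2;
  },
  get 2() { return 3; },
  get 3() { return 4; },
};

// With the per-iteration detached check in place, this throws once the
// buffer is detached; with the change being requested, it behaves like
// ordinary indexed assignment after #2208 and the remaining writes are
// silently dropped instead.
target.set(evil);
```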
So when we changed #2208 to get rid of that detached check, of course, we didn't know it was partially inline somewhere else and this detached check remained. So this if you look at, if you look at it through that lens, this is just a bug that we've missed. So a nice outcome from this proposed change is that TypedArray.prototype.set comes pretty simple In the spec text in that. It's just you iterate the array like source. And then you set on the target type directly. So that is the first change that I am asking for consensus on. I would like to take these one at a time before continuing on to the next one. Any concerns about this change that I'm asking for. +SYG: My position here is that this is no longer consistent with a change we made a couple years ago. I don't quite remember when but this is a change from RKG here selling who did a great work in wrangling, the different implementation behaviors finding out real world Behavior. But part of that work in #2208. We changed that. So, if we were to type a normal assignment of, you know, `...[0] = something`, and that typed array's backing buffer is detached, we changed that so that those assignments no longer throw. So my proposal here is to get rid of this per-iteration detached check on TA.prototype.set to be directionally consistent PR #2208. So concretely what this means is to use the ‘set’ abstract operation. And this is a screenshot of the spec text. Basically currently in the spec text of TypedArray.prototype.set, there's like manual inlining of typedarray logic to assign into the receiver typed array. I don't know why this is inline. Like it's that it didn't inline all of the logic, only some of the logic. My archaeology on that is that before #2208 the Set AO calls some internal method on exotic integer-indexed objects. And that implementation of that internal method of .set had a detached check. This was copy pasted and inlined into this method for whatever reason, instead of going through the set a 0, which a normal assignment would do. So when we changed #2208 to get rid of that detached check, of course, we didn't know it was partially inline somewhere else and this detached check remained. So this if you look at, if you look at it through that lens, this is just a bug that we've missed. So a nice outcome from this proposed change is that TypedArray.prototype.set comes pretty simple In the spec text in that. It's just you iterate the array like source. And then you set on the target type directly. So that is the first change that I am asking for consensus on. I would like to take these one at a time before continuing on to the next one. Any concerns about this change that I'm asking for. [silence] @@ -546,7 +546,7 @@ SYG: Okay, I will take silence as we are also okay with getting rid of this per USA: Yep -SYG: Moving on then to the second part, which is a normative change I'm asking for for TA.prototype.subarray for length tracking typed arrays. So a recap of what subarray does. So `subarray()`, unlike the other typed array methods, do not create a copy of the underlying buffer and a new typed array. `subarray()` creates a new typed array that is backed by the same buffer, meaning it creates a sub-window into the same arraybuffer. So, another quick recap for what a length tracking typed array is. 
A length-tracking typed array is that if you create a typed array with an undefined length and to be backed by a resizable buffer, that typed array, as part of the resizable buffers proposal, automatically tracks the length of the underlying array buffer. So, to combine these two, the question is: if the receiver typedarray is one of these length tracking arrays, should subarray begin where I pass in undefined as the SD end offset where I do not provide an end offset. Should this create a length-tracking typed array or a fixed-length typed array? So my mental model of what `subarray()` does is, you pass in these begin and end offsets? And then this subarray method doesn't do anything with typed arrays or array buffers, all this method does is it computes a new byte offset and a new length from begin and end, and it delegates everything else to a typed array constructor call of the same buffer with the new byte offset and the new length. And with this mental model, I think it makes the most sense that if `end` is undefined, we compute byte offset from past begin, but we don't do anything to - we do not compute a new length because there is no end, meaning we pass on undefined as the new length to the type director instructor and this means that, that form of the type of record structure creates a length, tracking typed array, which means subarray creates a length tracking typed array, if end is not passed. This is different than what the current spec says, the current spec draft says that if `end` is not passed, I think it gets computed to be like the actual length of the buffer at that time. So it creates a new fixed length typed array right now. And I'm proposing that if we were to not passing an end argument to sub-array that it creates length tracking typed arrays instead. Any discussion. I see there is a queue. +SYG: Moving on then to the second part, which is a normative change I'm asking for for TA.prototype.subarray for length tracking typed arrays. So a recap of what subarray does. So `subarray()`, unlike the other typed array methods, do not create a copy of the underlying buffer and a new typed array. `subarray()` creates a new typed array that is backed by the same buffer, meaning it creates a sub-window into the same arraybuffer. So, another quick recap for what a length tracking typed array is. A length-tracking typed array is that if you create a typed array with an undefined length and to be backed by a resizable buffer, that typed array, as part of the resizable buffers proposal, automatically tracks the length of the underlying array buffer. So, to combine these two, the question is: if the receiver typedarray is one of these length tracking arrays, should subarray begin where I pass in undefined as the SD end offset where I do not provide an end offset. Should this create a length-tracking typed array or a fixed-length typed array? So my mental model of what `subarray()` does is, you pass in these begin and end offsets? And then this subarray method doesn't do anything with typed arrays or array buffers, all this method does is it computes a new byte offset and a new length from begin and end, and it delegates everything else to a typed array constructor call of the same buffer with the new byte offset and the new length. 
And with this mental model, I think it makes the most sense that if `end` is undefined, we compute byte offset from past begin, but we don't do anything to - we do not compute a new length because there is no end, meaning we pass on undefined as the new length to the type director instructor and this means that, that form of the type of record structure creates a length, tracking typed array, which means subarray creates a length tracking typed array, if end is not passed. This is different than what the current spec says, the current spec draft says that if `end` is not passed, I think it gets computed to be like the actual length of the buffer at that time. So it creates a new fixed length typed array right now. And I'm proposing that if we were to not passing an end argument to sub-array that it creates length tracking typed arrays instead. Any discussion. I see there is a queue. MAH: Yeah. So from what I understand you want to say that if n is not classified. it's assumed to be the end and it should track. What happens if end is specified as a negative number, which is suppose as I understand that we want to stop it at enough set from the end and now the end changes? @@ -610,12 +610,11 @@ SYG: Right. And with that clarifying stuff answered hopefully, I am asking the c [call for support] -KM: And I just think it's more intuitive than the other things because it matches what the constructor does. Sorry, the new behavior is more consistent with the constructor than the other behavior and it generally is a more ergonomic way to get, the the the tracking view that otherwise would be pretty inconvenient where it's getting fixed view, is only just asking for the length, you just explicitly past the end and you're done. +KM: And I just think it's more intuitive than the other things because it matches what the constructor does. Sorry, the new behavior is more consistent with the constructor than the other behavior and it generally is a more ergonomic way to get, the the the tracking view that otherwise would be pretty inconvenient where it's getting fixed view, is only just asking for the length, you just explicitly past the end and you're done. ### Conclusion/Resolution -Removing the per-iteration detached checks in the existing .set and .sort -In the resizable buffers proposal, changing the behavior of subarray when `end` is undefined so that it creates a length-tracking subarray, for consistency with the constructor +Removing the per-iteration detached checks in the existing .set and .sort In the resizable buffers proposal, changing the behavior of subarray when `end` is undefined so that it creates a length-tracking subarray, for consistency with the constructor ## Change array by copy @@ -642,7 +641,7 @@ ACE: Third, We have one thing that was discussed is the return type of `toSplice ACE: Lastly, `with` indexing. So, there's two kinds of things about it, that we've talked about and decided. One is that it uses the abstract operation `toIntegerOrInfinity` just like all other array methods do. so this does mean, you can do all your loosely dynamically typed, lovely things of passing in strings that are obviously indexes and they'll be no exception, `with` will just coerce that to an integer index. So in my example here, if I have my array, `[1, 99, 3]` if I do `with(1, 2)` then I'm saying index, one, is now replaced with two. `with` with 1.55 is the same as also trying to do with with just the integer 1. The second thing is that, there's no way to grow the array or get a longer array back. 
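For illustration, the `with` behavior just described (and elaborated on below): indices go through the same ToIntegerOrInfinity coercion as other array methods, the original array is never mutated, and going past the end is a RangeError rather than implicit growth. This matches the proposal as presented; at the time it required the polyfill to run.

```js
const arr = [1, 99, 3];

console.log(arr.with(1, 2));    // [1, 2, 3]   index 1 replaced; arr untouched
console.log(arr.with(1.55, 2)); // [1, 2, 3]   1.55 truncates to index 1
console.log(arr.with('1', 2));  // [1, 2, 3]   '1' coerces to the integer 1
console.log(arr.with(-1, 7));   // [1, 99, 7]  negative indices count from the end

try {
  arr.with(5, 0); // out of range: `with` never grows the array or adds holes
} catch (e) {
  console.log(e instanceof RangeError); // true, unlike `arr[5] = 0`
}

console.log(arr); // [1, 99, 3]  the original is never mutated
```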
So it's not like when you - earlier, I said that with could be seen as a non-mutating version of index assignment. That's not quite true in terms of the fact that index assignment to an array will implicitly grow that array and fill it with holes if needed. this doesn't happen here; if you try and `with` beyond the length of the array, you get a RangeError. -ACE: So yes, we've got a complete specification. It's being reviewed. I got in touch with SYG and KG just to check kind of their editorial side and I didn't have a chance to reach out to MF, sorry. But it seems like they are happy with conditional passing to stage 3, you know, they'll give it kind of their complete review some point in the near future. We do have a JavaScript polyfill implementation that matches the spec. We do also have - and this is not really required for stage 3, but I think it's a great thing to call out - we have some work at Igalia; TJC, did a spidermonkey implementation and I think the thing that's currently merged in the SpiderMonkey isn't completely spec compliant. There are a modifications, but there's a Believer, a PR opened up. Has those necessary changes and ADT at Igalia has been working on a webkit implementation as well. We also have, thanks to NRO, we have test262 tests as well. That's not merged but he's got a fork with the tests there as well. +ACE: So yes, we've got a complete specification. It's being reviewed. I got in touch with SYG and KG just to check kind of their editorial side and I didn't have a chance to reach out to MF, sorry. But it seems like they are happy with conditional passing to stage 3, you know, they'll give it kind of their complete review some point in the near future. We do have a JavaScript polyfill implementation that matches the spec. We do also have - and this is not really required for stage 3, but I think it's a great thing to call out - we have some work at Igalia; TJC, did a spidermonkey implementation and I think the thing that's currently merged in the SpiderMonkey isn't completely spec compliant. There are a modifications, but there's a Believer, a PR opened up. Has those necessary changes and ADT at Igalia has been working on a webkit implementation as well. We also have, thanks to NRO, we have test262 tests as well. That's not merged but he's got a fork with the tests there as well. ACE: So, I haven't kept an eye on the Queue, but we believe that we are a kind of a position where we've done everything we can do and the thing that we're really needing to extend. This is more implementation experience and kind of external usage, to type of things we can and stage 3. @@ -769,7 +768,7 @@ JRL: You can have empty lines anywhere in your content. So, in this case line fo JRL: Expressions do factor into common indentation here. There four shared space characters between these three content line and importantly, the evaluation of an interpolation, of a text interpolation, is not used as part of the common indentation. It's only the literal, the syntax indentation in source code that you are dedenting here. Not the evaluation. So even though there are two spaces that I'm going to interpolate into the string, the common indentation is not going to be 6 space characters. It's only going be 4 space characters because that is what's represented in our text block's syntax, so all the lines will have at least two space characters here. -JRL: And finally, there's a form of tagged dedent. So the same way that you can do a template tag invocation. 
You can do the same thing with a dedent block in this case. I'm showing you what happens with a hypothetical python interpreter template tag. Python is famously a white space sensitive language. So if we were to try and do this tagged template evaluation, normally it would throw an error because there's unnecessary white space and that's invalid in Python. You could dedent your block manually so that it doesn't look correct in your source code and then it would work in print out "hello python world". But with a string.dedent tag, we're creating a brand new template tag wrapper. What will happen here is it takes the invocations template strings array, it'll perform the dedent on the template strings array and then pass it to the underlying template tag, to the python interpreter. So the python interpreter will receive dedented template strings array, that will not contain any leading indentation in this case, so it'll continue to work in print out "hello python world" again. +JRL: And finally, there's a form of tagged dedent. So the same way that you can do a template tag invocation. You can do the same thing with a dedent block in this case. I'm showing you what happens with a hypothetical python interpreter template tag. Python is famously a white space sensitive language. So if we were to try and do this tagged template evaluation, normally it would throw an error because there's unnecessary white space and that's invalid in Python. You could dedent your block manually so that it doesn't look correct in your source code and then it would work in print out "hello python world". But with a string.dedent tag, we're creating a brand new template tag wrapper. What will happen here is it takes the invocations template strings array, it'll perform the dedent on the template strings array and then pass it to the underlying template tag, to the python interpreter. So the python interpreter will receive dedented template strings array, that will not contain any leading indentation in this case, so it'll continue to work in print out "hello python world" again. JRL: so, the first big question we have here is, should there be syntax to represent a dedent block? There's been a couple of attempts here by community members, to propose different syntaxes. It seems like the community wants a syntactic form to do this for them. The original attempt that we had was to use triple backticks. This isn't 100% web compatible but I think it'd be exceedingly rare to have a case where someone is already doing this. But the community has proposed a couple of other potential syntax that we can do. For instance, using triple quote characters so you could have triple double quotes or triple single quotes. They've also proposed using double quote followed by a tick (`), and this could potentially allow expression evaluation into your literal and then if you were to use triple double quotes, you would use a non interpreting expression, be like literal text, like a normal string block. RBN has suggested and the current readme text uses the`@` character to denote a dedent block. So, `@` followed by a backtick is a dedented string block and expressions would be allowed in this case. In the same syntax, if you were wanting to use a tag template literal that would work out perfectly, so can do the same python interpreter and pass it at syntatic dedent, and it'll evaluate correctly. @@ -787,7 +786,7 @@ JRL: Yes, correct. JRL: Okay. There's a repl so that you can test all this for yourself. 
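For illustration, a sketch of the two usage forms just described, written against the proposal as it stood; the exact semantics and whether a dedicated syntax replaces the tag were still open, so the commented results are approximate. `pythonTag` is a hypothetical tag standing in for the "python interpreter" example.

```js
// Plain use: String.dedent as a template tag strips the indentation common
// to the content lines (interpolations do not count toward it).
const query = String.dedent`
  query {
    name
  }
`;
// roughly "query {\n  name\n}"

// Wrapped use: String.dedent(tag) returns a new tag that dedents the
// template strings array before handing it to the underlying tag, so a
// whitespace-sensitive consumer sees no leading indentation.
const pythonTag = (strings, ...values) => String.raw(strings, ...values);
const program = String.dedent(pythonTag)`
  if True:
      print("hello python world")
`;
```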
I have a reference implementation and in npm. But I haven't merged all the codes to GitHub. It's not super easy to view the code, but you can play around with the repl if you would like to you to figure out what all the cases are there. -MM: Okay. Thank you. And with regard to String.indentable, I think I might have a suggestion of for a more pleasant way around the problem. I don't like having to introduce a new concept of indentable but I do understand the problem you explain so I'll just I'll just leave that as something to come back to. +MM: Okay. Thank you. And with regard to String.indentable, I think I might have a suggestion of for a more pleasant way around the problem. I don't like having to introduce a new concept of indentable but I do understand the problem you explain so I'll just I'll just leave that as something to come back to. JRL: Okay, welcome to leave comments on the issue GitHub. I don't have the number on me at the moment, but there is an open issue. It's not currently part of this proposal, but I'd be happy to tackle it as a follow-up. @@ -825,7 +824,7 @@ JRL: I'm sorry, I didn't catch that last part. MF: Instead of overloading the tag so it takes just a single string. Why not provide a String.prototype method where you can have any semantics you want? - JRL: Okay. +JRL: Okay. MF: I just want to know, is there a reason why we would overload the tag and have that be called as a function instead of just having a prototype method? I think that a String.prototype method would be idiomatic JavaScript @@ -833,7 +832,7 @@ JRL: okay. I hadn't considered a prototype method. But it that sounds okay, to m MF: Also, it doesn't need to be part of this proposal like this, both of these sound like they can be add-ons later. -KG: Okay. Well, it can only do the behavior I'm discussing, it can only be an add on later if it throws when passed a string instead of, I don't know, treating it as an array-like, I guess? so we would need to do a little bit of reserving space for it. +KG: Okay. Well, it can only do the behavior I'm discussing, it can only be an add on later if it throws when passed a string instead of, I don't know, treating it as an array-like, I guess? so we would need to do a little bit of reserving space for it. JRL: Yes, I could add in a runtime error, if you invoke the function with a string for now. I doubt that would even evaluate correctly if you were to do it. I don't know if anyone would want to try. @@ -865,7 +864,7 @@ JRL: Okay. That's it. That's the queue. JRL: Okay, so I’m not going for stage advancement at this time. I'll bring this back next time for stage two advanced hopefully. Points that I've heard are - MM’s point, that I can’t remember right now, I'm sorry. I can't remember at the moment. But there's also this topic that KG is talking about about passing a string, the calling function with a string instead of invoking it as a tagged template literal. Oh and indentable expressions. So if we want to support “indentable” as a follow-on proposal so that the output will be re indented correctly, both of those already have open issues on the issue tracker. So, I encourage you to post your comments on the issue trackers. That's everything I have. -USA: Okay, great. Thank you. +USA: Okay, great. Thank you. ## Incubation chartering @@ -875,13 +874,13 @@ JSC: I would like to request that the array.fromAsync item. also be expanded to The default mapping function. Sure from leasing to expand to function. -MM: I'm sorry. What is the default mapping function? 
+MM: I'm sorry. What is the default mapping function? JSC: all right, up from async is currently trying to match a rate up from in that. it lets the developer Supply a mapping function as its second argument, but there are Shenanigans having to do with whether we should double like treat it as an identity function. Whether we should double await by default that sort of thing MM:. Oh, okay. Thank you. It's very clarifying. -SYG: Barring any. Volunteers. I Will propose The Decorator metadata stuff to be an incubator called topic to get attendance from engine implementers specifically. Firefox, folks, and V8 folks about given that we were the ones who objected to the metadata being included in the current decorator proposal. We have an incubator a call about just metadata at large. guess. I'm moving forward on whether it be. removal, or a simplification or what to do with the metadata separate proposal. +SYG: Barring any. Volunteers. I Will propose The Decorator metadata stuff to be an incubator called topic to get attendance from engine implementers specifically. Firefox, folks, and V8 folks about given that we were the ones who objected to the metadata being included in the current decorator proposal. We have an incubator a call about just metadata at large. guess. I'm moving forward on whether it be. removal, or a simplification or what to do with the metadata separate proposal. USA: Yeah, I think that's a great idea. The Champions. I don't believe are here, but maybe we can confirm that offline diff --git a/meetings/2022-03/mar-31.md b/meetings/2022-03/mar-31.md index f44c2069..8c843d23 100644 --- a/meetings/2022-03/mar-31.md +++ b/meetings/2022-03/mar-31.md @@ -55,11 +55,11 @@ KG: We do have a bunch of other slides that we called an appendix talking about RPR: Thank you. All right, then. I think we're ready begin and the first person on the queue is JHD -JHD: Okay, so The subclassing question as Kevin and Michael laid out - it affects a lot of things. Observability, can harm performance optimizations and can create all sorts of weird things with proxies and stuff like that, it can affect “borrowability”, so it is sort of related to defensive code? it may not be a common practice, but it is a practice that affects a lot of people transitively you can do like `Set.prototype.add.call`, and you can cache the method and right we've talked about `callBind` earlier in this meeting, So that that's a relatively common thing in my code, at least to make sure that I don't call the add method on the thing people give me, but that I instead if they give me a set that I use sets functionality on it directly. So for example, if you tried to enforce an additional invariant via a subclass my code is just going to blindly ignore all of your invariants because it uses the base class methods, and that sort of dovetails into overridability. It's sort of the only way that the current set of subclassing things works is if everyone is just kind of faithfully calling methods directly on your object, which is admittedly a common thing to do. But there's nothing in the language that requires it and so it sort of it means that you basically have can have no guarantees when you are making a subclass and that philosophically sucks, but that also might suck if you're interested in your code providing guarantees. 
So, an idea that I've been tossing around for a while, based on Bradley's proposal a while back (the Set and Map, key and value constructor hooks) is essentially if subclasses never had to overwrite any methods, and instead at construction time could provide hooks or alternative implementations for internal algorithms - then it feels to me, like, everything else would just naturally work like, idiomatically, it would work with robust defensive coding patterns. It would work still just fine with everyone just doing `.add` on Set subclasses, and so on. I just kind of wanted to throw that out there - not as a proposal idea - but to more like - It seems to me like that would be a way that would check the most boxes and answer a lot of these questions - and I've talked about this with you, Kevin and Michael, a few times - is that I was hoping to get sort of the room’s thoughts on that as a general approach. For example, just to give a concrete example before I stop speaking, if I wanted to use SameValue instead of SameValueZero in a Set, when I construct the Set I could pass an options bag that has like a “something” property, that function would you same would be a predicate that provides SameValue semantics instead of SameValueZero semantics. And then at that point `Set.prototype.add.call` on my set instance would use my SameValue semantics because that instance had already been constructed with those hooks. So that's like a concrete example to think about +JHD: Okay, so The subclassing question as Kevin and Michael laid out - it affects a lot of things. Observability, can harm performance optimizations and can create all sorts of weird things with proxies and stuff like that, it can affect “borrowability”, so it is sort of related to defensive code? it may not be a common practice, but it is a practice that affects a lot of people transitively you can do like `Set.prototype.add.call`, and you can cache the method and right we've talked about `callBind` earlier in this meeting, So that that's a relatively common thing in my code, at least to make sure that I don't call the add method on the thing people give me, but that I instead if they give me a set that I use sets functionality on it directly. So for example, if you tried to enforce an additional invariant via a subclass my code is just going to blindly ignore all of your invariants because it uses the base class methods, and that sort of dovetails into overridability. It's sort of the only way that the current set of subclassing things works is if everyone is just kind of faithfully calling methods directly on your object, which is admittedly a common thing to do. But there's nothing in the language that requires it and so it sort of it means that you basically have can have no guarantees when you are making a subclass and that philosophically sucks, but that also might suck if you're interested in your code providing guarantees. So, an idea that I've been tossing around for a while, based on Bradley's proposal a while back (the Set and Map, key and value constructor hooks) is essentially if subclasses never had to overwrite any methods, and instead at construction time could provide hooks or alternative implementations for internal algorithms - then it feels to me, like, everything else would just naturally work like, idiomatically, it would work with robust defensive coding patterns. It would work still just fine with everyone just doing `.add` on Set subclasses, and so on. 
I just kind of wanted to throw that out there - not as a proposal idea - but to more like - It seems to me like that would be a way that would check the most boxes and answer a lot of these questions - and I've talked about this with you, Kevin and Michael, a few times - is that I was hoping to get sort of the room’s thoughts on that as a general approach. For example, just to give a concrete example before I stop speaking, if I wanted to use SameValue instead of SameValueZero in a Set, when I construct the Set I could pass an options bag that has like a “something” property, that function would you same would be a predicate that provides SameValue semantics instead of SameValueZero semantics. And then at that point `Set.prototype.add.call` on my set instance would use my SameValue semantics because that instance had already been constructed with those hooks. So that's like a concrete example to think about KG: Yeah, so I'd especially like to hear from implementations on that. -MF: Well, I think I can prime this discussion a little bit. We had some slides where we talked about implementation freedom versus extensibility, and how depending on what you make observable you limit what kinds of implementations can exist. Some of the examples you gave are actually good examples of how that implementation freedom will be limited if you were to replace that fictional comparison operation that happens in Sets with SameValue. That would make Sets actually have to do comparisons against all their values. So this would mean adding to a set would be a linear operation. So log time if they're backed by a hashing operation. Whereas if the implementation wants to change a Set from SameValueZero to SameValue, they could just change the hashing algorithm. By giving the freedom to change it with this conceptualization of a Set, you now have limited what kinds of conceptualizations the implementation has to align with. +MF: Well, I think I can prime this discussion a little bit. We had some slides where we talked about implementation freedom versus extensibility, and how depending on what you make observable you limit what kinds of implementations can exist. Some of the examples you gave are actually good examples of how that implementation freedom will be limited if you were to replace that fictional comparison operation that happens in Sets with SameValue. That would make Sets actually have to do comparisons against all their values. So this would mean adding to a set would be a linear operation. So log time if they're backed by a hashing operation. Whereas if the implementation wants to change a Set from SameValueZero to SameValue, they could just change the hashing algorithm. By giving the freedom to change it with this conceptualization of a Set, you now have limited what kinds of conceptualizations the implementation has to align with. JHD: For that specific example, instead of providing a predicate, you could provide instead a transformation function like “use this value for comparisons” so that you would only call it once for each item and then if you wanted SameValue semantics, you'd make all strings with a prefix in all numbers, you would convert to a string and I don't, you know, you could come up with some implementation here. @@ -153,7 +153,7 @@ KG: Effectively, what do you mean by that? 
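For illustration, two small sketches of the trade-off being discussed. The first shows why overriding methods on a subclass provides no guarantee: borrowed base-class methods bypass the override. The second sketches the construction-time hook JHD is floating; the second constructor argument and its `keyBy` option are entirely hypothetical, and using a transformation function rather than an equality predicate is what would keep hash-based lookup possible, per MF's point.

```js
// 1. Overriding is not enforceable: anyone can borrow the base method.
class UpperSet extends Set {
  add(value) { return super.add(String(value).toUpperCase()); }
}
const s = new UpperSet();
s.add('a');                     // stored as 'A', as the subclass intends
Set.prototype.add.call(s, 'b'); // bypasses the override; stored as 'b'

// 2. The idea being floated (hypothetical API): supply a key-derivation
//    hook once at construction, so even borrowed built-in methods honor it.
//    Distinguishing -0 gives SameValue-style keying while staying hash-based.
//    (Sketch only; a real hook would need to handle objects, symbols, etc.)
const t = new Set([], {
  keyBy: (v) => (Object.is(v, -0) ? 'number:-0' : `${typeof v}:${String(v)}`),
});
```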
KM: Like changing how like if I wanted to change, how exactly what's in some like, very fundamental way like changing how matchworks, or something have to change something about how matchworks subtly at least to get it to like work with work with your new concept. If that makes sense like it's not like adding stuff on top. It's like doing like changing a very small component individually without changing all the high level operations of the class as well. -KG: Well, I can say that I have definitely seen multiple implementations of like frozen set that just override add, delete, and clear to throw. So I can say that people in JavaScript at least try to do that. It just like, it fundamentally doesn't work because someone else can always just call Set.prototype.add. +KG: Well, I can say that I have definitely seen multiple implementations of like frozen set that just override add, delete, and clear to throw. So I can say that people in JavaScript at least try to do that. It just like, it fundamentally doesn't work because someone else can always just call Set.prototype.add. KM: Sure, I mean more like but those are sort of like the end level abstraction, I guess in a lot of ways. ways. It's sort of more when you change how like, I everyone's like yeah, it's sort of like if you change how add works then or like get works and expect like, do you expect iteration to still work and you do that. Like, if supposing that iteration used the get operation on a set. Like I would sort of expect that like that would There's a pretty decent chance that when I'm writing that version that like I would sort of expect that iteration wouldn't quite work the way that it used to work and would have weird issues, is that? I don't know. I don't know if what I'm saying Makes sense. @@ -165,7 +165,7 @@ KG: Yeah, that sounds right to me. KM: Okay, then my topic is already covered. -SYG: I think what KM is saying is for things for which it seems like at least there is some semblance of an algebraic understanding like Set. You maybe think that a minimal core ought not break the outer methods as he said but for things where the algebraic understanding is literally just the instructions. This is going to execute and the invariants for the properties that happened to hold because of the instructions that the default thing executes, like there is no way I think there is any reasonable expectation. you replace like exact and have other stuff work other than just trial and error. So I think that to me is the design Criterion that separates low-level and high-level. Like there is, at least some broad, agreement and understanding of what like algebraically a map and a set is. A regular expression engine etc, what it with that should be like, I don't think we can communicate that in a clear way for it to be hooking one. +SYG: I think what KM is saying is for things for which it seems like at least there is some semblance of an algebraic understanding like Set. You maybe think that a minimal core ought not break the outer methods as he said but for things where the algebraic understanding is literally just the instructions. This is going to execute and the invariants for the properties that happened to hold because of the instructions that the default thing executes, like there is no way I think there is any reasonable expectation. you replace like exact and have other stuff work other than just trial and error. So I think that to me is the design Criterion that separates low-level and high-level. 
Like there is, at least some broad, agreement and understanding of what like algebraically a map and a set is. A regular expression engine etc, what it with that should be like, I don't think we can communicate that in a clear way for it to be hooking one. KM: I think I would largely agree with that. That kind of makes sense. Yeah, if you're trying to expose a low level component Effect to have of like a things. JavaScript works. It's going to be a lot harder for someone to kind of plug in play with. In the middle. @@ -269,13 +269,13 @@ Presenter: J.S.Choi (JSC) - [updated article](https://jschoi.org/22/es-dataflow/) - [post-plenary ad-hoc discussion](https://github.com/tc39/incubator-agendas/blob/main/notes/2022/01-27.md) - JSC: All right. Good morning afternoon everyone. I’m JSC. I'm with a. I'm here with a return of an item that we talked about last plenary, which is trying to view five, proposals that overlap each other holistic way, strategically. So we've had two meetings, a plenary meeting in a post. Plenary meeting. This is a Redux of that. It's Open-ended discussion. I'd like to quickly go through this article. I linked it in matrix, but I can link it again right now. On Matrix. I'm going to through this article fairly quickly to give as much time as possible to discussions since since I gave up some time. So we'll look. Here we go. So, you know like last time like at the last plenary. Dr. Miller mentioned that you know, the proposal process has pathology and that it emphasizes new individual proposals and it kind of D emphasizes their cross-cutting concerns how they overlap and such. So we it's tough to get a unified language unless we actively fight against that. So this is kind of an effort to try to view this space holistically to try to do the sort of holistic strategizing that was happening pre es6 before. So we've got we've got like five proposals here. You might remember this diagram. There have been some changes particular, bind-this has become call-this. The bind-this operator down here, used to you. That used to support also function binding. It's an infix operator that its left hand side is some object and the right-hand side is some function. and so it tries to it, tries to change the receiver of that function. And it used to support creating bound functions. In addition to calling that bound function with arguments. Now, it's basically just a version of dot call. We got rid of function binding. So it actually is functionally a subset of extensions, extensions being this proposal that touches on a bunch of things. +JSC: All right. Good morning afternoon everyone. I’m JSC. I'm with a. I'm here with a return of an item that we talked about last plenary, which is trying to view five, proposals that overlap each other holistic way, strategically. So we've had two meetings, a plenary meeting in a post. Plenary meeting. This is a Redux of that. It's Open-ended discussion. I'd like to quickly go through this article. I linked it in matrix, but I can link it again right now. On Matrix. I'm going to through this article fairly quickly to give as much time as possible to discussions since since I gave up some time. So we'll look. Here we go. So, you know like last time like at the last plenary. Dr. Miller mentioned that you know, the proposal process has pathology and that it emphasizes new individual proposals and it kind of D emphasizes their cross-cutting concerns how they overlap and such. So we it's tough to get a unified language unless we actively fight against that. 
So this is kind of an effort to try to view this space holistically to try to do the sort of holistic strategizing that was happening pre es6 before. So we've got we've got like five proposals here. You might remember this diagram. There have been some changes particular, bind-this has become call-this. The bind-this operator down here, used to you. That used to support also function binding. It's an infix operator that its left hand side is some object and the right-hand side is some function. and so it tries to it, tries to change the receiver of that function. And it used to support creating bound functions. In addition to calling that bound function with arguments. Now, it's basically just a version of dot call. We got rid of function binding. So it actually is functionally a subset of extensions, extensions being this proposal that touches on a bunch of things. JSC: So also, although I've kept this part of the diagram within the pipe operator, based on some conversations I had with some of the representatives. I'm starting to consider the call-this part. It's not really overlapping with pipe in so far that the pipe version of `.call` expressions is really clunky and unreadable and I'll talk about that. But to zoom in once again, "call this" has dropped creating bound functions, in that it currently only supports immediately calling functions with different `this` arguments. And, and although I'm keeping this section within pipe, operator. I mean, I crossed it outreally is. it out here. I don't think that receiver owner dot owner dot method dot call with topic arguments is really that much of an improvement. So it improves the word order, but it's very clunky. Other than basically everything else is the same. And I'll go over the results of the ad hoc meeting afterwards, since a lot of the plenary wasn't there, at the end of this article. JSC: Im not really going to review the proposals themselves. You can read the explainers. And it also has a summary in the article itself. They overlap in different ways. I'm going to really focus more, broadly, and high level on the differences on how they approach paradigms. You can see I added color coding to this diagram here and I'm going to consistently use that color coding in the rest of the article. Whereas red indicates APIs that don't use the, ‘this’ binding and like, quote functional unquote, APIs and blue indicates, APIs that use that ‘this’ binding like, quote object-oriented, unquote APIs, and I include duck doc, call in this. so, -JSC: Just to really to go broadly on what the point of all this is. I'm going to use a term called data flow that it's in the title of this thing. It's basically the idea of that you transform some raw data. It can be really any value with a series of sequence of steps and they can be function calls, method calls, whatever. And these five proposals try to improve them in different ways in order to make them more natural, more readable, more fluent. More linear more ,more fluent. And so I give this example here that we have in the status quo, already a flow away to create fluent data flow using a prototype based method chains. And here we have a nice linear data flow kitchen, get fridge, find with predicate count, to string. And so can, you can see the numbers at the top there. In contrast, if you use you use any other kind of a API, in particular function calls that don't use this. You get a zig zagging, you get deeply nested expressions that result in a zig, zagging by reading order. 
You can just follow the numbers up here. So we have a term called fluent interfaces, there's even a Wikipedia article about it and it's right now only available in the status quo with prototype based chains. So the idea behind something like the pipe operator is to try to make it so that we can express these fluent dataflows with other sorts of API. It's not just prototype based method chains where the method already belongs to the Prototype. Now, reading order isn't the only factor in data flow fluency. There's also excessive boilerplate, and by boilerplate I mean visual noise. I mean stuff that isn't essential to understanding the meaning of essential stuff and just gets in the way. Hypothetically for instance, The pipe operator wouldn't improve that original prototype based method chain, if you try to use the pipe operator on each of these, it's a lot of wordier. it's worse. To give another example, you can see that dot call is very common in object oriented code you, if you follow that link, you can see our dataset and our results, you can reproduce it yourselves. And so if we try to improve find, this thing with `.call call` using the pipe operator, it arguably gets worse, which is why it is struck out. Find dot call pipeline dot call topic predicate is arguably, even even worse than using find dot call up here, though the word order gets improved. Just excessive boilerplate with the stock call topic, whatever so that, that's why a separate operator may be here to, to reduce the excessive boilerplate. And as a reminder, .call is a very common operation in the function in the language, the pipe Champion group has really been thinking about whether it's possible to modify the pipe operator to address.calls, clunkiness without compromising the pipe cases and the pipe operator, that we really haven't figured out any way. Other than other than making what's essentially a new operator. +JSC: Just to really to go broadly on what the point of all this is. I'm going to use a term called data flow that it's in the title of this thing. It's basically the idea of that you transform some raw data. It can be really any value with a series of sequence of steps and they can be function calls, method calls, whatever. And these five proposals try to improve them in different ways in order to make them more natural, more readable, more fluent. More linear more ,more fluent. And so I give this example here that we have in the status quo, already a flow away to create fluent data flow using a prototype based method chains. And here we have a nice linear data flow kitchen, get fridge, find with predicate count, to string. And so can, you can see the numbers at the top there. In contrast, if you use you use any other kind of a API, in particular function calls that don't use this. You get a zig zagging, you get deeply nested expressions that result in a zig, zagging by reading order. You can just follow the numbers up here. So we have a term called fluent interfaces, there's even a Wikipedia article about it and it's right now only available in the status quo with prototype based chains. So the idea behind something like the pipe operator is to try to make it so that we can express these fluent dataflows with other sorts of API. It's not just prototype based method chains where the method already belongs to the Prototype. Now, reading order isn't the only factor in data flow fluency. There's also excessive boilerplate, and by boilerplate I mean visual noise. 
I mean the stuff that isn't essential to understanding the meaning and just gets in the way. Hypothetically, for instance, the pipe operator wouldn't improve that original prototype-based method chain; if you try to pipe each of these steps, it's a lot wordier, it's worse. To give another example, you can see that dot-call is very common in object-oriented code; if you follow that link, you can see our dataset and our results, and you can reproduce them yourselves. And if we try to improve this find-with-`.call` expression using the pipe operator, it arguably gets worse, which is why it is struck out. Find dot call, pipe, dot call topic, predicate is arguably even worse than using find dot call up here, even though the word order gets improved: it's just excessive boilerplate with the dot call topic and so on. That's why a separate operator may be needed here, to reduce the excessive boilerplate. And as a reminder, `.call` is a very common operation in the language. The pipe champion group has really been thinking about whether it's possible to modify the pipe operator to address `.call`'s clunkiness without compromising the pipe cases, and we really haven't figured out any way other than making what's essentially a new operator. JSC: Anyway, there's one more thing with regards to clunkiness, and that's that we can express data flows as a series of temporary variables, which is totally fine, but excessive use of temporary variables is pretty clunky too and can introduce a lot of redundant visual noise. There's a reason why prototype-based method chaining became so popular for fluent interfaces. So we get into what I would argue is an ongoing, current ecosystem schism in the realm of data flow, between object-oriented APIs that use `this`-based functions and functional APIs that use non-`this`-using functions. You can further split both of these into several "sub-paradigms", like using functions from prototype chains versus free `this`-using functions; and, for the functional paradigm, depending on the variety of the function, like whether they're curried unary functions or n-ary functions, and whether the inputs of interest that we're flowing the data through are zeroth arguments or last arguments. These paradigms have different trade-offs, but right now developers have to choose between APIs, and interoperability isn't that good. The trade-offs fall under two major factors: data-flow fluency, which I mentioned earlier, and module splitting. Module splitting is a very powerful force in today's ecosystem, due to the ongoing drive to improve performance. I'll talk about that in a little bit, but basically, right now fluent data flow is only possible in the status quo with prototype-based object-oriented flows. It's not supported with free object-oriented functions, and it's not possible with non-`this`-using function calls, but module splitting is possible with those other two. So developers have to choose between having fluent data flow or module splitting. This poor interoperability gets into a virality problem.
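A rough sketch of the `.call` pattern discussed above; the data is hypothetical, and the commented-out line only gestures at the proposed pipe syntax rather than anything valid today:

```js
// Borrowing Array.prototype.find for an array-like receiver is a very common
// use of `.call` today: the receiver ends up buried in the argument list.
const arrayLike = { 0: "milk", 1: "eggs", length: 2 };
const isMilk = (item) => item === "milk";

const found = Array.prototype.find.call(arrayLike, isMilk); // "milk"

// Routing the same call through a pipe topic would fix the word order but keep
// all of the `.call` boilerplate, e.g. (proposed, not valid today):
//   arrayLike |> Array.prototype.find.call(topic, isMilk)
// which is why a dedicated call-this operator, rather than pipe alone, is
// being argued for as the way to remove the `.call` noise.
```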
And again, this is ongoing right now in the status quo. By virality, I mean something that WH mentioned in the previous ad hoc meeting, which I thought was a great framework: when you have two different syntaxes that can do the same thing, more than one way to do it, interoperability determines how viral one choice becomes over the other. If it's easier to work with one syntax using that same syntax, then that's going to encourage fleets of new APIs to work only with that syntax and not the other, and I would include tool chains in this too: things like webpack and Rollup, stuff that does tree shaking and module bundling and splitting today. Module bundling and splitting is an extremely powerful force in the ecosystem, and it's only really possible with free functions. It used to be that data-flow fluency was maybe the primary driver, and that's why we saw many object-oriented APIs that were based on prototype chaining: think jQuery, think Mocha, think even the DOM. But code payload weight has become so powerful a force, with esbuild, Rollup, and webpack, that major APIs are switching from prototype-based OO paradigms to functional paradigms only because of the tree-shakeability. A major example is Firebase's JavaScript SDK. They were trying to have it both ways by having modularity, by monkey-patching methods into prototypes and allowing people to use prototype-based method chains only if they imported certain modules, but they gave up on those side-effecting imports and just switched over wholesale to functional-based stuff, giving up on data-flow fluency. @@ -285,13 +285,13 @@ JSC: So with regards to that. How do we Bridge the Schism? I would argue that in JSC: So, you know, there are different ways we can mix and match them. It brings up the question of: should we only bring in one proposal, is two the right amount, or even three? For instance, this row down here would require bringing in three proposals. `Function.pipe` doesn't really work on these two columns of the paradigm without a partial-function-application syntax, and then extensions down here has some funny things when it comes to how it handles non-`this`-using functions. I'll get into that in a little bit, but really, there are different ways of mixing if you want to mix. And if you want to bring in only one, then which? For instance, if we only brought in the pipe operator and not call-this: again, I argued earlier that the pipe operator doesn't improve dot-calls, which means it doesn't improve data flow for free `this`-using functions, which means it doesn't improve tree shaking, which means the pressures on the object-oriented paradigm in the ecosystem will not be improved. I get into that here; this just talks more about why I'm arguing that call-this and the pipe operator are complementary and do not overlap - that the pipe operator only really handles the functional paradigm and does not improve the object-oriented paradigm at all. I'm going to skip through that, and the rest is basically what I mentioned again: extension syntax addresses both object-oriented and functional paradigms.
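As a sketch of the module-splitting trade-off described above (module and function names are hypothetical), compare a prototype-based API with a free-function API:

```js
// fridge-class.js - prototype-based: fluent at the call site,
//   e.g. fridge.findAll(isMilk).count()
// but a bundler cannot easily drop unused methods, since they all hang off
// the prototype.
export class Fridge {
  constructor(items) { this.items = items; }
  findAll(pred) { return new Fridge(this.items.filter(pred)); }
  count() { return this.items.length; }
}

// fridge-functions.js - free functions over plain data: each export can be
// tree-shaken independently,
//   e.g. count(findAll(items, isMilk))
// but the data flow at the call site nests instead of chaining.
export const findAll = (items, pred) => items.filter(pred);
export const count = (items) => items.length;
```

A caller that only needs `count` pulls in just that one export in the second style, which is the kind of pressure that pushes SDKs toward free functions at the expense of fluency.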
The functional Paradigm, is addressed in kind of a special scenario where it's ternary form, where a function called has to belong to an owner object, and the owner object, can't be a Constructor and the function called the input of Interest that's being flowed through has to be the zeroth argument. It's a very specific scenario. -JSC: There is the concern that even if we're having an ongoing ecosystem Schism that ratifying a number of these proposals may worsen the schism for instance, if we ratify `call this` alone, so the risk of the Schism going back to WH's, framework earlier, depends on Paradigm interoperability, and also the balance between the pressures on API designers. And so, for instance, if we only did call-this, and we didn't do pipe, then it would encourage developers to use free this using functions, which would be tree shakeable, but it wouldn't improve that data flow fluency for functional APIs and non ‘this’ using APIs. And so there the interoperability isn't improved in data flow fluency, which means that even though it be improved with ecosystem tools, tree shakers and stuff for It warranted APIs. So it may worsen it may well worsen the Schism we're seeing; developers will be torn between or will be pushed towards object-oriented data flows and to give up functional data flows because functional data flows wouldn't have that fluency. Likewise the pipe operator alone. If we do the pipe operator alone, it would improve functional data flow fluency wouldn't improve interoperability with object-oriented data flows with tools like tree shakers today, because it wouldn’t improve `call`'s clunkiness. So using the pipe operator alone may worsen the ecosystem Schism and accelerate transitioning of APIs to Functional APIs, preferentially because that would continue to be the only fluent way to actually have tree shakeable functions. if we did both "call this" and pipe operator, it is possible that the pressures on the ecosystem between the two paradigms would equalize and you will have fluent interoperability between `call` using functions, as well as regular prototype based methods, and also with non this-using functional APIs, to having both tree Shake ability and also with and data flow fluid. On both sides of the spectrum. +JSC: There is the concern that even if we're having an ongoing ecosystem Schism that ratifying a number of these proposals may worsen the schism for instance, if we ratify `call this` alone, so the risk of the Schism going back to WH's, framework earlier, depends on Paradigm interoperability, and also the balance between the pressures on API designers. And so, for instance, if we only did call-this, and we didn't do pipe, then it would encourage developers to use free this using functions, which would be tree shakeable, but it wouldn't improve that data flow fluency for functional APIs and non ‘this’ using APIs. And so there the interoperability isn't improved in data flow fluency, which means that even though it be improved with ecosystem tools, tree shakers and stuff for It warranted APIs. So it may worsen it may well worsen the Schism we're seeing; developers will be torn between or will be pushed towards object-oriented data flows and to give up functional data flows because functional data flows wouldn't have that fluency. Likewise the pipe operator alone. 
If we do the pipe operator alone, it would improve functional data-flow fluency but wouldn't improve interoperability with object-oriented data flows or with tools like tree shakers today, because it wouldn't improve `call`'s clunkiness. So using the pipe operator alone may worsen the ecosystem schism and accelerate the transition of APIs to functional APIs, preferentially, because that would continue to be the only fluent way to actually have tree-shakeable functions. If we did both call-this and the pipe operator, it is possible that the pressures on the ecosystem between the two paradigms would equalize, and you would have fluent interoperability between `call`-using functions, as well as regular prototype-based methods, and also with non-`this`-using functional APIs, having both tree-shakeability and data-flow fluency on both sides of the spectrum. JSC: Similarly to the pipe operator, there's also the partial-function-application proposal (PFA) with `Function.pipe`; if we only did that, obviously it would accelerate the transition to functional data flows too. If hypothetically we did extensions alone, it's tough to predict, but it may worsen the schism, in that it would prevent interoperability with functional APIs that do not use the zeroth argument as their main data-flow input. It would also demand that API writers for functional APIs not use constructors as owner objects, and they may not be able to use free, non-`this`-using functions either. Extensions has some metaprogramming stuff that hypothetically could solve all this using runtime dispatch. I'll briefly touch on that, but I'm not going to really get into it, because I think it probably would be much less performant, but I'll touch on that right now. JSC: So, runtime cost. There's one last way to divide these proposals, and that's whether they're "zero-cost" abstractions, abstractions with no memory or time overhead during runtime, or whether they involve the runtime allocation of objects or dynamic runtime type dispatch. Pipe and call-this would be zero-cost abstractions, whereas `Function.pipe`, if you use it with partial function application, requires callback construction, which is I believe one reason why implementers were concerned about PFA syntax. Likewise, extensions involves dynamic type dispatch. It's kind of complicated, but in particular it has both a binary and a ternary operator, and the ternary operator depends on whether its middle operand is a constructor or not at runtime; depending on that, it either uses `.call` on a prototype method or treats it as a static non-`this`-using function. It also has a runtime metaprogramming system based on a well-known symbol that affects the behavior of both the binary and ternary operators. So hypothetically it may be a little more difficult to statically analyze; certainly a parser wouldn't be able to know, you would have to do some type analysis. -JSC: The last part of this article has to do with the appendix; a lot of this is drawn mostly from the ad hoc post-plenary meeting, although there's one item drawn from the prior meeting, and there are two statements that I'd like to draw attention to in particular.
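To make the runtime-cost distinction concrete, here is a sketch of roughly how the different forms would need to evaluate; the proposed syntaxes appear only in comments, and the desugarings are illustrative assumptions rather than spec text:

```js
const add = (a, b) => a + b;

// A Hack-style pipe step such as `5 |> add(topic, 1)` (proposed syntax) can
// be understood as a plain rewrite to an ordinary call - no extra allocation:
const piped = add(5, 1); // 6

// A partial-application step such as `add(?, 1)` (proposed syntax) has to
// produce a new function object at runtime, i.e. something like:
const addOne = (x) => add(x, 1); // callback construction on each evaluation
const applied = addOne(5); // 6
```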
There's one representative who has a hard requirement that the pipe operator be bundled together with call-this, or else that representative would block the pipe operator's advancement; they conditionally approved the pipe operator's advancement to stage one or two, I think like five years ago or so, on the condition that a call-this-like operator would advance - back then it was the double-colon bind operator, but whatever. So there's that issue. But at the same time, another representative is reasonably reluctant to have more than one syntactic data-flow proposal advance, and that representative is most positive about the pipe operator. So reconciling these two things is a big issue, especially with regards to a future for the pipe operator that is hard-coupled to call-this or something like that. And then there are some other findings from the ad hoc post-plenary meeting; there were like six representatives there. We discussed whether overlap is intrinsically bad; the sense was that it's a little undesirable - it's okay to have some, but too much is bad. TMTOWTDI, "there's more than one way to do it", continues to be a core part of the language, but we have to keep looking at each situation because we don't want too much of it; people tend to agree there's more than one way to do it in general. Extensions and call-this are still mutually exclusive, although you could say that call-this is future-compatible with +JSC: The last part of this article has to do with the appendix; a lot of this is drawn mostly from the ad hoc post-plenary meeting, although there's one item drawn from the prior meeting, and there are two statements that I'd like to draw attention to in particular. There's one representative who has a hard requirement that the pipe operator be bundled together with call-this, or else that representative would block the pipe operator's advancement; they conditionally approved the pipe operator's advancement to stage one or two, I think like five years ago or so, on the condition that a call-this-like operator would advance - back then it was the double-colon bind operator, but whatever. So there's that issue. But at the same time, another representative is reasonably reluctant to have more than one syntactic data-flow proposal advance, and that representative is most positive about the pipe operator. So reconciling these two things is a big issue, especially with regards to a future for the pipe operator that is hard-coupled to call-this or something like that. And then there are some other findings from the ad hoc post-plenary meeting; there were like six representatives there. We discussed whether overlap is intrinsically bad; the sense was that it's a little undesirable - it's okay to have some, but too much is bad. TMTOWTDI, "there's more than one way to do it", continues to be a core part of the language, but we have to keep looking at each situation because we don't want too much of it; people tend to agree there's more than one way to do it in general. Extensions and call-this are still mutually exclusive, although you could say that call-this is future-compatible with JSC: extensions does continue to polarize the committee. I believe that the champion for extensions plans to give an update to plenary later, and partial function application syntax also continues to polarize the committee. And that's about it with regards to my update on data flow.
Hopefully this framework is somewhat useful with regards to like ecosystem Schism, I'm arguing that there's an ongoing Schism right now to different pressures and an imbalance between data flows with the tree shake ability, and that developers are forced to choose between one and the other. But right now, tree shake ability is winning, which is why major APIs like Firebase are transitioning to that and giving up on object oriented APIs, but interoperability between the two remains poor in data flow expressions anyway. @@ -301,11 +301,11 @@ JHX: Okay. Thank you. I just want to give a very small explanation about extensi MM: Yeah, so first of all, to frame the discussion I want to respond to the statement about JHD’s position. The term "block" or "I will block" has a connotation of being actively obstructatory, and I want to make sure that that we're not reading that in, and I think the right framing of it is that consensus needs to be earned and being clear about what consensus has not been earned is a fine clarifying statement to make and helps inform the discussion. So I will do that as well. -MM: I think the language is already way over its syntax budget. A lot of the concepts here are things that I would consider if we were starting from a tiny language, or if we were designing a language greenfield. That's not the situation; the situation is we're talking about adding syntax to a language that already has so much syntax. It's a real burden on understanding what code means, especially for novices and we need to remember the special role that JavaScript has in the world. A lot of people learn programming first, not in school, but by looking at JavaScript and trying to learn from other people's code. There's just lots of sort of amateur people who pick up JavaScript that are not planning to be professional programmers. Their expertise is elsewhere, but they want to use the language. And even though they can stick to a subset they understand, if they're learning from reading other people's code, the more they have to understand before they can even start to have to decipher other people's code the worse off they are. So, my conclusion for this, that one outcome of this which I would be happy with is that we adopt zero of these proposals. I would be happy with that. I don't think any of these proposals clearly adds more value to the language than it subtracts. The one proposal that from the arguments I've heard I would be comfortable seeing go forward, or I would not withhold consensus from it if it seemed to have by itself enough momentum to go forward, which is just the pipe operator, the pipe operator itself and the topic, which I presume is `@`, those two operators alone with the current semantics that they have, which can be understood straightforwardly as a rewrite. "call-this", the more you JSC explained it in terms of multiple paradigms, the more that I thought the conclusion was a stronger case against "call-this". So let me explain that because I'm confused about your argument. JavaScript has these two co-existing paradigms, object-oriented and functional; the pipe operator lets you get the notational benefit of infix for calling functions, but it does it without looking like a method lookup, but it does it by still making it clear that you're calling a function. 
I think the right way to think about what the in the language in which these paradigms coexist, the right way to think about what the difference between the Add-ons are, what the fundamental difference is, is, who decides what code gets run as a result of executing the call at the call site. In a function call, the function is looked up according to the scope at the call site, according to the value that the function expression evaluates to, which is typically an identifier looked up in the local scope. It's very much much according to the call site. That object oriented program is we call early binding. The functional paradigm is fundamentally about late binding, where what code gets run is according to the object, the object is the authority, and that introduction of new objects, with new implementations of the same API, are supposed to be able to extend the meaning of the call site and that says that the existing “.name” is the realization of the object-oriented paradigm. To put it another way, the object-oriented Paradigm, you're programming in methods, that mention "this", but an unbound method should never be reified. Having a first-class unbound method that is "this"-sensitive is outside the object paradigm and outside the functional paradigm, and should be discouraged. Part of what we're doing as a committee is also making choices that help encourage good patterns and discourage bad patterns. The object-oriented patterns that make sense is no reified unbound methods, the functional pattern that makes sense is all of the arguments are the explicitly provided arguments, and those functions should not be “this”-sensitive. And given that those are the only patterns for these two paradigms we want to encourage, the pipe operators still adds substantial value. The call this operator purely subtracts value. +MM: I think the language is already way over its syntax budget. A lot of the concepts here are things that I would consider if we were starting from a tiny language, or if we were designing a language greenfield. That's not the situation; the situation is we're talking about adding syntax to a language that already has so much syntax. It's a real burden on understanding what code means, especially for novices and we need to remember the special role that JavaScript has in the world. A lot of people learn programming first, not in school, but by looking at JavaScript and trying to learn from other people's code. There's just lots of sort of amateur people who pick up JavaScript that are not planning to be professional programmers. Their expertise is elsewhere, but they want to use the language. And even though they can stick to a subset they understand, if they're learning from reading other people's code, the more they have to understand before they can even start to have to decipher other people's code the worse off they are. So, my conclusion for this, that one outcome of this which I would be happy with is that we adopt zero of these proposals. I would be happy with that. I don't think any of these proposals clearly adds more value to the language than it subtracts. The one proposal that from the arguments I've heard I would be comfortable seeing go forward, or I would not withhold consensus from it if it seemed to have by itself enough momentum to go forward, which is just the pipe operator, the pipe operator itself and the topic, which I presume is `@`, those two operators alone with the current semantics that they have, which can be understood straightforwardly as a rewrite. 
"call-this", the more you JSC explained it in terms of multiple paradigms, the more that I thought the conclusion was a stronger case against "call-this". So let me explain that because I'm confused about your argument. JavaScript has these two co-existing paradigms, object-oriented and functional; the pipe operator lets you get the notational benefit of infix for calling functions, but it does it without looking like a method lookup, but it does it by still making it clear that you're calling a function. I think the right way to think about what the in the language in which these paradigms coexist, the right way to think about what the difference between the Add-ons are, what the fundamental difference is, is, who decides what code gets run as a result of executing the call at the call site. In a function call, the function is looked up according to the scope at the call site, according to the value that the function expression evaluates to, which is typically an identifier looked up in the local scope. It's very much much according to the call site. That object oriented program is we call early binding. The functional paradigm is fundamentally about late binding, where what code gets run is according to the object, the object is the authority, and that introduction of new objects, with new implementations of the same API, are supposed to be able to extend the meaning of the call site and that says that the existing “.name” is the realization of the object-oriented paradigm. To put it another way, the object-oriented Paradigm, you're programming in methods, that mention "this", but an unbound method should never be reified. Having a first-class unbound method that is "this"-sensitive is outside the object paradigm and outside the functional paradigm, and should be discouraged. Part of what we're doing as a committee is also making choices that help encourage good patterns and discourage bad patterns. The object-oriented patterns that make sense is no reified unbound methods, the functional pattern that makes sense is all of the arguments are the explicitly provided arguments, and those functions should not be “this”-sensitive. And given that those are the only patterns for these two paradigms we want to encourage, the pipe operators still adds substantial value. The call this operator purely subtracts value. KG: Yeah, I just wanted to— So the things that MM said first, that the language has so much syntax already and it is not enough that I think would be useful. That is not— the bar is much higher than "this would be useful for something people want to do". It has to be _so_ useful that it's worth trying to cram into this like rather full language. And second that yes, the notion of functions that are not attached to an object and still refer to their `this` is very strange and not something I would like to encourage. Even if it is a thing which already exists, again, it is not enough that it would be useful for a thing people want to do. It has to also be a good idea for them to do it, and I don't think that functions which refer to their "this" but are not actually associated with any object are a good idea. -JRL: So I have two points that are a little bit related, both in response to MM. First, is that he says that neither operator pipe, nor call-this, add to the language in a way that is more than them subtracting from the language by taking syntax, and I think that's false. 
The ability to change and get rid of class-based APIs in a way that is ergonomic to the call site and to the user of the API, makes both of these considerably better. And we can debate whether or not "call-this"'s `this` based usage is learnable to new developers, but I think that just the fact that we are switching from method dispatch on a class to a function invocation, makes this a complete change to the way that we currently write JavaScript. And that brings it into the second point. Whereas most code that is written today uses classes either explicitly or using the prototype on functions, almost all of the standard language is written in a class-based API. Almost all user code is written as class-based APIs. We understand object-oriented methods extremely well, but unfortunately there's an extreme penalty when using prototype methods and that it's very difficult for us to eliminate dead code from our bundles. The reason Firebase is so noteworthy here is because they're prioritizing bundle size so much that they have, for their users, broken the expected usability and ergonomics. You have a function that takes the context object as its first argument, which is almost unheard of in JavaScript. No one does this. If you have a class that has 5, 10, methods on it, that's totally normal. But if you have five or ten free functions that are taking their context objects, their fridge or their kitchen or whatever, as the first parameter, that breaks with expected ergonomics that everyone has accepted. The reason “call-this” is a good addition to the language is because it allows us to achieve both good bundle sizes, that we need to in order to have decent web performance, and an acceptable call site useability, so that libraries are actually encouraged to try these new APIs. +JRL: So I have two points that are a little bit related, both in response to MM. First, is that he says that neither operator pipe, nor call-this, add to the language in a way that is more than them subtracting from the language by taking syntax, and I think that's false. The ability to change and get rid of class-based APIs in a way that is ergonomic to the call site and to the user of the API, makes both of these considerably better. And we can debate whether or not "call-this"'s `this` based usage is learnable to new developers, but I think that just the fact that we are switching from method dispatch on a class to a function invocation, makes this a complete change to the way that we currently write JavaScript. And that brings it into the second point. Whereas most code that is written today uses classes either explicitly or using the prototype on functions, almost all of the standard language is written in a class-based API. Almost all user code is written as class-based APIs. We understand object-oriented methods extremely well, but unfortunately there's an extreme penalty when using prototype methods and that it's very difficult for us to eliminate dead code from our bundles. The reason Firebase is so noteworthy here is because they're prioritizing bundle size so much that they have, for their users, broken the expected usability and ergonomics. You have a function that takes the context object as its first argument, which is almost unheard of in JavaScript. No one does this. If you have a class that has 5, 10, methods on it, that's totally normal. 
But if you have five or ten free functions that are taking their context objects, their fridge or their kitchen or whatever, as the first parameter, that breaks with expected ergonomics that everyone has accepted. The reason “call-this” is a good addition to the language is because it allows us to achieve both good bundle sizes, that we need to in order to have decent web performance, and an acceptable call site useability, so that libraries are actually encouraged to try these new APIs. WH: I agree with MM, especially with the point about the different kinds of dispatch. In procedural programming you’re just calling a function. In object-oriented programming, it’s the object that determines what a method means. “Call-this” is a violation of that. It's an anti-pattern, it is not something that should be encouraged. The other worry I have is about ecosystem schisms: we must remember that functions sometimes take more than one argument, so it’s best to treat them equally. I don't want wars about which argument is special that the extensions proposal would encourage by privileging arguments in the first position. @@ -315,7 +315,7 @@ YSV: Yeah, I just want to support quite a bit of what MM was saying. I think my USA: Great, thank you before we return to the queue. We have a little over five minutes. So request people to be quick. -JHX: Yeah, as the author of the extension proposal. I have to say, actually I agree with both sides. The reason I designed the extension proposal, I know that many people may some disagreement about some design part of the second proposal, like why he needs declarations, why does it need a separate namespace? Actually, it's because the problem MM said for once I think the causes this, this functionality the very important but on the other side I also agree that spreading unbound will have many Problems. So, the extensions method has some special design to try to control the about that part of the Unbound methods. And in most cases it will, you can not use Unbound methods in extension because it's a declaration and you can't use the unbound method references. So yeah, that's it. +JHX: Yeah, as the author of the extension proposal. I have to say, actually I agree with both sides. The reason I designed the extension proposal, I know that many people may some disagreement about some design part of the second proposal, like why he needs declarations, why does it need a separate namespace? Actually, it's because the problem MM said for once I think the causes this, this functionality the very important but on the other side I also agree that spreading unbound will have many Problems. So, the extensions method has some special design to try to control the about that part of the Unbound methods. And in most cases it will, you can not use Unbound methods in extension because it's a declaration and you can't use the unbound method references. So yeah, that's it. SYG: This is more of a clarifying question, it might be a pedantic quibble. I don't quite understand what interoperability means in the context of this presentation. Like, obviously you can do both together in the same program. What does interoperability or rather what does the lack of interoperability mean? @@ -368,7 +368,7 @@ TAB: [from queue] Agree with JHD; accidents of grammar that don't have actual pr WH: I just wanted to bring up a procedural issue, which is that we have multiple proposals trying to grab the `@` symbol. It would be confusing if they all got it. 
So I'm just wondering how we're going to resolve this. -KHG: yeah, that is definitely an issue to discuss. I'm not sure that that is relevant exactly. If you're worried about it being. [audio-issue] I'm just saying like I'm not sure how that's relevant to this particular issue. Unless you're concerned about how `@` is being used in one of those other proposals. +KHG: yeah, that is definitely an issue to discuss. I'm not sure that that is relevant exactly. If you're worried about it being. [audio-issue] I'm just saying like I'm not sure how that's relevant to this particular issue. Unless you're concerned about how `@` is being used in one of those other proposals. WH: I'm asking a procedural question of how we are going to resolve this. I don’t want to discuss the technical issues now, but in case it matters, my preference is that decorators should get `@` and other proposals should not. @@ -404,11 +404,11 @@ KG: Why super? Super is totally reasonable. JHX: Sorry. I don't think so. Is there any real benefits to allowing it? -KG: Yeah, you can - think it is entirely reasonable that you might have a decorator, which is defined as a static method on the superclass and you might want to invoke it on a subclass. That's like a totally natural thing to want. +KG: Yeah, you can - think it is entirely reasonable that you might have a decorator, which is defined as a static method on the superclass and you might want to invoke it on a subclass. That's like a totally natural thing to want. JHX: Sorry, I don't get it. -KG: like, if I have a super class and the super class has a static method named ‘bind’, which is a decorator like it expects to be invoked as a decorator, then as my subclass I might very reasonably say `@super.bind` as my decorator on a field expecting that to invoke the method in a superclass as a decorator. That's like a thing that I would very naturally want to do. +KG: like, if I have a super class and the super class has a static method named ‘bind’, which is a decorator like it expects to be invoked as a decorator, then as my subclass I might very reasonably say `@super.bind` as my decorator on a field expecting that to invoke the method in a superclass as a decorator. That's like a thing that I would very naturally want to do. JHX: okay, I think if you could add some code example, in the jitsi it would it will help you to understand it. Thank you @@ -416,7 +416,7 @@ KHG: Okay, cool, so I think we can come back to this item, possibly in the next USA: Yeah, sure. -KHG: That's the last one. of these was the `new` keyword `new.target` and `import` keyword `import.meta` So yeah, JHX. How do you feel about these two same position? +KHG: That's the last one. of these was the `new` keyword `new.target` and `import` keyword `import.meta` So yeah, JHX. How do you feel about these two same position? JHX: I hope we wish we could ban all of those @@ -430,7 +430,7 @@ JHX: Okay, I uh, I think I need some time to think about that. Generally thinkin SYG: I don't quite get the argument for wanting to ban all of them. It's because you don't think people will write that code despite delegates saying they will write that code? -JHX: Yeah, so sorry. I said it seems apartment. He is here that I think in most cases people use decorator as very special thing. So, I understand to allow these expressions sounds ok, but I feel these are very strange usage and it would not match most real world usage. +JHX: Yeah, so sorry. I said it seems apartment. 
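Since JHX asked for a code example, here is a sketch of the scenario KG described above; the names are hypothetical, and the decorator form appears only in a comment because whether `super` is allowed in a decorator expression is exactly the open question.

```js
class Base {
  // A static method written to be usable as a decorator by subclasses
  // (in the current decorators proposal it would receive the decorated
  // value and a context object).
  static bind(value, context) {
    return value; // e.g. return a bound replacement for the method
  }
}

class Derived extends Base {
  // The form under discussion: invoking a superclass static as a decorator.
  // @super.bind
  handleClick() { /* ... */ }
}
```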
He is here that I think in most cases people use decorator as very special thing. So, I understand to allow these expressions sounds ok, but I feel these are very strange usage and it would not match most real world usage. SYG: I think that's a fine opinion to have. I don't think that really rises to the level of an argument for why we should not include an otherwise unsurprising bit of compositionality, right? Like as a language, most things should probably just compose even if the compositionality might be rare, except when that composed thing is like super problematic for some reason. And then we argue about why we should disallow certain things to be composed. In this case, I haven't heard any argument on why it's harmful to keep allowing them to be composed other than that it is unlikely. And I don't think that is really an argument at all for why they should not compose. @@ -442,13 +442,13 @@ KHG: Yeah, I think we can continue to iterate on this question and bring it back MM: Yes, I object in the absence. I did not follow the thread. I was not aware of a thread. So let me say in the absence of understanding. What Authority is our objective. We sure I'm getting (?) is correct. Without this addition, the decorator list is interpreted as strict code. And with the addition that you're asking about, the decorator list would be interpreted as sloppy mode, is that correct? -KHG: ah, I mean, I'm not super sure about this strict parts of the spec. I'm not super familiar with them the strictness parts, so I would to kind of look at it. But I think the current spec says all parts of the class declaration and the decorator list would be considered part of the class declaration or class expression. +KHG: ah, I mean, I'm not super sure about this strict parts of the spec. I'm not super familiar with them the strictness parts, so I would to kind of look at it. But I think the current spec says all parts of the class declaration and the decorator list would be considered part of the class declaration or class expression. MM: So I need to understand what the normative difference is. Of Accepting or not accepting this language. I don't know what the current behavior is that this language would change and I don't understand how that language would change. KHG: So what I understand is that it would allow you to use - like the code for decorators outside of the class body itself, so specifically the decorators applied to the class itself, would not be considered in strict mode. A reason for this was because of a look-ahead concern, so there was concern that we would have to like, you know, parse everything in order to understand, if this module was going to be strict or not. -MM: Okay. Thanks. Mentioning that concern makes a huge difference. I understand that sometimes these look ahead concerns can be very painful on people who implement parsers. Were there implementers on that thread that voice a strong desire to avoid the implementation complexity and held by this look-ahead. And and if not, are there any implementers in the room that would like to venture an opinion as to whether the look-ahead in question here would be painful. +MM: Okay. Thanks. Mentioning that concern makes a huge difference. I understand that sometimes these look ahead concerns can be very painful on people who implement parsers. Were there implementers on that thread that voice a strong desire to avoid the implementation complexity and held by this look-ahead. 
And and if not, are there any implementers in the room that would like to venture an opinion as to whether the look-ahead in question here would be painful. SYG: Well it’s arbitrary, right? If it's like an expression that's a decorator application expression that's currently in a sloppy thing. Well, I guess right now is there another context in which you could apply a decorator in a sloppy context? @@ -479,7 +479,7 @@ Presenter: Daniel Rosenwasser (DRR) - [proposal](https://github.com/tc39/proposal-type-annotations) - [slides](https://1drv.ms/b/s!AltPy8G9ZDJdq3JSrN6Dh1XYVwpW) -DRR: Thank you very much for making more time for this discussion. We heard some of the feedback from the first discussion plenary. So first off if you happen to see a grammar dot MD or some sort of grammar file. Sorry, we didn't mean to mislead you if you felt like that was the concrete syntax that we are proposing. That was more of an iteration point, those sorts of details can be discussed more in a later stage. So, sorry if that was a misleading point on top of that, some other feedback that we heard at the first presentation was, you know, a desired for type Very much in line type system neutrality. I think that's this. with what we had in mind for this proposal. So the sort of line of thinking for what we're trying to open up is something like pluggable types, if you're familiar with that concept enough space where you could bring, whatever type Checker and apply that to, you know, this space that we're trying to carve out. So existing type Checkers, future ones, what have you. Another piece of feedback that we got was a question of, “Hey, is this trying to describe something for an existing syntax. And is it? Some of that exists in syntax to represent all of that existing syntax?” It definitely takes inspiration from existing type system syntaxes, but we're not trying to get all of it. We do believe that there is value in just getting some of it at this point and it should be a decent chunk of that as well. Right? So that, we're not leaving existing users behind, but we also believe that there is room for the existing type systems to grow and sort of adapt and find ways to bridge the gap, so it's not just something where we're creating new variant that doesn't really satisfy existing users. But again, this is something that we can discuss at a later stage and we would appreciate the chance to do that, too. +DRR: Thank you very much for making more time for this discussion. We heard some of the feedback from the first discussion plenary. So first off if you happen to see a grammar dot MD or some sort of grammar file. Sorry, we didn't mean to mislead you if you felt like that was the concrete syntax that we are proposing. That was more of an iteration point, those sorts of details can be discussed more in a later stage. So, sorry if that was a misleading point on top of that, some other feedback that we heard at the first presentation was, you know, a desired for type Very much in line type system neutrality. I think that's this. with what we had in mind for this proposal. So the sort of line of thinking for what we're trying to open up is something like pluggable types, if you're familiar with that concept enough space where you could bring, whatever type Checker and apply that to, you know, this space that we're trying to carve out. So existing type Checkers, future ones, what have you. Another piece of feedback that we got was a question of, “Hey, is this trying to describe something for an existing syntax. 
And is it? Some of that exists in syntax to represent all of that existing syntax?” It definitely takes inspiration from existing type system syntaxes, but we're not trying to get all of it. We do believe that there is value in just getting some of it at this point and it should be a decent chunk of that as well. Right? So that, we're not leaving existing users behind, but we also believe that there is room for the existing type systems to grow and sort of adapt and find ways to bridge the gap, so it's not just something where we're creating new variant that doesn't really satisfy existing users. But again, this is something that we can discuss at a later stage and we would appreciate the chance to do that, too. DRR: But I think the biggest thing that we got feedback on was that we needed a more concrete problem statement here at plenary. So the problem statement that we've put together is that there is a strong demand for ergonomic type annotation syntax that has led to forks of JavaScript with custom syntax. This has introduced developer friction and means, widely used JavaScript forks have trouble coordinating with TC39 and often must risk syntax conflicts. Now, I want to come back to this slide in a minute or two. So can observe it, but we'll come back to it. @@ -487,7 +487,7 @@ DRR: I want to give some background on some of the thinking here. So. in a sense DRR: So the proposal here would be to formalize an ergonomics syntax space for comments to integrate the needs of type-checking forks of ecmascript. We want to Workshop that a little bit. That's something we can do maybe more in stage 1, but we believe that the problem statement is sufficiently motivating to move into stage one. And so we'd like to open the floor to the discussion to understand whether or not we're meeting the committee's expectations here, if there's anything that needs to be clarified, things like that. So I'll open the floor to questions. -MM: So first of all, let me say that when you presented types as comments in plenary, earlier, I had a conflict and I missed the whole discussion. So my apologies for that. So this might be a little bit out of context, what I understood about the presentation was that the grammar that was presented was large, and I don't know if you believe that you can satisfy the problem statement with a tiny grammatical edition, but I would certainly very skeptical of any large grammatical addition to the language. I think stage one is fine. I have no problem signing on to let this go to stage one with this problem statement. Thanks for a clear problem statement. And that's it. +MM: So first of all, let me say that when you presented types as comments in plenary, earlier, I had a conflict and I missed the whole discussion. So my apologies for that. So this might be a little bit out of context, what I understood about the presentation was that the grammar that was presented was large, and I don't know if you believe that you can satisfy the problem statement with a tiny grammatical edition, but I would certainly very skeptical of any large grammatical addition to the language. I think stage one is fine. I have no problem signing on to let this go to stage one with this problem statement. Thanks for a clear problem statement. And that's it. DRR: Thank you. 
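For context on the ergonomics gap in the problem statement, here is a sketch comparing today's comment-based annotations with the kind of inline annotation the forks use; the inline form appears only in a comment, since it is not valid JavaScript today.

```js
// Status quo: types can already be written in plain JavaScript via JSDoc
// comments, which type checkers read but engines ignore.
/**
 * @param {string} name
 * @param {number} age
 * @returns {string}
 */
function greet(name, age) {
  return `${name} is ${age}`;
}

// The forks' more ergonomic inline form (not valid JavaScript today):
//   function greet(name: string, age: number): string { ... }
// The proposal asks whether syntax space like this could be carved out as
// annotations that the engine simply skips.
```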
I think that we definitely understand that feedback and at least having the opportunity to discuss this within stage 1 would be something that we and I think many typed JavaScript users would appreciate at the very least that would provide us a good venue to understand how to best serve users or understand the problem space. So that's I think the minimum of what we're hoping for but so yeah, yeah, well, I think can proceed in the queue. @@ -533,7 +533,7 @@ RPR: What are we open to? WH: Yeah, I just want to know what type annotation solutions are we open to? Questioning somebody's intentions is frankly hostile. And that's not something that people should be doing. -RPR: Aright, so we know you are not questioning our intentions. You are asking questions. I think what we've heard very clearly is that you would like the space to be open to include type annotation syntax that also has semantics. That's very clear. +RPR: Aright, so we know you are not questioning our intentions. You are asking questions. I think what we've heard very clearly is that you would like the space to be open to include type annotation syntax that also has semantics. That's very clear. WH: Yeah, and I would like this space to be open to very simple syntaxes. So, I'm absolutely open to hear how others feel about that. @@ -573,7 +573,7 @@ JHX: Thank you. My understanding is that the big goal should be narrowing the ga RPR: Thank you for the fall-back suggestion. -Francisco Tolmasky: So I guess I'm coming from perhaps the like opposite end of the big discussion that just took place, in that like it feels like this proposal relies a lot on the spirit of the proposal. And by that, I mean like, as far as I can tell, And I'm happy if this doesn't expand to any sort of semantics. It doesn't seem like proposal would actually enforce anything to do with types at all. Including like, this could be used to annotate anything. And, you know, we don't call them Financial operators. We call them arithmetic operators, right? Even if originally for the historical context of why plus and minus were added to the first programming language might have been because people wanted to do stuff with dollars and cents, But like it's certainly the case that when you read about these syntax elements, they're not described in terms of money handling, right? And similarly, you know, normal comments aren't described as like, you know, documentation generation tools, right? They're syntax elements that can be used for just about anything. It's very wide open - people have used it for other stuff, right? Like oh you move stuff left and right with the plus operator too, even if the original intent was something else. And so I think that at least some of the tension from this is that this proposal exists in this kind of liminal space: on the one hand, we talk about it as a thing that's super neutral, right? Like so neutral that likely even admit that like, yeah, I guess you don't even have to use it for types. On the other hand, I think some of the expectations of what will get out of it seemed very tight, right? Like if the idea is that like, oh every single editor will be able to expect at least these comments to be around. There's kind of this implicit understanding that like they're they're going to syntax highlight it correctly, but they can't because like the only correct syntax highlighting of a neutral comment would be just straight up normal comment color, right. 
Anything further would mean that, you know, the editor plugin is assuming that this is probably TypeScript type at this position, even though it's technically this neutral thing, right? So, I feel that perhaps both to prepare ourselves for that, I think people will use it for wacky reasons. It might be completely outside our expectations. And also to limit the kind of fear of: is this TypeScript or isn't it? If we were to instead have the problem statement be something closer to like, we understand that, you know, a part historically of the JavaScript Community has been to add language on top of the language and we'd like a place for any of that to live so that hopefully we can avoid this in the future for any any sort of thing. Whether it's some sort of new kind of decorator or whether it's types, or whether it's, you know, something that you could even use it for documentation, right? Then I think, you know, we would avoid these questions of like, well should it have types semantics? And if not is in a confusing that you can add types, but they don't, you know, throw an error and it's a try to locate it. I think it does a better job of putting that squarely as a, you know, kind of like an implementation detail of what you're doing on top of it in the same way that like, we don't really discuss the issues with JSDoc inside of comments, you know, when we discuss the syntactic elements of Comments. +Francisco Tolmasky: So I guess I'm coming from perhaps the like opposite end of the big discussion that just took place, in that like it feels like this proposal relies a lot on the spirit of the proposal. And by that, I mean like, as far as I can tell, And I'm happy if this doesn't expand to any sort of semantics. It doesn't seem like proposal would actually enforce anything to do with types at all. Including like, this could be used to annotate anything. And, you know, we don't call them Financial operators. We call them arithmetic operators, right? Even if originally for the historical context of why plus and minus were added to the first programming language might have been because people wanted to do stuff with dollars and cents, But like it's certainly the case that when you read about these syntax elements, they're not described in terms of money handling, right? And similarly, you know, normal comments aren't described as like, you know, documentation generation tools, right? They're syntax elements that can be used for just about anything. It's very wide open - people have used it for other stuff, right? Like oh you move stuff left and right with the plus operator too, even if the original intent was something else. And so I think that at least some of the tension from this is that this proposal exists in this kind of liminal space: on the one hand, we talk about it as a thing that's super neutral, right? Like so neutral that likely even admit that like, yeah, I guess you don't even have to use it for types. On the other hand, I think some of the expectations of what will get out of it seemed very tight, right? Like if the idea is that like, oh every single editor will be able to expect at least these comments to be around. There's kind of this implicit understanding that like they're they're going to syntax highlight it correctly, but they can't because like the only correct syntax highlighting of a neutral comment would be just straight up normal comment color, right. 
Anything further would mean that, you know, the editor plugin is assuming that this is probably TypeScript type at this position, even though it's technically this neutral thing, right? So, I feel that perhaps both to prepare ourselves for that, I think people will use it for wacky reasons. It might be completely outside our expectations. And also to limit the kind of fear of: is this TypeScript or isn't it? If we were to instead have the problem statement be something closer to like, we understand that, you know, a part historically of the JavaScript Community has been to add language on top of the language and we'd like a place for any of that to live so that hopefully we can avoid this in the future for any any sort of thing. Whether it's some sort of new kind of decorator or whether it's types, or whether it's, you know, something that you could even use it for documentation, right? Then I think, you know, we would avoid these questions of like, well should it have types semantics? And if not is in a confusing that you can add types, but they don't, you know, throw an error and it's a try to locate it. I think it does a better job of putting that squarely as a, you know, kind of like an implementation detail of what you're doing on top of it in the same way that like, we don't really discuss the issues with JSDoc inside of comments, you know, when we discuss the syntactic elements of Comments. RPR: Yeah, I guess that is somewhere where we might land. Certainly we've had some of those discussions over the last couple of days. I think it came up on Tuesday, whether this problem does reduce to “just comments”. It sounds like based on some of the feedback we've had today we need to consider also going a bit beyond that, but definitely I think in stage 1, I would hope to resolve that and it may land just as you say. diff --git a/meetings/2022-06/jun-06.md b/meetings/2022-06/jun-06.md index 733110b2..bd13a3e4 100644 --- a/meetings/2022-06/jun-06.md +++ b/meetings/2022-06/jun-06.md @@ -22,21 +22,21 @@ Presenter: István Sebestyén (IS) IS: Okay, so this is basically the usual type of agenda. I'm also looking at the, the clock here. You know, that I do not spend too much time on the report. So, first, I would go through very, very quickly, the list of new ECMA TC39 documents, and list of GA documents - those which are relevant. Then again, just to report on TC39 meeting participation numbers. That sounds like a very very quick one. Then the TC39 standards download- and access statistics. The next one, where we are with the ES 2022 approval and publication process. We are quite good. And then, there were some Ecma Bylaws and Rules changes proposed by the recent ExeCom meeting, and those are going to be approved or not approved at the June 2022 GA. I think it will be approved by the June 2020 GA and then last but not least, the next TC39 meetings, the GA and ExeCom meetings. There are some new information on those because we have received new dates for 2023 GA and ExeCom. -IS: Okay, so these are the TC39 document practically all of them, which I have not shown yet to you. So first of all on the opt-out period, as you know, we have already started during the last meeting. And then the timer for the two months had started. And then we had to publish, obviously, the full text of the two standards that we are going to approve, the new editions at the same time. So this is TC39/2022/14 and 15. 
Then the document 16 is the communication that was created by our management team regarding the TC39 nonviolent communication training funding requests to the ExeCom. So that was discussed at the ExeCom meeting but I understood it is still under discussion. So the 17 it is the collection of the slides for the last meeting. This is a duplication of info what we also offer over the GitHub. So only for those people who are only reading the official TC39 file server documents on the ECMA side, you know, this is all new information for them, but for you in in TC39 practically not so relevant because you know those from the Github. So then the minutes of the last meeting is is the say 18. So this is what we have approved, just five minutes ago. And then venue 19 for this meeting. +IS: Okay, so these are the TC39 document practically all of them, which I have not shown yet to you. So first of all on the opt-out period, as you know, we have already started during the last meeting. And then the timer for the two months had started. And then we had to publish, obviously, the full text of the two standards that we are going to approve, the new editions at the same time. So this is TC39/2022/14 and 15. Then the document 16 is the communication that was created by our management team regarding the TC39 nonviolent communication training funding requests to the ExeCom. So that was discussed at the ExeCom meeting but I understood it is still under discussion. So the 17 it is the collection of the slides for the last meeting. This is a duplication of info what we also offer over the GitHub. So only for those people who are only reading the official TC39 file server documents on the ECMA side, you know, this is all new information for them, but for you in in TC39 practically not so relevant because you know those from the Github. So then the minutes of the last meeting is is the say 18. So this is what we have approved, just five minutes ago. And then venue 19 for this meeting. IS: Okay, and then the agenda for this meeting 20, and then last but not least, and this is the most important one for us, 21, so that the opt-out period has ended. Actually, it was, I think exactly on the 29th of May, so a couple of days ago and we have not received any comments. So everything is fine and everything should be prepared for the approval meeting on June 22. Okay. Let's go to the next slide, please. -IS: So these are those new GA documents that are relevant for us. Some of them are also duplications with TC39 documents, and you have also seen them in the TC39 set. So like the opt-out period, it was not only announced on TC39 level but generally on the GA level, this is the ga 2022/25. This document are the minutes of the debate at the special General Assembly when the new additional copyright policy was approved. Then the 027: This is for those folks who are interested in the Ecma financial figures of last year, what Ecma has produced. So this is the auditor's report. There was an important type hybrid Execom meeting on April 5th and 6th in Geneva. Actually, then later on in my sides, I have taken over some info from these minutes. Just the most important information for us. So you will see that later. Then 030 and 031: these are key documents which we have officially published for the approval, for the new edition (13th) of ECMA-262. So this will be approved at the next GA. This is also true for the document for the ECMA-402 9'th Edition. So this is going to be approved. 
And then in a separate document we have the table for all the drafts that have been submitted to the General Assembly. The 123rd General Assembly meeting in june 2022 will be in Geneva. It will also be a hybrid meeting. So, those who are interested in going. they should contact the ECMA Secretariat. And 36 is the GA presentation of the minutes of the last TC39 meeting. and we have a document on candidates of nominees for the vacant ExeCom and Ecma Management position. I think there are two positions, but we have only one candidate. Daniel Ehrenberg is a candidate for the for Ecma Vice President to be elected only for half a year. And then at the end of the year we have a full election for the entire next year 2023. The document 44 and 45: these are quite important. I would say, this is what would be the highlights. There are some minor changes on the Ecma bylaws and rules. Very modest. They are very good in my opinion, both from the substantial point of view and also from the editorial point of view, Etc. There is nothing really dramatic I have to point out to you. Everything is very very modest, but very good changes. Okay, the 46th document. So this is the agenda of our meeting, this is going up to be announced also at the GA level. and then the 052 document, this is what has been published last week by a Patrick and this says that the opt-out period which has been required by the Ecma IPR rules for royalty-free standards, you know, that passed without receiving any comments, everything is fine. So, please, Next document. And this was the end of the list. +IS: So these are those new GA documents that are relevant for us. Some of them are also duplications with TC39 documents, and you have also seen them in the TC39 set. So like the opt-out period, it was not only announced on TC39 level but generally on the GA level, this is the ga 2022/25. This document are the minutes of the debate at the special General Assembly when the new additional copyright policy was approved. Then the 027: This is for those folks who are interested in the Ecma financial figures of last year, what Ecma has produced. So this is the auditor's report. There was an important type hybrid Execom meeting on April 5th and 6th in Geneva. Actually, then later on in my sides, I have taken over some info from these minutes. Just the most important information for us. So you will see that later. Then 030 and 031: these are key documents which we have officially published for the approval, for the new edition (13th) of ECMA-262. So this will be approved at the next GA. This is also true for the document for the ECMA-402 9'th Edition. So this is going to be approved. And then in a separate document we have the table for all the drafts that have been submitted to the General Assembly. The 123rd General Assembly meeting in june 2022 will be in Geneva. It will also be a hybrid meeting. So, those who are interested in going. they should contact the ECMA Secretariat. And 36 is the GA presentation of the minutes of the last TC39 meeting. and we have a document on candidates of nominees for the vacant ExeCom and Ecma Management position. I think there are two positions, but we have only one candidate. Daniel Ehrenberg is a candidate for the for Ecma Vice President to be elected only for half a year. And then at the end of the year we have a full election for the entire next year 2023. The document 44 and 45: these are quite important. I would say, this is what would be the highlights. 
There are some minor changes on the Ecma bylaws and rules. Very modest. They are very good in my opinion, both from the substantial point of view and also from the editorial point of view, Etc. There is nothing really dramatic I have to point out to you. Everything is very very modest, but very good changes. Okay, the 46th document. So this is the agenda of our meeting, this is going up to be announced also at the GA level. and then the 052 document, this is what has been published last week by a Patrick and this says that the opt-out period which has been required by the Ecma IPR rules for royalty-free standards, you know, that passed without receiving any comments, everything is fine. So, please, Next document. And this was the end of the list. -IS: But why are these lists of interest to TC39 members? I already mentioned that in the past. Because some of the documents, the GitHub documents can only be seen by internal TC 39 people, but for instance usually not by a company GA representative Etc. So we have to do a sort of duplication for them. This is one of the Reasons the other reason is - for long-term, archival purposes - that if one day we stop working GitHub, like, I don't know, in five years time years, years, etc. Etc.then we have automatically a back-up for most important TC39 archived information. There is always danger that one needs that. +IS: But why are these lists of interest to TC39 members? I already mentioned that in the past. Because some of the documents, the GitHub documents can only be seen by internal TC 39 people, but for instance usually not by a company GA representative Etc. So we have to do a sort of duplication for them. This is one of the Reasons the other reason is - for long-term, archival purposes - that if one day we stop working GitHub, like, I don't know, in five years time years, years, etc. Etc.then we have automatically a back-up for most important TC39 archived information. There is always danger that one needs that. IS: Yeah, and this the recent history of TC-39 meeting participation. You can go to the next page because it's only the latest information that is new. So on the March 2022 meeting (remote) we had very high participation (92) and this was probably due to the fact that this was the last meeting before the freeze and before the approval of the ES2022 spec. So, the next slides again, slides, please. IS: So this is the usual download of Ecma standards. The first line shows how many standards have been downloaded from the very beginning of the year in 2022. So, the first five months were 36,000, that's a little bit lower than usual, but not that much. This is the TC39 part of how many standards have been downloaded of the TC39 standards. And as it can be seen, more than half, so over 60% of all the downloads. So this is consistent with the trends, which we have seen over the years. Since 60 percent comes from us, the quality of the TC39 download standards is very important. -IS: Yeah, this is the usual breakdown for the two biggest TC39 standards in HTML format - regarding access. This is in the first in the second column, and then also regarding the download. Less logical are the access figures for ECMA-262 and ECMA-402 for the old Editions (6th and earlier). I have asked again to ECMA secretariat that we should try to look into more details. Unfortunately we could only go down to the country level but not into more details. The countries indicate that the vast majority of access is by bots and not by ES experts. 
I don't want to go into the list of those countries, but probably the first three numbers we can simply forget. And then starting from 2007, in my feeling it gets to be more realistic and the latest one is for the 12th edition with the 35,000 Etc. It's probably right. So you can see that the access number is significantly higher than the download number. But still with the download numbers are always said, if they are very good. +IS: Yeah, this is the usual breakdown for the two biggest TC39 standards in HTML format - regarding access. This is in the first in the second column, and then also regarding the download. Less logical are the access figures for ECMA-262 and ECMA-402 for the old Editions (6th and earlier). I have asked again to ECMA secretariat that we should try to look into more details. Unfortunately we could only go down to the country level but not into more details. The countries indicate that the vast majority of access is by bots and not by ES experts. I don't want to go into the list of those countries, but probably the first three numbers we can simply forget. And then starting from 2007, in my feeling it gets to be more realistic and the latest one is for the 12th edition with the 35,000 Etc. It's probably right. So you can see that the access number is significantly higher than the download number. But still with the download numbers are always said, if they are very good. -IS: So for ECMA-402, regarding the access number for the first edition and second that you see, it is also probably a fake number, so forget it. And then the rest, it should be okay and also for the download figures. Okay, and of course the ECMA-402 figures are significantly lower than the ES262. Okay, you can continue to the next page. +IS: So for ECMA-402, regarding the access number for the first edition and second that you see, it is also probably a fake number, so forget it. And then the rest, it should be okay and also for the download figures. Okay, and of course the ECMA-402 figures are significantly lower than the ES262. Okay, you can continue to the next page. IS: So regarding the ES 2020 timeline approval. Nothing new, it will be approved at the June 2022 GA, exactly on June 22nd and 23rd. It will be on the agenda on the first day. Everything was OK, no comment received. However, still please note still no substance changes are allowed - not only already for two last months but also until the final approval. Only small editorial changes until the last minute are allowed. And we will publish the two standards as soon as possible. Usually Patrick puts them out immediately, even the pdf version if the pdf version may not be the final one because of the known formatting issues, etc. We will put out whatever we have, as it's good enough for that and then replace it when we have a better one. @@ -50,7 +50,7 @@ IS: I have taken out from the April exocom report. Among many things I thought t IS: Ecma recognition Awards. We don't have any TC39 candidate in this half a year and in general, the Execom will look into that and work out a more detailed operational procedure for the future, and we will see what comes out. Next one. Please. -IS: Okay. So here GA venues and dates. is new here? Yes, the GA and the Execom meeting also for 2023 are now included. I think that should be the end. Thank you very much. +IS: Okay. So here GA venues and dates. is new here? Yes, the GA and the Execom meeting also for 2023 are now included. I think that should be the end. Thank you very much. 
## TC39 editors report

@@ -104,7 +104,7 @@ Presenter: Shu-Yu Guo (SYG)

- [PR](https://github.com/tc39/ecma262/pull/1556)

-SYG: So this is something we got consensus for a while ago. When was this? I don't remember — it was a couple of years ago — and I don't think anyone implemented it, and then I forgot what the consensus was. And now there's a new PR here. So I'm trying to jog my own memory and then either affirm the old consensus or get new consensus on some of the corners of this weirdness. Quick recap: the typed array stuff still has non-interoperable corners. It behaves strangely when composed with other features. Namely, the strangeness here is what happens when you put a TypedArray on the prototype chain of a regular object. As you can see on this slide, which is from when I presented this originally: if you have a TypedArray on the prototype and then you assign to an index, what do we want to happen? And we're kind of bound by web reality here, in that this NASA thing broke when we tried to change it. So what is not up for discussion? Currently, I think the spec says something like: if you use a string key and there is a TypedArray on the proto chain, it always goes through the integer-indexed exotic [[Set]], or something like that, meaning you never go through OrdinarySet. That is, the TypedArray implementation of [[Set]] kind of overrides OrdinarySet the way setters do, except this "setter" is really broad. What was the proposed change last time, to keep the NASA thing working? (That was the main example we found that broke, but I guess we consider it a bellwether for other code on the web depending on the behavior.) The proposed change last time was that if a TypedArray exists on the prototype and you assign a string-keyed property through it, the TypedArray's [[Set]] would check whether the receiver is the same as the target — "am I on the prototype chain, or are you actually setting a TypedArray?" If it is on the prototype chain, meaning the receiver and the target are not the same, then it falls through to OrdinarySet, meaning you would actually assign to the receiver — an actual own property with value 20 at index 1 in this example — instead of writing to the TypedArray on the prototype. So that was the consensus that was gotten before. Before I move forward with the rest of the presentation on the new PR from Alexei from Apple: are there any questions here? Does anyone disagree with this? I'm just recapping what I think the consensus from a few years ago was.

+SYG: So this is something we got consensus for a while ago. When was this? I don't remember — it was a couple of years ago — and I don't think anyone implemented it, and then I forgot what the consensus was. And now there's a new PR here. So I'm trying to jog my own memory and then either affirm the old consensus or get new consensus on some of the corners of this weirdness. Quick recap: the typed array stuff still has non-interoperable corners. It behaves strangely when composed with other features. Namely, the strangeness here is what happens when you put a TypedArray on the prototype chain of a regular object. As you can see on this slide, which is from when I presented this originally: if you have a TypedArray on the prototype and then you assign to an index, what do we want to happen?
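A minimal sketch of the scenario SYG describes (not taken from the slides), assuming the previously agreed receiver-vs-target behavior; engines have historically differed here:

```js
// A TypedArray on the prototype chain of an ordinary object.
const ta = new Int8Array(8);
const obj = Object.create(ta);

// Assigning through an integer index: the receiver (obj) is not the target (ta),
// so under the agreed change the TypedArray's [[Set]] falls through to OrdinarySet
// and creates an own property on the receiver instead of writing to the TypedArray.
obj[1] = 20;

console.log(Object.getOwnPropertyDescriptor(obj, "1")?.value); // 20 (per the agreed semantics)
console.log(ta[1]); // 0 — the TypedArray on the prototype is untouched
```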
And we're kind of bound by web reality here in that this NASA thing broke when we try to change this. So what is not up for discussion? I think, is that so, currently? I think the spec says something like, if you use a string key. And there is a TypedArray on the Proto chain, like it always goes through the integer, exotic index set for something like that. Meaning that you never go through OrdinarySet. Meaning like, the TypedArray. The TypedArray, implementation of bracket bracket set bracket bracket, kind overrides ordinary set as setters work, except this setter is like really broad. What's the proposed change? Last time? that to kind of keep the NASA thing working? And that was the main example that that we found that broke, but the I guess we consider that a Bellwether of other code on the web depending on the behavior, the proposed change last time. was that if a TypedArray exists on the Prototype and you assign a string keyed property to it. The setter for the TypedArray, would check if the receiver is the same as the target meaning: “Am I on the Prototype chain or Are you actually setting a TA?” if I'm I'm on the Prototype chain, meaning, the receiver and the target are not the same, then it falls through to ordinary set meaning that you would actually assign to, In this case. Not the Proto, what you you would actually assigned. And an actual property at one in this example, 20 instead of the TypedArray, the approaching. So that was the consensus that was gotten before I move forward with the rest of the presentation on the new PR from Alexei from Apple. Are there any questions here? Does anyone disagree with this? I'm just recapping what I think is to the consensus from a few years ago. (_silence_) @@ -116,13 +116,13 @@ SYG: Cool. Cool. Thanks. So. yeah, to the extent that any TypedArray of this gaz (_silence_) -SYG: I'll take the silence as a people. Agree / /, don't care. So I will consider ASH’s PR to have consensus which due to reiterate. Again means that for out of bound indices they do not fall through to ordinary sets for typed arrays that are on the Proto chain. All right, that's it. +SYG: I'll take the silence as a people. Agree / /, don't care. So I will consider ASH’s PR to have consensus which due to reiterate. Again means that for out of bound indices they do not fall through to ordinary sets for typed arrays that are on the Proto chain. All right, that's it. MAH: I was actually I put something on a little late. I'm confused. Yeah, what I was reading from the PR is that it ends up, if you If typed array is on the Prototype, it ends up as in own property of set in down ends up as an own property on the receiver, which basically behaves pretty much like the ordinary set. If so, you're saying, if you're out of bounds, it wouldn't that. And the set would end up setting nothing. Is there a reason for the discrepancy between what is the reason for the discrepancy between inbound and out of bounds? Or and why not? Just I mean it's weird, but why have more weirdness? Why not make the weirdness consistent? SYG: consistent in which direction? -MAH: in the direction that any if you reach TypedArray and you do a set. So, I guess I need one more clarifications. All right, out of bound being non integer or any string or property, right? If you do foo instead of something numeric you, what is the behavior currently? Sorry +MAH: in the direction that any if you reach TypedArray and you do a set. So, I guess I need one more clarifications. 
All right, out of bound being non integer or any string or property, right? If you do foo instead of something numeric you, what is the behavior currently? Sorry SYG: The behavior currently is currently spec. I guess, you mean. Yeah, the let's see. Where is this set? @@ -140,12 +140,9 @@ MAH: Okay, it just feels weird. It feels weird. F for inbound. and for non integ SYG: So, currently the spec says, for all canonical numerics, basically suppress it. and we can't do. So it's consistent now, quote, unquote, but we cann’t do it because people don't implement this behavior and when we try to it broke NASA. So we did this carve out here to have it fall through to ordinary set. And the current question is there's non interruptive you if we go, let me go back to the convo. There is non-interop. Where's the comment? This one. I have a comment replying to something. There's interopnon-interop among engines between in bounds and out of bounds. In SpiderMonkey difference -_Gap in notes: recap_ -All engines differ -May be web compatible but some engines have a larger user base -Some engines may be very slow to implement this +_Gap in notes: recap_ All engines differ May be web compatible but some engines have a larger user base Some engines may be very slow to implement this -MAH: I think my response would be that it's so you're saying the minimal change here is minimal and implementation, but it is not exactly a minimal in minimizing the weirdness Like yeah, I think from the conceptual point of view. I would prefer to have the same behavior throughout if you which is instant the same behavior throughout what axis throughout what axis like this is the same. behavior throughout the type of properties, if it's whether it's it's numeric in them. You're out of bounds or non-numeric old shouldn't end up setting on the receiver. +MAH: I think my response would be that it's so you're saying the minimal change here is minimal and implementation, but it is not exactly a minimal in minimizing the weirdness Like yeah, I think from the conceptual point of view. I would prefer to have the same behavior throughout if you which is instant the same behavior throughout what axis throughout what axis like this is the same. behavior throughout the type of properties, if it's whether it's it's numeric in them. You're out of bounds or non-numeric old shouldn't end up setting on the receiver. SYG: Okay, that's a final opinion to have. What if I told you this kind of corner is so low priority that if we got that consensus to change the current V8 behavior, and we don't we're not going to do it for a few years. Does that change your opinion? @@ -201,7 +198,7 @@ YSV: so in for us and since we're going be infected by changing it, we are happy USA: There's nothing on the queue and I hear explicit consensus from the engines. Okay, I think you're good to go. - All right. Thank you +All right. Thank you ### Conclusion/Resolution @@ -217,7 +214,7 @@ Presenter: Justin Ridgewell (JRL) JRL: I'm talking about removing a job from adoption of thenable. So I'm going to start off with the little question. First off pretend ticks here is going to increment every single tick, every micro task. Every time it's going to increment by one. I want you to think about when do the particular log statements actually logged in which tick. Hopefully we all agree that the tick for the console A log will happen immediately. It'll be in the exact same tick as the executing code. 
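The slide code isn't captured in the notes; a hedged reconstruction of the kind of example JRL describes (values and labels are illustrative) is:

```js
// Imagine `tick` as a counter that increases by one at every microtask checkpoint.
new Promise(resolve => {
  console.log('A'); // tick 0 — the executor runs synchronously
  resolve(1);       // resolving with a primitive settles the promise immediately
})
  .then(() => {
    console.log('B'); // tick 1 — `then` callbacks run one tick after the promise settles
    return 2;         // returning a primitive settles the chained promise in this same tick
  })
  .then(() => {
    console.log('C');          // tick 2
    return Promise.resolve(3); // returning a *promise* — when does D log?
  })
  .then(() => {
    console.log('D'); // tick 5 under the current spec: thenable adoption costs two extra ticks
  });
```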
That's because the executor is called synchronously by the promise constructor. We are resolving this Promise with a primitive value. And so, the promise itself is immediately settled with the primitive value, `then` callbacks happen after a single tick after the promise itself is settled. If the original promise settled in tick 0, then the callbacks would be called in tick one. And so, the B statement here is going to be called in tick 1. We're now returning a primitive value B. And then the that promise that was created by the then will be settled in that same tick in, tick one. And so the console log for statement C, it's going to happen in tick 2 because it again happens after a single tick after previous promise is settled. In C, we are now returning a promise, we're not returning a primitive value anymore, it is now a promise for a primitive value. I want you to think about when does D log and it's going to be a little surprising. JRL: Why is it surprising requires you to have a lot of context. And I'm sorry, but this is going to be a bunch of very code heavy slides that are coming up back-to-back. Essentially the way that the promise Constructor works in the spec is it calls the executor the function that you pass to new promise constructor. It calls it immediately with resolve and reject. Now resolve is an internal function that we create in the spec and its name Promised Resolve Functions. I've implemented the the important parts of the spec in the code slide here, but I want to note that the function that we pass to a promise constructor is the executor and the resolve parameter is the excutor's resolve. I'm going to call it the executor’s resolve so that we don't get it confused with Promise.resolve. So, in the executors resolve, there are a couple of checks. Is the value that I'm resolving with, is it an object? If it's not, then we can immediately fulfill the promise with the inner value. If it is an object, we get its `then` property synchronously right now, in the same tick as the executor is called. So if the `then` property is not a function, then we immediately fulfill the promise with the inner value. And now if then is a function, we call then after a single microtick. Once we have the then function, we know it's a Thenable. We wait one tick before we call the `then` with the inner value and a spec function that will allow us to settle the outer promise. JRL: What does `then` itself do? Well, it's complicated again, but `then` gets the symbol species of the promise. It's hard to describe because of the way the spec is written, but essentially what it does is it constructs a brand-new promise out of the symbol species. The new promise that we're creating waits a tick before it resolves with the result of the onFulfilled callback's return value. So when you call `then` it takes that onFulfilled callback, it waits a tick, and it resolves the newly constructed promise. -JRL: Linearizing that, if you're piecing this all together what has happened here? What we're doing is creating a new promise that's line 7 and 8 here, we are immediately calling the function, the executors resolve, with the inner value, which is the constructed promised a that I create on line 2 now, the inner value. That function becomes line 10 and 11, because the inner value is itself, a promise. We wait one tick before we call the the inners `then` with the function that will settle the outer promise. Then itself waits one tick before it calls, the setterOuter function. 
So before we can even settle the outer, we have to wait two ticks. And then finally, on lines 16 and 17, we wait one tick after the outer has settled to call the logging function. So essentially there are two ticks required to adopt an inner thenable, and then one more tick to fire the chained callbacks on the outer promise — essentially three ticks for the next action to happen.

+JRL: Linearizing that — if you're piecing this all together, what has happened here? What we're doing is creating a new promise on lines 7 and 8 here; we immediately call the executor's resolve with the inner value, which is the constructed promise A that I create on line 2. That takes us to lines 10 and 11, because the inner value is itself a promise: we wait one tick before we call the inner promise's `then` with the function that will settle the outer promise. `then` itself waits one tick before it calls the settleOuter function. So before we can even settle the outer, we have to wait two ticks. And then finally, on lines 16 and 17, we wait one tick after the outer has settled to call the logging function. So essentially there are two ticks required to adopt an inner thenable, and then one more tick to fire the chained callbacks on the outer promise — essentially three ticks for the next action to happen.

JRL: And so if you follow that through our original question: when does D log? It logs in tick five. We have to wait three whole ticks for this to happen. I want to change this because it affects the way that we do async/await. All async functions are really just fancy spec machinery for a promise chain. And so if you were to run these four async functions, you have to think about when each one actually logs its result.

JRL: On lines 4 and 5, what I'm doing is creating an async function and then immediately returning a primitive value, so that async function is settled in the first tick and the `then` callback can be fired on the next tick, tick one. In the next example, I'm awaiting a value and then returning the awaited value. So when does that happen? Well, we wait one tick because we're awaiting, and then immediately settle the promise with the awaited value. So here it logs on tick 2. Lines 11 through 13 — this is where it gets tricky. What happens if you return a promise inside the async function? This is a native promise, which is also a thenable, and it could have been a fetch or any other API that returns a native promise. If you return a native promise, what happens? Well, it has to adopt the state, which is two ticks, and then it has to fire its chained `then` callbacks, which is another tick. So C here can't fire until tick 3. And finally, we have lines 15 through 17. What I'm doing here is awaiting a native promise, returning the awaited value, and then chaining off of that. D here actually fires in tick two, because we don't have to adopt the state of the promise here — await has special magic that allows us to quickly get the value of the native promise. So I want you to pay attention to line 12 and line 16 here. It's faster to await a native promise and then return that value than it is to directly return a native promise. It takes an extra tick just for the adoption. So it's faster to do an await-then-return than it is to just return a promise directly.

JRL: This bites us all over the place. So, what I want to do is remove the only tick that we can.
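A hedged reconstruction of the four async functions JRL walks through above (identifiers are illustrative; tick counts are as he describes them under the then-current spec):

```js
const native = () => Promise.resolve('x'); // stand-in for fetch() or any API returning a native promise

(async () => 1)()
  .then(() => console.log('A'));                     // tick 1 — settled immediately with a primitive

(async () => { const v = await 1; return v; })()
  .then(() => console.log('B'));                     // tick 2 — one tick for the await

(async () => { return native(); })()
  .then(() => console.log('C'));                     // tick 3 — adopting the returned promise costs two ticks

(async () => { return await native(); })()
  .then(() => console.log('D'));                     // tick 2 — `return await` skips the adoption penalty
```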
JRL: The tick that happens when you're trying to settle a promise with a thenable actually doesn't need to be there. According to the Promises/A+ spec, it's not supposed to be there. We created it I-don't-know-how-many years ago to get around a security issue, but we did it really, really badly. Unfortunately, we're paying for the security fix without getting any of the security benefits. So I just want to remove the tick entirely.

@@ -492,7 +489,7 @@ Presenter: Daniel Rosenwasser (DRR)

- [proposal](https://github.com/tc39/proposal-array-find-from-last/)
- [slides](https://github.com/DanielRosenwasser/findLast-and-findLastIndex-for-Stage-4/raw/main/findLast%20%26%20findLastIndex%20for%20Stage%204%20(TC39%20June%202022).pdf)

-DRR: Great. Okay. I'm here on behalf of both me and KWL to present findLast and findLastIndex for stage 4 — hopefully a quick presentation. KWL has been driving most of the hard work. For the most part we have everything that we need for stage 4; I have links here if necessary. We've got the stage 4 criteria as a tracking issue where we link out to more things, and a list tracking implementations that have already shipped something or are on track to being implemented. Since we last presented, the only change is a minor typo fix, so no major changes. Just in case we need it, I have the specific spec text in the slides as a refresher. Much of this is kind of a mash-up of things like lastIndexOf, find, and findIndex, right? So I don't think we're going to spend too much time on these steps, but if you need them, they're here. These are new sections, and the only modified section is the one on unscopables. So: we have test262 tests, we have spec text as a PR ready to be merged, and we have multiple implementations shipping or pretty close to being shipped. So we should meet the stage 4 qualifications. And with that, I would like to ask for stage 4.

+DRR: Great. Okay. I'm here on behalf of both me and KWL to present findLast and findLastIndex for stage 4 — hopefully a quick presentation. KWL has been driving most of the hard work. For the most part we have everything that we need for stage 4; I have links here if necessary. We've got the stage 4 criteria as a tracking issue where we link out to more things, and a list tracking implementations that have already shipped something or are on track to being implemented. Since we last presented, the only change is a minor typo fix, so no major changes. Just in case we need it, I have the specific spec text in the slides as a refresher. Much of this is kind of a mash-up of things like lastIndexOf, find, and findIndex, right? So I don't think we're going to spend too much time on these steps, but if you need them, they're here. These are new sections, and the only modified section is the one on unscopables. So: we have test262 tests, we have spec text as a PR ready to be merged, and we have multiple implementations shipping or pretty close to being shipped. So we should meet the stage 4 qualifications. And with that, I would like to ask for stage 4.

RPR: All right, any objections to stage 4? Right. Any plus ones? We have one +1 from JHD. And a bit more — yeah, LCA with a +1. All right, I think we can call it. Congratulations, you have stage 4.

@@ -507,9 +504,9 @@ DRR: Thank you very much. All right. Thanks everyone.
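For reference, the behavior of the two methods just advanced to stage 4:

```js
const values = [1, 2, 3, 4, 5];

values.findLast(n => n % 2 === 0);      // 4  — last element satisfying the predicate
values.findLastIndex(n => n % 2 === 0); // 3  — its index
values.findLast(n => n > 10);           // undefined — no match
values.findLastIndex(n => n > 10);      // -1
```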
Thank you, Daniel with ear Presenter: Leo Balter (LEO) - [proposal](https://github.com/tc39/proposal-shadowrealm/issues/365) -- [slides]() +- slides -LEO: Okay. Yeah, I don't have many contents here. This is a part. 1 of 1. Let me share my screen quickly. hmm, don't have anything like such as big slides on your thing, huh? so yeah, this is just like status updates on the stage, three couple, couple slides. I guess you can see my screen, right? Yes. Okay. So this is a part 1 of 3 for shadow realm. So we also have like a item that was headed by SYG which is we can consider this part two they not done. Did they? Follow specific order. And we have a part three, which is regarding the html integration, which might need more time. And it's also like a sad for extra discussion. Implementation status. We have Webkit. Actually Safari technology prefer already showing up, initial implementation of ShadowRealms. There are still details being discussed. We are aware of that and I as I believe like the Apple team is as well. We're so we have like a few details but like mostly, we are using these ShadowRealms implementation to also test things internally which are running great with good performance. I know there's a lot of ongoing implementation on Firefox. A lot notifications coming from Mozilla and we have ongoing implementation going on in Chrome. So things are still like, active. I think this is the best status to report right now, still active, but there isn't anything like really new to test on other than small normative PRs for the work here one. Is that one of them mostly reporting. The HTML GGA which was just an issue raised for regarding when we move a `document.all` across Shadow realms. because they're, they're just awful and I can get like weird results. The problem with `document.all` with, We've all the problems It has. It's not excuse me. It's to somehow callable. So they have an internal ‘call’, call, but they have different. results, if you give different arguments, and so, yeah, you can, there's wrapped the cross or Realms. I understand. There is no. We want to give no special treatment here in. So let me remember because this is not Amazing. Okay. Yeah, so for the behaviors that we might predict it. GG ayy is actually wrapped as call callable. This is the status quo like that means. If you try to transfer `document.all`, like it's just gonna be wrapped across the other realm with whatever results they have a cross and we could actually see the other options. I think the number three here, isn't it dda as an argument throws with type error? we consider you to just be an object or exactly callable that would actually require some extra work in the steps, like special treatments for something. is in annex B. The for my understanding. I think we should just keep this out of scope. But let everyone know that we are like we get we are given intentionally giving those special treatment to treatment to DDA crossrealms if they do. Have an internal call know regardless like how awful they can be status quo is just like the way to give them. I think this gives good path the way they on You're going the future on how to be moved and modernized. If we do, we can add things to the queue to discuss this, but my vote is just like to keep on static School, unless someone has any requirement. We can discuss it. I will move on for now to the other part, +LEO: Okay. Yeah, I don't have many contents here. This is a part. 1 of 1. Let me share my screen quickly. 
hmm, don't have anything like such as big slides on your thing, huh? so yeah, this is just like status updates on the stage, three couple, couple slides. I guess you can see my screen, right? Yes. Okay. So this is a part 1 of 3 for shadow realm. So we also have like a item that was headed by SYG which is we can consider this part two they not done. Did they? Follow specific order. And we have a part three, which is regarding the html integration, which might need more time. And it's also like a sad for extra discussion. Implementation status. We have Webkit. Actually Safari technology prefer already showing up, initial implementation of ShadowRealms. There are still details being discussed. We are aware of that and I as I believe like the Apple team is as well. We're so we have like a few details but like mostly, we are using these ShadowRealms implementation to also test things internally which are running great with good performance. I know there's a lot of ongoing implementation on Firefox. A lot notifications coming from Mozilla and we have ongoing implementation going on in Chrome. So things are still like, active. I think this is the best status to report right now, still active, but there isn't anything like really new to test on other than small normative PRs for the work here one. Is that one of them mostly reporting. The HTML GGA which was just an issue raised for regarding when we move a `document.all` across Shadow realms. because they're, they're just awful and I can get like weird results. The problem with `document.all` with, We've all the problems It has. It's not excuse me. It's to somehow callable. So they have an internal ‘call’, call, but they have different. results, if you give different arguments, and so, yeah, you can, there's wrapped the cross or Realms. I understand. There is no. We want to give no special treatment here in. So let me remember because this is not Amazing. Okay. Yeah, so for the behaviors that we might predict it. GG ayy is actually wrapped as call callable. This is the status quo like that means. If you try to transfer `document.all`, like it's just gonna be wrapped across the other realm with whatever results they have a cross and we could actually see the other options. I think the number three here, isn't it dda as an argument throws with type error? we consider you to just be an object or exactly callable that would actually require some extra work in the steps, like special treatments for something. is in annex B. The for my understanding. I think we should just keep this out of scope. But let everyone know that we are like we get we are given intentionally giving those special treatment to treatment to DDA crossrealms if they do. Have an internal call know regardless like how awful they can be status quo is just like the way to give them. I think this gives good path the way they on You're going the future on how to be moved and modernized. If we do, we can add things to the queue to discuss this, but my vote is just like to keep on static School, unless someone has any requirement. We can discuss it. I will move on for now to the other part, JHD: Yeah, Yeah, just means `document.all` is already getting special treatment and it depends on which perspective you have, right? If you're looking at spec text, special casing things for `document.all` might be seen as special treatment, but I think the more important model is that `document.all` being callable is the weird thing - we can’t get rid of it because of web compatibility. 
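To ground the `document.all` question, a sketch of option 1 (wrap it as a callable) as discussed; the code assumes a browser host and the stage 3 ShadowRealm API, and the comments reflect the behavior described above rather than a shipped implementation:

```js
const realm = new ShadowRealm();

// A function evaluated inside the realm; it receives a value from the caller realm
// and returns a primitive, so the result can cross the callable boundary back out.
const probe = realm.evaluate(`(value) => typeof value`);

// document.all has [[Call]], so under option 1 it is wrapped as a callable rather
// than being rejected with a TypeError. The wrapper inside the realm is an ordinary
// wrapped function, so document.all's quirks (typeof "undefined", falsiness) do not
// carry across — which is what "no special treatment" means here.
console.log(typeof document.all); // "undefined" in the caller realm
console.log(probe(document.all)); // "function" inside the realm (under option 1)
```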
@@ -545,7 +542,7 @@ LEO: Okay, good. Yeah, so I'm asking for consensus on Sundays. RPR: No objections. I think you have a consensus. -LEO: Sounds good. Yeah, I'll finish this later for now. These are fine. I'm not sure if JHD’s back. Asked, again about DDA you one. Yeah, I was able to ask. Okay, So JHD, I was telling and with support from DE. as a champion. I am proposing. The number one. Lucien, I understand like the number three would be ideal for some sort of like user perspective, But we, Our intention is to give no extension of the special treatment for DDA in. I think it's fine as this, my question for you was if you actually have my question for you is, if you do have objections for number one, +LEO: Sounds good. Yeah, I'll finish this later for now. These are fine. I'm not sure if JHD’s back. Asked, again about DDA you one. Yeah, I was able to ask. Okay, So JHD, I was telling and with support from DE. as a champion. I am proposing. The number one. Lucien, I understand like the number three would be ideal for some sort of like user perspective, But we, Our intention is to give no extension of the special treatment for DDA in. I think it's fine as this, my question for you was if you actually have my question for you is, if you do have objections for number one, JHD: not in this case, but I don't want this to be a precedent. I would push towards banning `document.all` from new places, where conceptually a function is expected. But I don't have to push for that change here. @@ -553,11 +550,11 @@ LEO: Okay, that does it mean can we close this issue or? Considering like spec w MM: I heard JHD agree. So yes, yes. Y -JHD: Unless there's someone else feel strongly with option three. +JHD: Unless there's someone else feel strongly with option three. RPR: So it's yes. It sounded like one was acceptable to all. There was not any objections to one, so it sounds like that is the conclusion. -LEO: Okay. In a, just a trying to sort of summarize. What is coming next? We have the topic for a second part from sure. I think it's following up Now this SYG is driving and we have topics about HTML integration, which is for another part that we should see discuss this for tomorrow within 60 Minutes, anything that we need to revisit. We can. can be my find time tomorrow, but yeah, and we have things coming forward to meeting in July. We I just needed more time. I didn't have time prepare anything for this. call regarding the following PRs. And with that, I end my screen sharing. +LEO: Okay. In a, just a trying to sort of summarize. What is coming next? We have the topic for a second part from sure. I think it's following up Now this SYG is driving and we have topics about HTML integration, which is for another part that we should see discuss this for tomorrow within 60 Minutes, anything that we need to revisit. We can. can be my find time tomorrow, but yeah, and we have things coming forward to meeting in July. We I just needed more time. I didn't have time prepare anything for this. call regarding the following PRs. And with that, I end my screen sharing. RPR: Thank you LEO. @@ -568,13 +565,13 @@ Presenter: Ujjwal Sharma (USA) - [proposal](https://github.com/tc39/proposal-temporal/pulls?q=is%3Aopen+is%3Apr+milestone%3A%22Next+batch+of+normative+changes%22) - [slides](http://ptomato.name/talks/tc39-2022-06/) -USA: Perfect. Thank you. Okay. Okay. Hello folks. Hello again, for the third time today. today. temporal now. So sorry, just one moment. Thank you. Okay, so, what's up? For example, you might ask. 
Well this. If you've attended last few meetings, might find this familiar. This is similar to what we asked for consensus on the last meeting. This is mostly several minor normative changes. They can be in two categories. There is on the one hand, there's changes suggested by implementers essentially implementer feedback. We call them adjustments. Or changes that we need to make the spectators more accurately reflect the decisions. Essentially uncovering old spec bugs. Well, one thing that I'd like to point out is that there's nothing major, this time. It's just tiny fixes. So if anything that helps. We will convey the game plan, which is that you'll continue to turn implementer feedback into presentations. It's like this hopefully Islam until we reach stage for but the number of normative PR's between two. plenary meetings are still decreasing. So perhaps we would not need them in the next few months. The ITF update from my side is that we're only on the last few legs. I feel this relief for the final reading. Buddies are however, there are some bikesheds that are bothersome but should be fixed. There's broad consensus within the working group on the syntactic details and the RFC should be published soon, but it's hard to hard to say when. Regarding the adjustments that we've made to the spec. First of all, there were some concerns regarding non for two, implementations with calendars. So currently an implementation that does not have a support internationalization is constrained to support, Only the iso 8601 calendar, but we remove that because well, for instance, a internationalization fitted in implementation might want to support. So for example calendars, like the Gregorian calendar for instance, but it does require that any supported calendars are from the said that are supported by Angel. So, you cannot confirm arbitrary calendars, you can selectively support. calendars and implementation becoming too capable should not remove functionality either, so they're still constraints, but we're relaxed some of the requirements that did in sort of match the understanding of the implementers. Next up, we have the next PR which is the order of operations in date, from fuels. the changes in the order of thrown exceptions in a really sort of case are, maybe the example would make it clear. So if you call calendar dot date from Fields with this together, it throws the Site before and in the type Ever After. So we fix that. It makes the spec nicer to read and it somehow wasn't uncovered in the stage 3 reviews. Sorry for that regarding the We removed and errors. Sorry. I can't speak with her new and earnest run check. So there was a faulty rain, check in Temporal. Pull in from it is fixed now. So some valid strings were previously invalid, now, that's that's fixed. We also changed the time zone name grammar. So time zone strings with UTC offset. were not in the grammar. Now. More invalid strings, that would have been valid. So like if you check out this the current spec throws, the intended outcome of the Champions group was that it could be valid. Just the same way, Beyond the time zones, Develop. So that's X now. there's also, we also fixed a mistake in the exact time rounding. And so the rounding method on instant was buggy, that's that's fixed now. We start using null Proto objects in more places. So if you yeah, sorry, it guards against some of the odd cases where you can put in all sorts of objects.And now go it's It does the same sort of probably property look up. They are also fixed. 
Yeah, we validate the Overflow option in the from method now. So this is basically similar to the previous one, but it helps with consistency with property bags. and yeah, so those are all the changes. Please let me know what you think about them +USA: Perfect. Thank you. Okay. Okay. Hello folks. Hello again, for the third time today. today. temporal now. So sorry, just one moment. Thank you. Okay, so, what's up? For example, you might ask. Well this. If you've attended last few meetings, might find this familiar. This is similar to what we asked for consensus on the last meeting. This is mostly several minor normative changes. They can be in two categories. There is on the one hand, there's changes suggested by implementers essentially implementer feedback. We call them adjustments. Or changes that we need to make the spectators more accurately reflect the decisions. Essentially uncovering old spec bugs. Well, one thing that I'd like to point out is that there's nothing major, this time. It's just tiny fixes. So if anything that helps. We will convey the game plan, which is that you'll continue to turn implementer feedback into presentations. It's like this hopefully Islam until we reach stage for but the number of normative PR's between two. plenary meetings are still decreasing. So perhaps we would not need them in the next few months. The ITF update from my side is that we're only on the last few legs. I feel this relief for the final reading. Buddies are however, there are some bikesheds that are bothersome but should be fixed. There's broad consensus within the working group on the syntactic details and the RFC should be published soon, but it's hard to hard to say when. Regarding the adjustments that we've made to the spec. First of all, there were some concerns regarding non for two, implementations with calendars. So currently an implementation that does not have a support internationalization is constrained to support, Only the iso 8601 calendar, but we remove that because well, for instance, a internationalization fitted in implementation might want to support. So for example calendars, like the Gregorian calendar for instance, but it does require that any supported calendars are from the said that are supported by Angel. So, you cannot confirm arbitrary calendars, you can selectively support. calendars and implementation becoming too capable should not remove functionality either, so they're still constraints, but we're relaxed some of the requirements that did in sort of match the understanding of the implementers. Next up, we have the next PR which is the order of operations in date, from fuels. the changes in the order of thrown exceptions in a really sort of case are, maybe the example would make it clear. So if you call calendar dot date from Fields with this together, it throws the Site before and in the type Ever After. So we fix that. It makes the spec nicer to read and it somehow wasn't uncovered in the stage 3 reviews. Sorry for that regarding the We removed and errors. Sorry. I can't speak with her new and earnest run check. So there was a faulty rain, check in Temporal. Pull in from it is fixed now. So some valid strings were previously invalid, now, that's that's fixed. We also changed the time zone name grammar. So time zone strings with UTC offset. were not in the grammar. Now. More invalid strings, that would have been valid. So like if you check out this the current spec throws, the intended outcome of the Champions group was that it could be valid. 
Just the same way, Beyond the time zones, Develop. So that's X now. there's also, we also fixed a mistake in the exact time rounding. And so the rounding method on instant was buggy, that's that's fixed now. We start using null Proto objects in more places. So if you yeah, sorry, it guards against some of the odd cases where you can put in all sorts of objects.And now go it's It does the same sort of probably property look up. They are also fixed. Yeah, we validate the Overflow option in the from method now. So this is basically similar to the previous one, but it helps with consistency with property bags. and yeah, so those are all the changes. Please let me know what you think about them YSV: I'm going to the queue. It looks like it's empty at the moment. If anyone has any comments, please feel free to add yourself there. … I'm not seeing anything you want to ask for consensus? USA: Yeah. -YSV: All right. YSV: Looks like you have it. Wait. +YSV: All right. YSV: Looks like you have it. Wait. ### Conclusion/Resolution @@ -586,13 +583,13 @@ Presenter: Shu-yu Guo (SYG) - [proposal](https://github.com/tc39/proposal-shadowrealm/issues/353) -SYG: If you're there with errors, across the ShadowRealm. Yes, I am. Okay. So I have no slides here. It's fair. I don't think the solution space is probably simple. I think I think of the problem, the motivation, for the problem. The solution we're trying to solve is fairly simple and note that there is no concrete, normative change being applied for here, but I want to kind of get the discussion going on. what the actual invariants are that we are trying to preserve and and get some clarity on that and then the plan is to come back later with actual normative change. So, the problem at hand is that shadow Realms have this callable boundary? And the whole idea is that we build in a boundary? Into the API. So that you cannot accidentally intermingle the object graphs of things in the shadow realm with things in the outside realm. This is done by the callable boundary, where the only things that are allowed to pass through the boundary, are primitives, and callables with callable objects being automatically wrapped. All other objects throw when you do when they try to cross the boundary. +SYG: If you're there with errors, across the ShadowRealm. Yes, I am. Okay. So I have no slides here. It's fair. I don't think the solution space is probably simple. I think I think of the problem, the motivation, for the problem. The solution we're trying to solve is fairly simple and note that there is no concrete, normative change being applied for here, but I want to kind of get the discussion going on. what the actual invariants are that we are trying to preserve and and get some clarity on that and then the plan is to come back later with actual normative change. So, the problem at hand is that shadow Realms have this callable boundary? And the whole idea is that we build in a boundary? Into the API. So that you cannot accidentally intermingle the object graphs of things in the shadow realm with things in the outside realm. This is done by the callable boundary, where the only things that are allowed to pass through the boundary, are primitives, and callables with callable objects being automatically wrapped. All other objects throw when you do when they try to cross the boundary. -SYG: So, what does that mean for throwing errors? 
So currently the spec says something like, if the shadow realm code evaluation throws an error, we should re-throw a type error in the call learn. So like the caller. Calls into the shadow realm to do something. And this example calls this evaluate function, but it could also be an import value in case, you know, CSP is on or something. And if this, if the thing that is being evaluated itself throws, then the shadowRealm machinery. All the spec says, is the shadow realm Machinery, should a type error in the caller realm. So in this case with the caller of Shadow Realm .evaluate, the problem with that is that this makes Shadow Realms basically impossible to debug. We ran into this pretty quickly in the Prototype Tests implementation in Chrome where we were writing tests for import value, cause I wrote The implementation. And the tests were failing with just, this cannot import are and we're chasing it down. It turns out, it was just a stupid mistake. We forgot to upload some support files onto the test Runners And the error, the inner error was just “not found”, but the outer area was just generic, cannot evaluate or cannot import values. Value or something and the inner area was completely lost and took longer to debug and fix than it should have. You should have been like, a one minute thing. The file is not found. We can forget to update. So it's a, it's a, it's a major ergonomics pain point right now that the air is just kind of getting dropped. So the discussion is, How should we make this more ergonomic and actually debuggable during development. We should not obviously define the callable boundaries such that, you know, the error itself is passed through, but how do we preserve the actionable information when we rethrow the error in the outside realm is the general question. I know that the SES folks have opinions here. What should or should not be? Allowed to cross what should or should not be reproduced I guess. There's some difficulties. Let me see if I can find the right one. The right comment here. So, here are some questions to ask at least. Can we do things like, should we prohibit stacks from being preserved across the boundary stack serve continue, you do not be standardized and it seems to be desirable by all parties. It seems to be desired by all parties, rather that don't work across the boundary. that if the shadow realm either, we throw an error. It does not work. Stack that the kind of exposes the inner workings of the show around code. Another question is, how do we preserve the messages? Is this like eagerly reading the message property on the, the inner exception, the inner error and then copying it that you know, has observable normative effects. like it calls the public cost possible Getters on the inner error. Should we somehow take a fast path and check only for Native errors, the last question is something that has been raised that I have a pretty strong opinion on personally. And that question is whether we should just stick to which user thrown and platform thrown media platform, thrown errors to distinguish. The idea that user thrown errors errors caused by user code should just be kind of dropped but platform throws are just maybe getting their messages copied something. I feel that we should not. Wish users thrown in platforms rather than areas. Mechanically. 
I'm not sure how to deepen it, But just practically that, if we start distinguishing that, I think that has such large knock-on effects on the web ecosystem, all the downstream specs that it's just not a good idea to do for now. But anyway, these are some questions to see the discussion. And with that, let's open up. The floor for discussion. +SYG: So, what does that mean for throwing errors? So currently the spec says something like, if the shadow realm code evaluation throws an error, we should re-throw a type error in the call learn. So like the caller. Calls into the shadow realm to do something. And this example calls this evaluate function, but it could also be an import value in case, you know, CSP is on or something. And if this, if the thing that is being evaluated itself throws, then the shadowRealm machinery. All the spec says, is the shadow realm Machinery, should a type error in the caller realm. So in this case with the caller of Shadow Realm .evaluate, the problem with that is that this makes Shadow Realms basically impossible to debug. We ran into this pretty quickly in the Prototype Tests implementation in Chrome where we were writing tests for import value, cause I wrote The implementation. And the tests were failing with just, this cannot import are and we're chasing it down. It turns out, it was just a stupid mistake. We forgot to upload some support files onto the test Runners And the error, the inner error was just “not found”, but the outer area was just generic, cannot evaluate or cannot import values. Value or something and the inner area was completely lost and took longer to debug and fix than it should have. You should have been like, a one minute thing. The file is not found. We can forget to update. So it's a, it's a, it's a major ergonomics pain point right now that the air is just kind of getting dropped. So the discussion is, How should we make this more ergonomic and actually debuggable during development. We should not obviously define the callable boundaries such that, you know, the error itself is passed through, but how do we preserve the actionable information when we rethrow the error in the outside realm is the general question. I know that the SES folks have opinions here. What should or should not be? Allowed to cross what should or should not be reproduced I guess. There's some difficulties. Let me see if I can find the right one. The right comment here. So, here are some questions to ask at least. Can we do things like, should we prohibit stacks from being preserved across the boundary stack serve continue, you do not be standardized and it seems to be desirable by all parties. It seems to be desired by all parties, rather that don't work across the boundary. that if the shadow realm either, we throw an error. It does not work. Stack that the kind of exposes the inner workings of the show around code. Another question is, how do we preserve the messages? Is this like eagerly reading the message property on the, the inner exception, the inner error and then copying it that you know, has observable normative effects. like it calls the public cost possible Getters on the inner error. Should we somehow take a fast path and check only for Native errors, the last question is something that has been raised that I have a pretty strong opinion on personally. And that question is whether we should just stick to which user thrown and platform thrown media platform, thrown errors to distinguish. 
The idea that user thrown errors errors caused by user code should just be kind of dropped but platform throws are just maybe getting their messages copied something. I feel that we should not. Wish users thrown in platforms rather than areas. Mechanically. I'm not sure how to deepen it, But just practically that, if we start distinguishing that, I think that has such large knock-on effects on the web ecosystem, all the downstream specs that it's just not a good idea to do for now. But anyway, these are some questions to see the discussion. And with that, let's open up. The floor for discussion. MAH: Yeah, so I think my concern with passing messages through or anything that has a probing of the value being thrown seems incompatible with what we're trying to with the with, what the callable boundaries is currently doing. Currently the correct called boundary is just introspecting the type and letting it through or not, doesn't affect. the value through at all, here, you would have to start automatically probing the object itself, which, which seems like too much. And Regarding the point on user thrown versus platform throw error. The way I see it. There's it's, I wouldn't frame it like that. I would frame it. as is their user codes in the context inside the realm, where the or if is thrown or not. If there is no user code on the stack. There is no way for the user code to handle this exception at all. And so yes, there is a concern there and as far as I know that concern only ever happens during an import and the host is throwing an exception because it cannot resolve a part of the module graph. As far as I know. This is the only case where that can actually happen. And really the context here is Shadow realm is an advanced low-level API code. That executes it has to be at least the entry point, the, the first layer of across the callable boundary, as to be aware of the existence of the callable boundary and at that layer can catch all exceptions and report them as appropriate for that system. And so really this is, if the code executing inside the coal burner, he has no opportunity to do that. For example, as in the case of the module graph in exception being thrown away. unreasonable, when during the resolution of the module graph, -SYG: So concretely, are you saying that maybe we just special case errors thrown during import value during the module machinery? +SYG: So concretely, are you saying that maybe we just special case errors thrown during import value during the module machinery? MAH: I don't know how mechanically that would work. What I'm saying. In my mind, the operating guideline here should be, is their code executing in user code, executing inside, the shadow realm, that would be able to catch and handle the exception or not. @@ -602,7 +599,7 @@ MAH: ShadowRealm is a low-level API. And un-ergonomic in the first place. You ca RPR: Right, there is three minutes left on this time box. And the queue is quite deep. -KM: I guess, I guess I could start. I guess my question sort of was answered to some degree. one thing. thing. I don't understand what it's like talking about, I guess it's sort of a continuation of the last topic to some degree. Why is it like, isn't there or isn't it already possible for the creator of the realm to muck around with the internals, by just forcing Imports of its own files? What does it create? We shouldn't be overly worried about the outer realm seeing data from inside the realm. 
Like, like a, but I could imagine something where, like, the stack Trace is truncated inside the realm like and like to include or exclude frames from inside your realm. But that once you exit the realm like the outer context is allowed to look at the whole thing. Of course, thank you. I could imagine something like that. If that makes sense because I wouldn't guess you're eating. Yeah, group, debug abilities. +KM: I guess, I guess I could start. I guess my question sort of was answered to some degree. one thing. thing. I don't understand what it's like talking about, I guess it's sort of a continuation of the last topic to some degree. Why is it like, isn't there or isn't it already possible for the creator of the realm to muck around with the internals, by just forcing Imports of its own files? What does it create? We shouldn't be overly worried about the outer realm seeing data from inside the realm. Like, like a, but I could imagine something where, like, the stack Trace is truncated inside the realm like and like to include or exclude frames from inside your realm. But that once you exit the realm like the outer context is allowed to look at the whole thing. Of course, thank you. I could imagine something like that. If that makes sense because I wouldn't guess you're eating. Yeah, group, debug abilities. CP: I have a couple of comments on that very quickly. The problem with errors goes both ways because they can occur not only when you try to import something, but also, when you try to call something that might be a wrapped function, so we have to be careful about it. We just generalized the mechanism when an error occurs to shadow the error, so you don't see what's going on the other side. In the case of the importValue, we know who the caller is. @@ -622,9 +619,9 @@ KM: Right. It's just kind of questionable, If that's something that we should be MAH: I mean, you're talking about the module graph resolution, which is not some wholly fillable, and -LCA: I'm not talking about the module graphic solution. I don't think I'm just talking about generally platform versus user areas that she would bring up. +LCA: I'm not talking about the module graphic solution. I don't think I'm just talking about generally platform versus user areas that she would bring up. -MAH: Right. I didn't just make what I mentioned earlier, like, defining guidelines. Is there any? User code on the stack. and so if it's an error, that is raised without user code being evaluated, it is By definition. I don't believe it can be anything that wouldn't involve the shadow. Realm, being polyfilled as well. +MAH: Right. I didn't just make what I mentioned earlier, like, defining guidelines. Is there any? User code on the stack. and so if it's an error, that is raised without user code being evaluated, it is By definition. I don't believe it can be anything that wouldn't involve the shadow. Realm, being polyfilled as well. LCA: Sure, right. But like that, it only solves the errors coming from module import. much of the solution areas. It does not solve the general debugging use case, which maybe is fine if Shredder wants a low-level API here. Not this is not something that you meant @@ -644,7 +641,7 @@ YSV: Yeah maybe I'll clarify what my distinction is. I consider something like f MAH: I have a reply, some on the queue. I think that it addresses some of this. You? 
Yeah, you mentioned that this API only being used by a handful of organizations would be counter to the goal here, that it wouldn't really have an impact, and I disagree with that. I think if a few organizations build libraries that can be used by the broader community, it can have a huge impact, and the broader community doesn't need to be concerned with the complexity and un-ergonomics of this API if they can benefit from it. And I think the important part is that right now the callable boundary is not ergonomic by design, because of the security footguns that SYG and CP mentioned, and because of that you have to write specialized tools that run in it. My hope is still that we can have follow-on proposals that make ShadowRealm more usable: maybe built-in membranes, or built-in mechanisms that make it more seamless, right? But this would be just a building block here.

-YSV: Yeah. I worded my comment poorly. What I was worried about was that I didn't want to see a situation where this wasn't being used and that it became one of these mysterious parts of the JavaScript surface. For example, I think that one issue is that proxies are poorly understood. I think they're easier to use than what this API is shaping up to be. But I would like to see us be very careful when introducing things that are difficult for people to understand and make those choices. Really carefully, I think finalization groups and weak references was a good choice in that direction where we intentionally dropped economics as a goal from that. But I do think So, the reason that I bring this up is because there is this other issue, which is around how modules are imported into a shadow realm, and I don't understand why we're making it more difficult for users to do that import. And then I'm seeing a connection here where this is also. When hiding information from the user that we might surface, that may make it easier for them to understand why an error is occurring. So I'm questioning why we are not adding certain things easily. Ergonomic blocks for the user to use. Maybe this is just that I'm not fully. I follow Shadow Realms at a surface level. So I may simply not have the in-depth View.

+YSV: Yeah. I worded my comment poorly. What I was worried about was that I didn't want to see a situation where this wasn't being used and that it became one of these mysterious parts of the JavaScript surface. For example, I think that one issue is that proxies are poorly understood. I think they're easier to use than what this API is shaping up to be. But I would like to see us be very careful when introducing things that are difficult for people to understand and make those choices. Really carefully, I think finalization groups and weak references was a good choice in that direction where we intentionally dropped economics as a goal from that. But I do think So, the reason that I bring this up is because there is this other issue, which is around how modules are imported into a shadow realm, and I don't understand why we're making it more difficult for users to do that import. And then I'm seeing a connection here where this is also. When hiding information from the user that we might surface, that may make it easier for them to understand why an error is occurring. So I'm questioning why we are not adding certain things easily. Ergonomic blocks for the user to use. Maybe this is just that I'm not fully.
I follow Shadow Realms at a surface level. So I may simply not have the in-depth View. CP: I'm very sympathetic with that position. @@ -668,7 +665,7 @@ SYG: I think I have what I need. Thank you. Thank you. CP and other shadowRealm, RPR: Thank you for clarifying the way of putting things. -YSV: Yeah sorry about that. I, it was, I realized after I came out of my mouth, that it was sort of accusatory and I'm really sorry about that. That wasn't intentional. I just wanted to say that if we are making something unergonomic, it should be very intentional. Sorry. +YSV: Yeah sorry about that. I, it was, I realized after I came out of my mouth, that it was sort of accusatory and I'm really sorry about that. That wasn't intentional. I just wanted to say that if we are making something unergonomic, it should be very intentional. Sorry. RPR: That's alright. This is an example of how we can fix things. It's good to correct things. @@ -678,7 +675,7 @@ Presenter: Kevin Gibbons (KG) - [proposal](https://github.com/bakkot/proposal-duplicate-named-capturing-groups) -KG: Great. So, I don't have slides for this. It's a small change. I wasn't entirely sure if I wanted to do it through the proposal process or through a needs consensus PR, but the proposal process is, I think, easier to track things. Basically, we have named capture groups in regexes. Named groups are great. There is a restriction that they must be globally unique throughout the regular expression and sometimes that's a little annoying because you are trying to match something that can be written in two different ways. So, for example, here you might be trying to match something that is either a four digit year followed by a two digit month or a two digit month followed by a four-digit year, and the natural way that you would want to write this if you don't actually care which style it is written in is `/(?[0-9]{4})-[0-9]{2}|[0-9]{2}-(?[0-9]{4})/`. And then if you just want to extract the year you want to use the "year" name in both groups. But this regular expression is illegal in the current specifications, because "year" is duplicated. So this proposal is to relax restrictions specifically so that you can use capturing group names as long as they are in different alternatives. +KG: Great. So, I don't have slides for this. It's a small change. I wasn't entirely sure if I wanted to do it through the proposal process or through a needs consensus PR, but the proposal process is, I think, easier to track things. Basically, we have named capture groups in regexes. Named groups are great. There is a restriction that they must be globally unique throughout the regular expression and sometimes that's a little annoying because you are trying to match something that can be written in two different ways. So, for example, here you might be trying to match something that is either a four digit year followed by a two digit month or a two digit month followed by a four-digit year, and the natural way that you would want to write this if you don't actually care which style it is written in is `/(?[0-9]{4})-[0-9]{2}|[0-9]{2}-(?[0-9]{4})/`. And then if you just want to extract the year you want to use the "year" name in both groups. But this regular expression is illegal in the current specifications, because "year" is duplicated. So this proposal is to relax restrictions specifically so that you can use capturing group names as long as they are in different alternatives. 
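For illustration, a minimal sketch of the pattern KG describes, with the `year` group names written out explicitly; the relaxed pattern only parses in an engine that implements the proposed relaxation, and engines without it reject it with a SyntaxError:

```js
// Proposed relaxation: the same group name may appear in *different alternatives*,
// because only one of those alternatives can participate in any given match.
const re = /(?<year>[0-9]{4})-[0-9]{2}|[0-9]{2}-(?<year>[0-9]{4})/;

re.exec("2022-06").groups.year; // "2022" (year-first alternative matched)
re.exec("06-2022").groups.year; // "2022" (month-first alternative matched)
```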
KG: That there's other ways than "different alternatives" that you can statically prevent things from being reused, with lookahead and so on, but those are all very complicated whereas this is a very simple rule that says if two things are in different alternatives then they definitely can't both participate, or I should say they can't both participate except in repetition groups, but repetition groups, it is already established that only the last instance of the group controls. @@ -694,7 +691,7 @@ WH: I haven’t thought through all the implications of this. I don't know if th KG: I do, incidentally, have spec text written. It probably needs a rebase at this point. So, if you'd like to see how it's written down in more detail, the spec text is available. -MPC: Yeah, this is just pure clarifying question except not sure how it works currently, but what so what happens if you were to try to use this regex today, do you get an early error? +MPC: Yeah, this is just pure clarifying question except not sure how it works currently, but what so what happens if you were to try to use this regex today, do you get an early error? KG: Yes, it's an early error. @@ -748,7 +745,7 @@ Presenter: Kristen Hewell Garrett (KHG) - [slides](https://slides.com/pzuraq/decorators-normative-changes-2022-06) -KHG: Okay, it's okay. Sharing my screen. And so yeah, so today we just have a kind of normative update that were considering for the proposal that I wanted to bring to the committee. It was suggested RBN after the proposal, made it into stage three and basically it would change the way that initializers currently work to a little bit more flexible by allowing. users to specify when they're initializer should run from any decorator. So today, when you add an additional initializer, using the app at initializer method on the context of a decorator, that adds the initializer function to one of three possible stages of initialization, if it's an instance field or method then that initializer runs. The beginning of class, instance, initialization. So when a new instance is created, if it is static it runs. Prior to class static class field assignment. So, when the class itself is initialized and if it's a class decorator that it runs after the class has been fully initialized. So all static Fields have been assigned. This code example, kind of shows the different areas where these run essentially. So for instance. Initializers, they run before, the first class field is assigned. This is literally very close to how we transpile it actually in the Babel plugin. For instance or static fields and static initializers. They run prior to the first static bill being signed, and then for class initializes, they run right after the class is defined. so, the proposal would change it so that you could specify the three placements as the first option to add an initializer. You would be required to specify one of the displacements actually. And that would allow it to be placed, you know, in one of those three locations the main two reasons to do this. The first one is that it provides more flexibility overall. There are a number of use cases where you have for instance, and instance method or field that wants to provide do some setup on the class itself wants to initialize the class in some way. An example of this is like props on web components or event listeners on web components as well. 
So that flexibility to be able to have like an instance and an instance elements be able to add an initialization for the class as a whole would be very useful. But another motivating example motivate action, here would be that this could become a way to add metadata. So previously we broke out the metadata portion of the decorators proposal into a separate proposal and that proposal is currently at stage 2 with this change. We would be able to instance elements and static elements. They would all be able to add their metadata via initializers in the via static initializers, or class initializers. They would do this much like the same way that they currently add metadata by currently meeting like the previous, you know, stage 1 and stage, two proposals added metadata. Meaning, they would just get a reference to the class itself, static initializers and class initilisers get that and they would be able to WeakMap the metadata to the class or just a sign it as a property on the class and expose it that way. So this wouldn't mean that we would still potentially explore the metadata proposal because there would be other benefits to having a shared standard for metadata, but it would also make that proposal not as Necessary for a lot of common use cases. So we would need some of that metadata. And any of the other additional APIs, so that's that's pretty much it. Any questions? +KHG: Okay, it's okay. Sharing my screen. And so yeah, so today we just have a kind of normative update that were considering for the proposal that I wanted to bring to the committee. It was suggested RBN after the proposal, made it into stage three and basically it would change the way that initializers currently work to a little bit more flexible by allowing. users to specify when they're initializer should run from any decorator. So today, when you add an additional initializer, using the app at initializer method on the context of a decorator, that adds the initializer function to one of three possible stages of initialization, if it's an instance field or method then that initializer runs. The beginning of class, instance, initialization. So when a new instance is created, if it is static it runs. Prior to class static class field assignment. So, when the class itself is initialized and if it's a class decorator that it runs after the class has been fully initialized. So all static Fields have been assigned. This code example, kind of shows the different areas where these run essentially. So for instance. Initializers, they run before, the first class field is assigned. This is literally very close to how we transpile it actually in the Babel plugin. For instance or static fields and static initializers. They run prior to the first static bill being signed, and then for class initializes, they run right after the class is defined. so, the proposal would change it so that you could specify the three placements as the first option to add an initializer. You would be required to specify one of the displacements actually. And that would allow it to be placed, you know, in one of those three locations the main two reasons to do this. The first one is that it provides more flexibility overall. There are a number of use cases where you have for instance, and instance method or field that wants to provide do some setup on the class itself wants to initialize the class in some way. An example of this is like props on web components or event listeners on web components as well. 
So that flexibility to be able to have like an instance and an instance elements be able to add an initialization for the class as a whole would be very useful. But another motivating example motivate action, here would be that this could become a way to add metadata. So previously we broke out the metadata portion of the decorators proposal into a separate proposal and that proposal is currently at stage 2 with this change. We would be able to instance elements and static elements. They would all be able to add their metadata via initializers in the via static initializers, or class initializers. They would do this much like the same way that they currently add metadata by currently meeting like the previous, you know, stage 1 and stage, two proposals added metadata. Meaning, they would just get a reference to the class itself, static initializers and class initilisers get that and they would be able to WeakMap the metadata to the class or just a sign it as a property on the class and expose it that way. So this wouldn't mean that we would still potentially explore the metadata proposal because there would be other benefits to having a shared standard for metadata, but it would also make that proposal not as Necessary for a lot of common use cases. So we would need some of that metadata. And any of the other additional APIs, so that's that's pretty much it. Any questions? MAH: So we are supportive of this change as it really increases the flexibility. However, I think there's still benefits to explore. a solution that empowers metadata and also multiple decorators working together as it's very ergonomic to do with just initializers. and so just to increase, the ergonomics of decorators going to make these of the Creator's. It would be great to continue exploring, In addition, an opaque context were direct that data API. @@ -774,7 +771,7 @@ YSV: Yeah, I actually echo a lot of the thoughts that SYG had. I think that the RPR: That's all. Thank you. I'll let go that just on the on the timing. We have the 10-day cut off in the agenda rules. Please read them on the agenda. It's totally legit for this one to be pushed back on just for the deadline alone. Thank you. -RBN: All right. One thing that I think is kind of interesting with this. Is that the current proposal for decorators, you can add a static initializer. which runs before extra initializer. Which runs before are met static fields and a class initializer extradition laser, which runs after all the static Fields have been initialized. Of those give you the ability to run Dynamic code during the static definition of the class, for instance decorators. There's the only way that you can add something that adds a instance extra initializer is to decorate a method or a field which requires introducing a method or a field if you wanted to do something at through the instance, whereas something like a class decorator, you don’t actually have to write a declaration if you want to run something that happens during just the static definition, the class. So there is an interesting value that this provides that you could have a decorator that wants to do something per instance, but doesn't necessarily need that declaration to attach to and there's currently no way to do that with the decorators proposal without this. +RBN: All right. One thing that I think is kind of interesting with this. Is that the current proposal for decorators, you can add a static initializer. which runs before extra initializer. 
Which runs before are met static fields and a class initializer extradition laser, which runs after all the static Fields have been initialized. Of those give you the ability to run Dynamic code during the static definition of the class, for instance decorators. There's the only way that you can add something that adds a instance extra initializer is to decorate a method or a field which requires introducing a method or a field if you wanted to do something at through the instance, whereas something like a class decorator, you don’t actually have to write a declaration if you want to run something that happens during just the static definition, the class. So there is an interesting value that this provides that you could have a decorator that wants to do something per instance, but doesn't necessarily need that declaration to attach to and there's currently no way to do that with the decorators proposal without this. JHD: It be reasonable to require this string but like, throw an error if you try to add an instance initializer and you're not an instance thing and so on, so that we leave the design space open? Because if we ship now with that initializer, just taking a function argument, we would then be forced to add a string later and that may or may not be something we want. Like for me, I do think that's a second argument. diff --git a/meetings/2022-06/jun-07.md b/meetings/2022-06/jun-07.md index ba779992..1d1c8dd4 100644 --- a/meetings/2022-06/jun-07.md +++ b/meetings/2022-06/jun-07.md @@ -192,7 +192,7 @@ RBN: There should be no difference. It should not be allowed. So if that's the c WH: Okay, so the intent is that you do not allow group accessors with `{}` in either context? -RBN: That is correct. +RBN: That is correct. WH: Okay. There's another bug in which there is a class parameter on *AccessorGroup*, but it’s never set. Thus the class grammar is not never invoked for grouped accessors. @@ -212,7 +212,7 @@ RPR: There's no response. Okay. Well, shall we move on? WH: [thumbs up] -RPR: I'm interpreting that as WH done. So, moving on to KG. +RPR: I'm interpreting that as WH done. So, moving on to KG. KG: Yeah, so this is, if I have understood correctly, primarily intended to solve a use case for decorators, it's not a thing that you would particularly care to use without decorators and it's a fairly niche case at that. And decorators, of course, are expected to be written by relatively few people. They are primarily intended for library authors. Although the point that you do have to use a pair of decorators and that affects users is well taken. But given all this, I would personally be a lot more comfortable if we waited a while for decorators to ship and got experience in the real world that led us to believe that in JavaScript this was a case that comes up a lot and is therefore worth adding a bunch of syntax to the language. I would prefer not to advance this until that point, because decorators only just got stage 3 and this is a lot of syntax that we would be adding just to make a small corner of the decorators use case nicer. So I am not excited to move this to stage two right now. @@ -240,7 +240,7 @@ RBN: I'm not sure. I'm clear on what you just stated. RRD: Yeah, I'm trying to reformulate in a different way because I don't see the TypeScript example on the screen. In any case we can discuss this offline but I would really like to see what that looks like currently in TS. 
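As a reference point for the grouped-accessor discussion above, a hedged sketch of the status quo KG alludes to, assuming the stage 3 decorators API (the `tracked` decorator and `Point` class are made up for illustration): with plain accessors, a matching get/set pair has to be decorated twice, and each decorator application only sees one half of the pair.

```js
// Made-up decorator: with stage 3 decorators, `context.kind` is "getter" for one
// application and "setter" for the other, so correlating the pair (for example by
// `context.name`) is left entirely to the decorator author.
function tracked(value, context) {
  console.log(context.kind, context.name); // logs "getter x" and "setter x"
  return value; // returning the original accessor leaves behavior unchanged
}

class Point {
  #x = 0;
  @tracked get x() { return this.#x; }
  @tracked set x(value) { this.#x = value; }
}
```

This needs a toolchain or engine that implements the stage 3 decorators proposal; it is only meant to show why a single "grouped" declaration that one decorator can see as a whole is attractive.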
-MM: I don't remember this going to stage one, not disputing it, but I'm not the only person who is surprised. I got a private message - when did this go to Stage 1? So the stage one consideration of this is something that I missed was going on. So some comments, a clarifying question and then I'll State my opinion. First of all, with regard to the issue that WH raises. About opinionated APIs. I am not of the anti-opinionated religion. I believe that as language designers We do opinionated design all the time and we should and that the main place where our opinions are most valuable is in not making the language unnecessarily accident prone. so the the rationale that you, Ron, had about not having a public set and private get, I think that rationale is a good rationale. Whether I agree with that in this case, that's a separate matter, but I don't disapprove of it because it’s opinionated. The other thing I want to compliment you on is in your presentation, your case analysis is quite thorough and led to a much better understanding of the overall rationale, so I want to applaud you on that. +MM: I don't remember this going to stage one, not disputing it, but I'm not the only person who is surprised. I got a private message - when did this go to Stage 1? So the stage one consideration of this is something that I missed was going on. So some comments, a clarifying question and then I'll State my opinion. First of all, with regard to the issue that WH raises. About opinionated APIs. I am not of the anti-opinionated religion. I believe that as language designers We do opinionated design all the time and we should and that the main place where our opinions are most valuable is in not making the language unnecessarily accident prone. so the the rationale that you, Ron, had about not having a public set and private get, I think that rationale is a good rationale. Whether I agree with that in this case, that's a separate matter, but I don't disapprove of it because it’s opinionated. The other thing I want to compliment you on is in your presentation, your case analysis is quite thorough and led to a much better understanding of the overall rationale, so I want to applaud you on that. MM: And a clarifying question. Could you go back to the slide that was up just a short while ago where you're showing both the class literal and the object literal? Yes. Yes. Thank you. So in the class case, if I understand this syntax would result in an implicit declaration of a #x variable. Correct? @@ -436,7 +436,7 @@ USA: Right. Well, as from the point of view of a proposal author, this would be SYG: I'm just super confused now. DurationFormat is already its own proposal. My understanding of the situation is that it's already its own proposal. There was bug, then fix up the bug, created entanglement with temporal. This is attempt to disentangle? -USA: one minor clarification, it was somewhat entangled even before the bug fix and the bug fix only exacerbated that and made it more evident that we needed to disentangle? And yeah, it is its own proposal. But now the question is, does it split up into two proposals? One of them blocked on temporal, does it transfer functionality to temporal. thing that I didn't consider was just does it State's own proposal and and get implemented in parts? +USA: one minor clarification, it was somewhat entangled even before the bug fix and the bug fix only exacerbated that and made it more evident that we needed to disentangle? And yeah, it is its own proposal. 
But now the question is, does it split up into two proposals? One of them blocked on temporal, does it transfer functionality to temporal. thing that I didn't consider was just does it State's own proposal and and get implemented in parts? DE: So this is all about abstract operation logic how that's distributed among the specs, you know, for private class features. I did a lot of that moving around and all it cost was confusion for everyone. reading the specifications. So I'd recommend against doing that. I think you should probably just leave it in one place and make the reference from one document to the other. Even if they're both at stage 3. Like I think both these proposals are going to happen. @@ -523,7 +523,7 @@ YSV: Thanks for the presentation and all the work that you've done here. I think JHX: Thank you. -JWK: I think the concern of MAH is not a problem for me. In the old-style, the function.sent only available in the function scope and it's a bit like “this”. If you want to use it in a nested context, it will be very annoying. This new form of syntax we can introduce binding so you can assess it in the nested functions. I also agree with YSV, this proposal looks like it does not have a very useful use case because generators are less used. But I also appreciate the new syntax design for this problem. +JWK: I think the concern of MAH is not a problem for me. In the old-style, the function.sent only available in the function scope and it's a bit like “this”. If you want to use it in a nested context, it will be very annoying. This new form of syntax we can introduce binding so you can assess it in the nested functions. I also agree with YSV, this proposal looks like it does not have a very useful use case because generators are less used. But I also appreciate the new syntax design for this problem. JHX: Yes. Thank you. I'll probably use case. I have a one another comments about that because of the time are not list all the use cases. Actually. There are some other use real-world use case, case. For example, the crank. JS framework, they use the generators to and the JSX to replace the react. It is their design. They are some small problem in which the court in general cannot provide, but I do not have time to, to explain the that use case. I would like to write it and in the repo, and we can't discuss it in that in that. Okay. okay. Thank you @@ -595,7 +595,7 @@ Presenter: HE Shi-Jun (JHX) - [proposal](https://github.com/hax/proposal-this-parameter) - [slides](https://johnhax.net/2022/this-param/slide) -JHX: This parameter for stage 1. Yeah, I think it's we were familiar with it. See if you already use TypeScript. It's TypeScript at `this` parameter syntax and what this proposal want do is just allow this syntax, so typescript will just this is just added in the type of notation on the disk parameter of JavaScript is perameter, okay? Currently TypeScript and the Flow type already support this syntax and the Java also have this syntax. There is a old proposal written by Gilbert six years ago, Even before T has added this type check, that proposal is much bigger proposal includes renaming and destruction exists. The main motivation of that proposal is solved this This confusion issue because this is always shadowing this. This proposal was presented on two years ago, but not advanced because I believe that the deck is have some concerns about whether the same syntax would be renaming. A data structuring the existing syntax could really solve the issue. 
And it seems it added too many syntax. And this this Farm is also included the type annotation proposal, which are the ones to to stage one in last meeting. But Strictly speaking. This form itself is not a type anotation. It's it's it is included in type of notation proposal because you need this syntax to make the type annotation of this parameter work. And another problem is I don't taste type annotation proposal is I believe it's just that proposal would try to avoid introduce any runtime semantics, but but I think that this parameter it could have the runtimes semantics. Who it's maybe it's better to to spec in a separate proposal. So this proposal. Drops the features of renaming and it's a structuring of the proposal and focuses. Is the same syntax and the runtime semantics? the motivations of the motivation is has. I think we it's a good thing to standardize the Syntax for TS and Flow, which are already supporting square of well simplify the toolchains and narrow the gap between JavaScript and a TypeScript ecosystem. Newcomers always have some confusion: “What what is the feature of JavaScript and what is the feature of TypeScript?” and a single with back to narrow. This is this is this permanent have been discussed in the last meeting us Just also, if aught of reducing the syntax burdens of type annotation proposal, because in the discussion, think many directory syntax. The current type annotation proposal includes too many syntax, so it's bad. As to, reduce syntax, who with back-to-back the syntax. If it's fit for the separate proposal. Usage type of meditation. Yeah, do we? Are they were doubts types will have flow. You can write code like this. You can go take this, you can add type for this parameter. Here. We we add a type for the handle. Click that. So the this should be HTML elements and also the decorator or this it's really, so I expect we will revisit the parameter decorators in the future. if we have parameter decorator, it should be be able to decorate to the this parameter here. In this example we use the decorator to do a runtime type guard. Instead, of compile-time type annotation. The earlier it's not this, it's just a follow the current TS and Flow errors. So the this parameter should be in the first position and error function. should. Now not have this parameter because our function always use the black Shu's is and the Constructor to Constructor. Also do not have this parameter because it is not argument. The `Reflect.construct` is very clear that they are no this arguments with, it's not the this in the Constructor, always generate magically or from the superclass. it down the runtime errors. during runtime errors also match the flow behavior. For example here. Because I think we should add. We should make them early. If we use, it is for me to issues should not have the [[construct]] internal method. So it's just a circle type error here and another runtimes a magical week. We think we can introduce is If it's, it's used this farming to makes it expects to be called with this arguments passed in. So if you direct calls, its we could just throw type error here. So if you can only use the call apply or you, add it to an object to and make it like a work. Like a method this weekend chords. So, just just introduce another extra motivation that provides syntax semantics for method. In JavaScript as a spec if you have a property, which is a function. It's just method the, but us developer. I think most developers methods are the front concept. 
That's that that there are three different type of functions the Constructor, which is actually know this argument and no master that you can order this Armenians and the method it expects this argument that the espalier disarming parsing and ends in the method body if we will use some property on the JavaScriot source. Stand it to some other place since you like that. Before, before es6, we only have the function which when you declare a function it plays three roles, but after es6 we have a dedicated Syntax for constructing a class. We also have the data case in telephone number method of the arrow functions, the always ignored existing this argument. It's always used lexical this but we do not have that. This in-house for method means method. What I mean method here, is we expected to these arguments. So we could use this parameter to for the dedicated syntax for methods. This allows programmers to explicit mark a function as a method. It for you. For example, we have a method here, but actually it's not easy to recognize weather some method on the because the use this thing a very deep nasty, and there are many code. And so it's very easy to miss use that. It's actually not So, here are around used, but it's not easy to discover that say especially We use this here conditionally, so it's possible here, here, the to not really use this. So it will not generally generate any runtime errors. So but sometimes it's well, sometimes it works. So this distance this is where hard for track the to discover the bug in the first place. But if we use the this parameter here and if we could have the runtime semantics I described before, we just got a TypeError and this also have the type error because of how it will work cards directly so it will help the help the developer to first. It's very to recognize. This is a method too. If we use its misuse that it will give the error as early as possible. Another usage is static class. Static methods in TypeScript also can't have these arguments. So in many other programming languages that mass would work. Well not. It cannot use this. So in JavaScript, that's that's a I think I'm, see me several times that people use the stack stack Master, which are used exist. Use this to to get water passes but usually going wrong way. So like this. Now, we add this here, it's just so type error and help to catch the error and you write code like that too explicitly mark this is a special stacked nicely though. It's the static but I still need this still need this parameter here. So make the intention clear. It's also useful to extension. And of course this proposal, for example, this is you can use it in court this proposal here and if you use it like this is Justice. Wrote a book in the discussion the causes proposal. There are many concerns about because the call-this proposal allows you to spread that the methods all over, we are and we have to concern about whether this leads to increased confusion. But at least I think is this could be mitigated by the this parameter for me. So if it's a method, or you can only use, it's like this. So I think it helped to solve the problem. Similarly, the extension proposal it do not use that because it already throws reference error here, but still, it could could be reversible to utilizing the this parameter. So it could only accept the methods which declared in this parameter syntax. If it's Just the so type print out here. It is. Even give a stronger protection. 
Okay, this summary of this proposal is adopted the TS and Flow syntax and allow you a note taker and decoratives argument and it provides methods syntax and the semantics. OK, OK, that's it. So let's check the queue. +JHX: This parameter for stage 1. Yeah, I think it's we were familiar with it. See if you already use TypeScript. It's TypeScript at `this` parameter syntax and what this proposal want do is just allow this syntax, so typescript will just this is just added in the type of notation on the disk parameter of JavaScript is perameter, okay? Currently TypeScript and the Flow type already support this syntax and the Java also have this syntax. There is a old proposal written by Gilbert six years ago, Even before T has added this type check, that proposal is much bigger proposal includes renaming and destruction exists. The main motivation of that proposal is solved this This confusion issue because this is always shadowing this. This proposal was presented on two years ago, but not advanced because I believe that the deck is have some concerns about whether the same syntax would be renaming. A data structuring the existing syntax could really solve the issue. And it seems it added too many syntax. And this this Farm is also included the type annotation proposal, which are the ones to to stage one in last meeting. But Strictly speaking. This form itself is not a type anotation. It's it's it is included in type of notation proposal because you need this syntax to make the type annotation of this parameter work. And another problem is I don't taste type annotation proposal is I believe it's just that proposal would try to avoid introduce any runtime semantics, but but I think that this parameter it could have the runtimes semantics. Who it's maybe it's better to to spec in a separate proposal. So this proposal. Drops the features of renaming and it's a structuring of the proposal and focuses. Is the same syntax and the runtime semantics? the motivations of the motivation is has. I think we it's a good thing to standardize the Syntax for TS and Flow, which are already supporting square of well simplify the toolchains and narrow the gap between JavaScript and a TypeScript ecosystem. Newcomers always have some confusion: “What what is the feature of JavaScript and what is the feature of TypeScript?” and a single with back to narrow. This is this is this permanent have been discussed in the last meeting us Just also, if aught of reducing the syntax burdens of type annotation proposal, because in the discussion, think many directory syntax. The current type annotation proposal includes too many syntax, so it's bad. As to, reduce syntax, who with back-to-back the syntax. If it's fit for the separate proposal. Usage type of meditation. Yeah, do we? Are they were doubts types will have flow. You can write code like this. You can go take this, you can add type for this parameter. Here. We we add a type for the handle. Click that. So the this should be HTML elements and also the decorator or this it's really, so I expect we will revisit the parameter decorators in the future. if we have parameter decorator, it should be be able to decorate to the this parameter here. In this example we use the decorator to do a runtime type guard. Instead, of compile-time type annotation. The earlier it's not this, it's just a follow the current TS and Flow errors. So the this parameter should be in the first position and error function. should. 
Now not have this parameter because our function always use the black Shu's is and the Constructor to Constructor. Also do not have this parameter because it is not argument. The `Reflect.construct` is very clear that they are no this arguments with, it's not the this in the Constructor, always generate magically or from the superclass. it down the runtime errors. during runtime errors also match the flow behavior. For example here. Because I think we should add. We should make them early. If we use, it is for me to issues should not have the [[construct]] internal method. So it's just a circle type error here and another runtimes a magical week. We think we can introduce is If it's, it's used this farming to makes it expects to be called with this arguments passed in. So if you direct calls, its we could just throw type error here. So if you can only use the call apply or you, add it to an object to and make it like a work. Like a method this weekend chords. So, just just introduce another extra motivation that provides syntax semantics for method. In JavaScript as a spec if you have a property, which is a function. It's just method the, but us developer. I think most developers methods are the front concept. That's that that there are three different type of functions the Constructor, which is actually know this argument and no master that you can order this Armenians and the method it expects this argument that the espalier disarming parsing and ends in the method body if we will use some property on the JavaScriot source. Stand it to some other place since you like that. Before, before es6, we only have the function which when you declare a function it plays three roles, but after es6 we have a dedicated Syntax for constructing a class. We also have the data case in telephone number method of the arrow functions, the always ignored existing this argument. It's always used lexical this but we do not have that. This in-house for method means method. What I mean method here, is we expected to these arguments. So we could use this parameter to for the dedicated syntax for methods. This allows programmers to explicit mark a function as a method. It for you. For example, we have a method here, but actually it's not easy to recognize weather some method on the because the use this thing a very deep nasty, and there are many code. And so it's very easy to miss use that. It's actually not So, here are around used, but it's not easy to discover that say especially We use this here conditionally, so it's possible here, here, the to not really use this. So it will not generally generate any runtime errors. So but sometimes it's well, sometimes it works. So this distance this is where hard for track the to discover the bug in the first place. But if we use the this parameter here and if we could have the runtime semantics I described before, we just got a TypeError and this also have the type error because of how it will work cards directly so it will help the help the developer to first. It's very to recognize. This is a method too. If we use its misuse that it will give the error as early as possible. Another usage is static class. Static methods in TypeScript also can't have these arguments. So in many other programming languages that mass would work. Well not. It cannot use this. So in JavaScript, that's that's a I think I'm, see me several times that people use the stack stack Master, which are used exist. Use this to to get water passes but usually going wrong way. So like this. 
Now, we add this here, it's just so type error and help to catch the error and you write code like that too explicitly mark this is a special stacked nicely though. It's the static but I still need this still need this parameter here. So make the intention clear. It's also useful to extension. And of course this proposal, for example, this is you can use it in court this proposal here and if you use it like this is Justice. Wrote a book in the discussion the causes proposal. There are many concerns about because the call-this proposal allows you to spread that the methods all over, we are and we have to concern about whether this leads to increased confusion. But at least I think is this could be mitigated by the this parameter for me. So if it's a method, or you can only use, it's like this. So I think it helped to solve the problem. Similarly, the extension proposal it do not use that because it already throws reference error here, but still, it could could be reversible to utilizing the this parameter. So it could only accept the methods which declared in this parameter syntax. If it's Just the so type print out here. It is. Even give a stronger protection. Okay, this summary of this proposal is adopted the TS and Flow syntax and allow you a note taker and decoratives argument and it provides methods syntax and the semantics. OK, OK, that's it. So let's check the queue. USA:You have close to ten minutes and a big queue. @@ -663,7 +663,7 @@ JHX: Possibly, not only the errors, but also it allowed people to mark the inten SYG: Like, there is so I would not say stage 1 for anything shown in the slides here, basically. So, I'm not entirely like I understand that stage one can be just for a problem statement, but it's like I do not really understand a concise problem statement. because, as MAH said, if the if the problem statement is about like exploring that problem space of Intent. Like that sounds like syntax, right? It's like how do we get a more scoped problem statement? That is not like I'm not convinced. Okay, let me rephrase this. I'm not convinced that there is a solution to the problem statement that you have said that I would agree to for stage two. Therefore, I feel uncomfortable agreeing to stage one. -USA: there is Mark on the Queue but we're way over time I think, think +USA: there is Mark on the Queue but we're way over time I think, think MM:, it mine is not essential I pass. diff --git a/meetings/2022-06/jun-08.md b/meetings/2022-06/jun-08.md index 96f5a1af..ed31d32f 100644 --- a/meetings/2022-06/jun-08.md +++ b/meetings/2022-06/jun-08.md @@ -46,13 +46,13 @@ WH: I just want to clarify something. You said in the presentation that possessi RBN: So backtracking within a possessive quantifier is still valid. Primarily it's not supported because every engine that I've checked that supports this does not support it for an exact match in quantifiers, so I'm trying to not go too far from the trend. And if there is a case where you do want that behavior, you can still use the atomic group is a concern about whether or not it should be supported just because of consistency and I can be that could be argued for introduced or allowing syntax. - WH: Okay. So if they’re omitted because they’d be a no-op in other engines, then that brings up a question of what possessive quantifiers actually do in other engines. There are two possible interpretations and they differ in behavior. 
One is that a possessive quantifier is just syntactic sugar for turning off backtracking and having a regular quantifier inside there, which is what you presented. The other alternative is that the possessive quantifier does not backtrack on the number of things it matches, but it does backtrack into its contents. Those would be different semantics. I don't know what other engines do. Do you know?
+WH: Okay. So if they’re omitted because they’d be a no-op in other engines, then that brings up a question of what possessive quantifiers actually do in other engines. There are two possible interpretations and they differ in behavior. One is that a possessive quantifier is just syntactic sugar for turning off backtracking and having a regular quantifier inside there, which is what you presented. The other alternative is that the possessive quantifier does not backtrack on the number of things it matches, but it does backtrack into its contents. Those would be different semantics. I don't know what other engines do. Do you know?

RBN: So let me see if I can find that.

WH: The case I gave on the queue distinguishes those two (`/(a*){1}a{3}/.exec("aabaaaaa")` vs `/(a*){1}+a{3}/.exec("aabaaaaa")`). None of the examples on the slides distinguish those two interpretations, but that case does distinguish those two.

-RBN: The intended behavior is that, as far as I recall, possessive quantifiers should essentially act as if you had wrapped them in an atomic group. Backtracking still should occur within the atomic group, but not at the boundary. So I believe that - I have to parse through the example you have here - so, if this were a regular atomic group wrapping the repeat of `a`, the repeat of `a` would match, and since it matches successfully and there is nothing to the right of `a` within its atomic group, as it were, it would match successfully, and then the `a{3}` would have to match separately from whatever the quantified `a*` consumed. So yes, these two things would have different behavior. If the plus were allowed as a quantifier here, the question is how often this is actually the case. You don't usually see a case of "I want to repeat `a*` once". That's usually the reason why - at least I imagine - other engines don't support it for a fixed-length quantifier: it's not a very common case and you're probably doing something wrong. So it's not that it wouldn't work. The behavior would be the same as it would be for the atomic group case. It's just that it's probably not an intended use.
+RBN: The intended behavior is that, as far as I recall, possessive quantifiers should essentially act as if you had wrapped them in an atomic group. Backtracking still should occur within the atomic group, but not at the boundary. So I believe that - I have to parse through the example you have here - so, if this were a regular atomic group wrapping the repeat of `a`, the repeat of `a` would match, and since it matches successfully and there is nothing to the right of `a` within its atomic group, as it were, it would match successfully, and then the `a{3}` would have to match separately from whatever the quantified `a*` consumed. So yes, these two things would have different behavior. If the plus were allowed as a quantifier here, the question is how often this is actually the case. You don't usually see a case of "I want to repeat `a*` once". That's usually the reason why - at least I imagine - other engines don't support it for a fixed-length quantifier: it's not a very common case and you're probably doing something wrong. So it's not that it wouldn't work. The behavior would be the same as it would be for the atomic group case. It's just that it's probably not an intended use.

WH: Yes, if that is what other engine semantics actually do for possessive quantifiers. I want to double-check this because I don't know.

@@ -66,7 +66,7 @@ WH: Yeah, in that case my comment would be that we shouldn't disallow {n}+, but

MAH: Yeah, I have to admit, my knowledge of regular expressions is just about using them, not much about implementation, but I am wondering if this would allow us to identify, by static analysis, a subset of regexes that would then be guaranteed to never catastrophically backtrack.

-RBN: I can't speak to that. I do know a co-worker who has a contact whose group has researched the static analyzability of regular expressions to determine these types of cases, and their work is often used in various tools to actually recognize specific cases of catastrophic backtracking in regular expressions. I do not know if this could be used for that. I do know that if you compose a regular expression that consists only of atomic operations, correctly, it would be possible to recognize that it does not catastrophically backtrack.
I also know that it's possible to use at least within a regular expression engine static analysis to determine that if you have no backtracking that you can actually perform certain for use heuristics, that determine you don't have the ability to backtrack, that you can avoid some operations and actually significantly improve performance for other cases such as the CVE that I brought up in the last last time I presented this suffered from two issues. One was that, it was for trim new lines and matching every single new line character and then filling to match the end of the string was catastrophic both because the result wasn't Atomic and because the, our behavior for matching - if it fails to match, we then advance the index by 1. So that's something that if you have the ability to static analysis to avoid backtracking, then you can have heuristics, which significantly improve that. But I think as mentioned by WH, you can check something is definitely not going to backtrack, but what you can't do is know if something is unlikely to backtrack that there are you can compose regular expressions? Where the is possibly confusing and certainly depends on inputs. MAH: Right before - tend to think this enables to write more regular expression with what syntax that or with behavior that people are used to such as matching multiple characters and to know that for sure this will not backtrack. I'm wondering if that was clear. You said you might be able to identify that some expressions will for sure never backtrack. I'm not asking backtrack. I'm not asking to allow all regular expression syntax, and make sure that those will never backtrack. I'm just wondering if it's possible to identify a subset of regular expressions and be sure those will never backtrack. @@ -161,7 +161,7 @@ GB: Then in terms of supporting WebAssembly.Module as a reflection, to just gett GB: So yeah, module blocks. Another nice thing about this new module instance structure, is that you could in theory take any module such as module block. So we could have all these different types of modules on the platform, like a block or whatever. And you could pass a modular block, into `new ModuleInstance` as well, and at the moment module blocks are singular in that, there's one instance in a given context that you import. Whereas, if we had this module instance machinery, you could actually multiply instantiate a single module block. You have multiple instances out of a single module block or multiple linkages out of a single module block. So you could do things like, use as a mocking process from mocking libraries. Have it built up for your tests and then throw that for the stuff away once you're done with it. And again, there's no path dependence on these things in the way that this is kind of being suggested. These kind of features are optional and additive to both specifications. I think that's quite enough thing to think about, in terms of the layering of all this stuff and how it integrates. Is that it as long the paths are open and there are ways, then things can kind of move at their own different pieces around this. -GB: So, in relation to compartments, there would definitely be a huge amount of benefit if we could share the module and instance definitions with compartments. And so, that's a big question and discussion and I've discussed briefly with Kris some of these details, but that's something we need to definitely discuss further. 
So, what isn't included in this current suggested design is custom global environments, linking boundaries, or anything to do with the loader definitions, but those things could potentially be seen as additive to this kind of minimal reflection of what our module reflection might look like. So there's a few ways to go about it. We could possibly specify this very basic JS reflection as part of our reflection work with this proposal. Alternatively we could just treat it as an arbitrary reflection and rather shift that to the compartments side, and rather say that - so then there's those kind layering discussions to be had. +GB: So, in relation to compartments, there would definitely be a huge amount of benefit if we could share the module and instance definitions with compartments. And so, that's a big question and discussion and I've discussed briefly with Kris some of these details, but that's something we need to definitely discuss further. So, what isn't included in this current suggested design is custom global environments, linking boundaries, or anything to do with the loader definitions, but those things could potentially be seen as additive to this kind of minimal reflection of what our module reflection might look like. So there's a few ways to go about it. We could possibly specify this very basic JS reflection as part of our reflection work with this proposal. Alternatively we could just treat it as an arbitrary reflection and rather shift that to the compartments side, and rather say that - so then there's those kind layering discussions to be had. GB: And then deferred modules as well. So, when you're importing a module reflection because it hasn't been evaluated. You are effectively lazily loading that module in a sense, but it's not a comprehensive load because you're not loading and resolving the dependencies of that module. So for most pre-loading or lazy loading scenarios, you probably want it to be pre-loading the entire module graph or doing work at that entire module graph level for the actual execution instance. And so we do think these pre-loading and deferred evaluation problems are best seen as separate to this kind of reflection work, at least as far as we've been able to dig into the problem space. @@ -169,11 +169,11 @@ GB: So in terms of the actual host hooks that would be exposed, as mentioned, th GB: So what do we get out of this reflection? Well, firstly and the driving use case, ideally with just a simple reflection mechanic, it allows webassembly to define that it's going to permit this reflection in the platform. And that's something that is needed at the moment for the wasm integration and can be unblocked by this work and something that can start moving forward. And that's the kind of immediate use case. `` So yeah with the invariants of the hook we could then state that certain reflections are reserved for es262, that the module reflection for a JS module is reserved and then we could add this JS reflection at any point of time. As I say, we would be happy to specify something minimal in this specification or not. Either can work for us. And then it also permits new host defined reflection types in the future and we can maybe have some wording about what sort of Reflections are permitted. But there could be some other interesting reflection types enabled and the second example that we want to bring up is asset reflection. 
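To make the wasm use case concrete before the discussion moves on to asset reflection: a module reflection would hand you a compiled-but-uninstantiated module that can be linked and instantiated any number of times. The sketch below uses today's standard `WebAssembly` API to illustrate that shape; the `import module` line is only the proposed syntax under discussion, and `./lib.wasm` and its `env.log` import are illustrative assumptions, not part of any shipped feature.

```js
// Proposed import reflection (illustrative only, not shipped syntax):
//   import module lib from "./lib.wasm";
//
// The same kind of object can be obtained imperatively today:
const lib = await WebAssembly.compileStreaming(fetch("./lib.wasm"));
// `lib` is a WebAssembly.Module: compiled, but not linked or evaluated.

// Multiple instantiations of the single module, each with its own imports and state.
const a = new WebAssembly.Instance(lib, { env: { log: (x) => console.log("A:", x) } });
const b = new WebAssembly.Instance(lib, { env: { log: (x) => console.log("B:", x) } });
```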
-LCA: So asset reflection, this is based on the asset references proposal, that was presented in 2018, if I recall correctly, which provides a way to get an unforgeable reference to an asset by means of creating an essentially wrapper object around a results but specifier there's unforgeable using static syntax. So this is what the syntax look like in that proposal. You replace the import keyword with asset keyword and And you can use this asset reference then to pass that to APIs that already take resource identifiers such as fetch or import. And currently, this is often done by using new URL and -import.meta.url. This does not really work very well though, because a) it does not actually go through the proper host resolver. I'd rather it just, it only works if the host resolver, only uses the ultimate (?), which is not the case of things like node.js or if you're using import maps, for example, where a specifier could map to There's some results. Specifier amended is. And then b) it's also dynamic, which is difficult to statically analyze versus a static syntax. You want to use static syntax for this because we want to make it easier for bundlers to find these references. So they can process them for their work. What I'm going to show is that this asset reference is really also just another reflection. So asset reference can be represented. as part of this, as a reference or import Reflections proposal because as a references are really just a reflection of the assets prior to loading the asset. So they are they take the resolved, specifier wrap that in an opaque object and don't actually perform load. And this means that one could perform the - one could have the asset references proposal happen without requiring additional syntax, it could just be another reflection that is part of the input reflections proposal or later and you could even be done in like as a host extension outside of TC39, for example, in HTML. +LCA: So asset reflection, this is based on the asset references proposal, that was presented in 2018, if I recall correctly, which provides a way to get an unforgeable reference to an asset by means of creating an essentially wrapper object around a results but specifier there's unforgeable using static syntax. So this is what the syntax look like in that proposal. You replace the import keyword with asset keyword and And you can use this asset reference then to pass that to APIs that already take resource identifiers such as fetch or import. And currently, this is often done by using new URL and -import.meta.url. This does not really work very well though, because a) it does not actually go through the proper host resolver. I'd rather it just, it only works if the host resolver, only uses the ultimate (?), which is not the case of things like node.js or if you're using import maps, for example, where a specifier could map to There's some results. Specifier amended is. And then b) it's also dynamic, which is difficult to statically analyze versus a static syntax. You want to use static syntax for this because we want to make it easier for bundlers to find these references. So they can process them for their work. What I'm going to show is that this asset reference is really also just another reflection. So asset reference can be represented. as part of this, as a reference or import Reflections proposal because as a references are really just a reflection of the assets prior to loading the asset. 
So they are they take the resolved, specifier wrap that in an opaque object and don't actually perform load. And this means that one could perform the - one could have the asset references proposal happen without requiring additional syntax, it could just be another reflection that is part of the input reflections proposal or later and you could even be done in like as a host extension outside of TC39, for example, in HTML. LCA: Yeah, so that's the presentation for today. We'd like to start the discussion now, so we'd like feedback on the overall shape of the proposal about the JS reflection API that guy presented and about how this interacts with us, it reflection and also how this interacts with all the other proposals that we mentioned module blocks compartments and similar. How this? Yeah, this layering between those proposals and we're looking for stage two reviewers. So, is there anything on the queue? -KKL: Thank you for the presentation. As you pointed out there is a lot of overlap with the compartments proposal, which is stage one and we invite you both to join the champion group for that since there's so much well considered material between this presentation and what we've accumulated for the compartments proposal. The compartments proposal, just by way of update for this group, the champion group has decided to limit the scope of that proposal to just solving the problem of JavaScript’s missing module loader API, so, evaluating modules in general. And integration with WASM is part of the scope of the concerns that we've been considering over the last couple of years. The only portion of this presentation not covered by the compartments proposal as-writ is a mechanism for statically analyzing a non-executed dependency. That is to say expressing a dependency for which you wish to defer execution, which is super useful as pointed out for bundling use cases and such where you want to execute later but declare that dependency so that it is statically analyzable and so that the bundler can retrieve the transitive deps. Compartments do do a few things relevant to this proposal. For example, they do already reflect static module records. And as this proposal proposes. We are proposing a separation of module instances from the reification, and the replication module environment, records. The shared loader, which compartments have shared loader caches, which do not necessarily refer to a static, a synthetic static module record. So it is possible. There are complementary semantics in compartment proposal that would use, for example, if you were to use, if you were to use the import reflection to State a dependency that you do not wish to execute you, if that would be beneficial in combination with using compartment to pass the cached static module record to another compartment where could be executed later or possibly multiple times in multiple compartments, which is of course also relevant to hot module replacement. The compartments proposal has no less power but it does encapsulate a few more concerns, and that is something that we're open to iterating upon. The compartments proposal hides linkage as a concern, and it doesn't reduce the power of the proposal. Anything that could be linked, before can be linked in with compartments, but that's something that we'd like to discuss as well. 
And as mentioned, we can already linked WASM with a synthetic or third-party static module record in the compartments proposal as written, but that does not necessarily solve - but the compartments proposal does not necessarily to reify that synthetic module record, we could in a complimentary amendment be able to take a host's wasm static module record, which is not reified and pass it to another compartment. Moddable XS's compartments actually depend upon this feature, because they never went in compiling JavaScript for an embedded system's ROM. They never reify the static module record and they exclude the sources from the ROM. So you just get compiled JavaScript in the ROM, but they can still use compartments to pass the static module records from compartment to compartment, which is very useful for their needs as well. And compartments also answer the question of asset reflection with synthetic static module records. And so yeah, again to re-emphasize, for the portions of this proposal that overlap compartments we would really very much like to join our efforts and we're going to attempt to present and request stage 2 for compartments at the next meeting. I believe toward the end of July. And that's what I've got. Thank you. +KKL: Thank you for the presentation. As you pointed out there is a lot of overlap with the compartments proposal, which is stage one and we invite you both to join the champion group for that since there's so much well considered material between this presentation and what we've accumulated for the compartments proposal. The compartments proposal, just by way of update for this group, the champion group has decided to limit the scope of that proposal to just solving the problem of JavaScript’s missing module loader API, so, evaluating modules in general. And integration with WASM is part of the scope of the concerns that we've been considering over the last couple of years. The only portion of this presentation not covered by the compartments proposal as-writ is a mechanism for statically analyzing a non-executed dependency. That is to say expressing a dependency for which you wish to defer execution, which is super useful as pointed out for bundling use cases and such where you want to execute later but declare that dependency so that it is statically analyzable and so that the bundler can retrieve the transitive deps. Compartments do do a few things relevant to this proposal. For example, they do already reflect static module records. And as this proposal proposes. We are proposing a separation of module instances from the reification, and the replication module environment, records. The shared loader, which compartments have shared loader caches, which do not necessarily refer to a static, a synthetic static module record. So it is possible. There are complementary semantics in compartment proposal that would use, for example, if you were to use, if you were to use the import reflection to State a dependency that you do not wish to execute you, if that would be beneficial in combination with using compartment to pass the cached static module record to another compartment where could be executed later or possibly multiple times in multiple compartments, which is of course also relevant to hot module replacement. The compartments proposal has no less power but it does encapsulate a few more concerns, and that is something that we're open to iterating upon. The compartments proposal hides linkage as a concern, and it doesn't reduce the power of the proposal. 
Anything that could be linked, before can be linked in with compartments, but that's something that we'd like to discuss as well. And as mentioned, we can already linked WASM with a synthetic or third-party static module record in the compartments proposal as written, but that does not necessarily solve - but the compartments proposal does not necessarily to reify that synthetic module record, we could in a complimentary amendment be able to take a host's wasm static module record, which is not reified and pass it to another compartment. Moddable XS's compartments actually depend upon this feature, because they never went in compiling JavaScript for an embedded system's ROM. They never reify the static module record and they exclude the sources from the ROM. So you just get compiled JavaScript in the ROM, but they can still use compartments to pass the static module records from compartment to compartment, which is very useful for their needs as well. And compartments also answer the question of asset reflection with synthetic static module records. And so yeah, again to re-emphasize, for the portions of this proposal that overlap compartments we would really very much like to join our efforts and we're going to attempt to present and request stage 2 for compartments at the next meeting. I believe toward the end of July. And that's what I've got. Thank you. GB: If I could just respond to that briefly, this is something that we would be hoping to be able to get stage progression soon for, as well. As mentioned module reflection is primarily the mechanic of reflection. And certainly, I think There's some some really interesting collaboration work and I look forward to working with you on that Kris. What this brings up in these cross-cutting concerns is, I think, firstly, the primary kind of forcing function of reflection being that it kind of makes this stuff static in the module system and gives you this these static security properties that we kind of need today for wasm. And then secondly, it's something which - there are certain constraints on what we need from source text module records and these module instances in order to be able to interop with this kind of a model. And so I think using this illustrative example to say these are the constraints and this is these are the static guarantees we need. But then yeah, if we can put our heads together and work out how expose the layer that kind of reflection. That would be great. diff --git a/meetings/2022-07/jul-19.md b/meetings/2022-07/jul-19.md index 16e549d1..23a4ba51 100644 --- a/meetings/2022-07/jul-19.md +++ b/meetings/2022-07/jul-19.md @@ -2,7 +2,7 @@ ----- -**In-person attendees:** +**In-person attendees:** | Name | Abbreviation | Organization | | -------------------- | -------------- | ------------------ | | Bradford C. Smith | BSH | Google | @@ -66,32 +66,32 @@ IS: Okay, now I would like to welcome the two new TC39 members. I don't think th IS: So please go to the next slide. Okay, so this is the usual meeting participation table. Go immediately to the next slide because I am only showing the last entry from the last meeting. -IS: You will see that we had 59 remote participants representing 23 companies. Three invited experts. And you can see that the dropping of the participants is not unusual in a June TC39 meeting because basically ES2022 has been completed from the TC39 point of view. And then we will again see a bump going up after the summer meetings. So, that's the reason why we have had relatively low participation. 
But still 59 is a very big participation number for an Ecma technical committee. +IS: You will see that we had 59 remote participants representing 23 companies. Three invited experts. And you can see that the dropping of the participants is not unusual in a June TC39 meeting because basically ES2022 has been completed from the TC39 point of view. And then we will again see a bump going up after the summer meetings. So, that's the reason why we have had relatively low participation. But still 59 is a very big participation number for an Ecma technical committee. -IS: So this is the list of Ecma TC39 standards download statistics. I will be very, very quick on that because the trend I'm showing to you is still the same as what we had. There were close to 60000 downloaded Ecma standards and half of them or more than half of them were TC39 standards. +IS: So this is the list of Ecma TC39 standards download statistics. I will be very, very quick on that because the trend I'm showing to you is still the same as what we had. There were close to 60000 downloaded Ecma standards and half of them or more than half of them were TC39 standards. - IS: Okay, So these are for ECMA-262. So for the main language standards and for the different editions on the left hand side, it is the html-access part, right hand side, it is the download. What we have seen in past meetings is that the first edition-numbers of download are not correct, that's the reason why we have “?” marks there. I can only say with confidence, the latest editions of ECMA-262 are ok for the html-access figures. For the download version we don't have the problem of the numbers of the first editions. So, that's correct. +IS: Okay, So these are for ECMA-262. So for the main language standards and for the different editions on the left hand side, it is the html-access part, right hand side, it is the download. What we have seen in past meetings is that the first edition-numbers of download are not correct, that's the reason why we have “?” marks there. I can only say with confidence, the latest editions of ECMA-262 are ok for the html-access figures. For the download version we don't have the problem of the numbers of the first editions. So, that's correct. IS: Well, this is the same for ECMA-402. Obviously the numbers are much lower. Again, the access numbers for the first editions are really questionable. They are not true, but this is what we are getting from Google Analytics. But the download numbers are correct. IS: Okay, this is an important slide where we are now. So first all, the GA approved ECMA-262 2022, congratulations to the group! And it has also been approved with the new alternative text copyright license (which is a more permissive text copyright license). Immediately after the approval as usual, Patrick - from the Ecma Secretariat - has immediately published both the HTML versions, which, as you know, this is the “master version” and takes preference over the PDF version. It is perfect and we are still using that. Then we have published a very “rough PDF” version, because that was available at the time of the General Assembly. The good news is that since then we have completed the project. AWB did this. So we have now a nice PDF version for both of the standards, 262 and 402. And AWB did an absolutely great job and I am very thankful to him. They are already published. Actually they were published before last weekend. And so, if you go and if you download the latest PDF versions now they are already those “nice PDF” versions. 
-IS: We have a separate presentation from AWB longer than this presentation. For ES2023 project we need to solve the “nice PDF” publication problem in a more long-term way. We need a much more stable solution for that - built in into some kind of Ecma TC39 tool. But this will be a project for next year and so on, but AWB will be talking about it in much greater detail. +IS: We have a separate presentation from AWB longer than this presentation. For ES2023 project we need to solve the “nice PDF” publication problem in a more long-term way. We need a much more stable solution for that - built in into some kind of Ecma TC39 tool. But this will be a project for next year and so on, but AWB will be talking about it in much greater detail. -IS: Okay, so this is the table for future TC39 meetings. We know that the Tokyo meeting is going to be remote. There is a question mark for the November meeting, is it in Europe, whether it will be in Norway, Etc, Etc, we have already on GitHub sort of dialogue on it. For the meetings for the next year I don't have a schedule on table yet. +IS: Okay, so this is the table for future TC39 meetings. We know that the Tokyo meeting is going to be remote. There is a question mark for the November meeting, is it in Europe, whether it will be in Norway, Etc, Etc, we have already on GitHub sort of dialogue on it. For the meetings for the next year I don't have a schedule on table yet. IS: Okay, so I’ve picked up a couple of results from a 2022 June GA meeting where - as TC39 - we have to act here. The GA requested for the “nonviolent communication training” of TC39 some further information. Basically, it conditionally approved 10,000 Swiss francs for such a request, but had three specific questions. And the answers have to be provided to the ExeCom. -IS: Okay, so this is exactly the copy that I have cut out from the GA minutes. So what I‘ve read and a little bit surprised me was the first one. It says: “discussion among all members of tc39”, so we have to discuss “the incidents occurred” and the “implication to reach consensus on a solution”. I have not the slightest idea what the incidents are. and also obviously about the implication and I don't know, you know what kind of communication went back and forth between those who were present at the meeting and the general assembly so somebody who was at the meeting can maybe give us a little more insight about this. +IS: Okay, so this is exactly the copy that I have cut out from the GA minutes. So what I‘ve read and a little bit surprised me was the first one. It says: “discussion among all members of tc39”, so we have to discuss “the incidents occurred” and the “implication to reach consensus on a solution”. I have not the slightest idea what the incidents are. and also obviously about the implication and I don't know, you know what kind of communication went back and forth between those who were present at the meeting and the general assembly so somebody who was at the meeting can maybe give us a little more insight about this. IS : And then the second point, I was also rather surprised to see that was written in the second question about the TC39 Code of Conduct Committee. So far, you know, each time we had the report about the work of the Code of Conduct Committee, when it came to the pointin TC39 meetings we have not learned about “incidents in TC39”. IS: Okay, the third question is a request for a high level description of the details of the proposed training. I think this is really very important. 
So this is the task that we have to pick up and have to do, I don't know how we are going to do it and here I am asking here, the TC39 management to react to it at any point in time. I'm just reporting you from the GA meeting. -IS: This slides are very, very short ones. As I said at the last meeting, the Ecma Bylaws and the Rules were going to be changed by the June 22 GA meeting very, very slightly, but very logically. So here, the first message is that it was approved by the GA both on the Bylaws and also on the Rules. So it is done. +IS: This slides are very, very short ones. As I said at the last meeting, the Ecma Bylaws and the Rules were going to be changed by the June 22 GA meeting very, very slightly, but very logically. So here, the first message is that it was approved by the GA both on the Bylaws and also on the Rules. So it is done. -IS: I picked up some interesting news. So there was a vacant position of the Ecma Vice President, Actually we had one candidate. It was Dan Ehrenberg. Then he was unanimously approved. Congratulations Daniel to your election as Ecma Vice President. +IS: I picked up some interesting news. So there was a vacant position of the Ecma Vice President, Actually we had one candidate. It was Dan Ehrenberg. Then he was unanimously approved. Congratulations Daniel to your election as Ecma Vice President. [applause] @@ -137,7 +137,7 @@ Presenter: Shane F. Carr (SFC) SFC: Ok. So hello everyone. My name is Shane. I'm the convener of the ECMA-402, TC39 task group. So, what is ECMA-402? For those who don't know, it's JavaScript's built-in, internationalisation Library. The slide shows some things that you can do with it. and sold out date-time format. Formatting dates in localized forms and localized waste. -SFC: how is it developed? We're developed a separate specification from ECMA-262 to divide. The TC39 task group 2, there are now three task groups. There's task group one, which is the one that we're in right now, task group 2, there is also task group three which is the relatively new security task group. However, all our proposals move through the standard TC39 stage process, we have monthly 2 hour phone calls to discuss the details. There are some links, which I'll show again later in the presentation for how to join and how to get information. These are some of the Personnel, the editors are USA and RGN, we've also continued to get advice from Leo. So thank you for your continued input there. Especially when we're trying to get the ES2022 together. I'm the convener and on this screen shows some of the delegates who have come to recent meetings. So thank you again for all of your contributions. +SFC: how is it developed? We're developed a separate specification from ECMA-262 to divide. The TC39 task group 2, there are now three task groups. There's task group one, which is the one that we're in right now, task group 2, there is also task group three which is the relatively new security task group. However, all our proposals move through the standard TC39 stage process, we have monthly 2 hour phone calls to discuss the details. There are some links, which I'll show again later in the presentation for how to join and how to get information. These are some of the Personnel, the editors are USA and RGN, we've also continued to get advice from Leo. So thank you for your continued input there. Especially when we're trying to get the ES2022 together. I'm the convener and on this screen shows some of the delegates who have come to recent meetings. 
So thank you again for all of your contributions. IS: And thanks to Google for sponsoring Igalia to continue to develop Intl this year. This is a continuation of that contract. I thank you Igalia for your work on doing things like Test262, two and Deanna. Many other things, @@ -145,17 +145,17 @@ SFC: So ES 2022. This is the pdf version of the specification. I can go open the RGN: Yeah I think it would be worth discussing. Is there an agenda item later or is this the opportunity? -SFC: I don't believe there's an agenda item later unless you have one, but this would be the opportunity to introduce this to the group, +SFC: I don't believe there's an agenda item later unless you have one, but this would be the opportunity to introduce this to the group, RGN: So passing on a longer discussion and just introducing it now: the idea is basically to clarify how ECMA-402 and ECMA-262 relate to each other, with the intent specifically of having ECMA-402 only constrain behavior that is valid for 262 implementations and only in particular ways. So we want to say that of all the possibilities that are left open in 262, every 402 implementation must behave in this narrower way and stay in these tighter lanes. -SFC: Okay, so procedurally is this, I know that JHD has previously reviewed this pull request? Should we seek consensus on like TG1 approval on this pull request now or is this something that we feel that it would be better to discuss that more length later in the agenda +SFC: Okay, so procedurally is this, I know that JHD has previously reviewed this pull request? Should we seek consensus on like TG1 approval on this pull request now or is this something that we feel that it would be better to discuss that more length later in the agenda RGN: later is probably better, unless unless we have support already SFC: JHD since you did review this, do you have any initial thoughts on this? Does this General Direction look good to you? -SFC: while JHD is looking that up I'm going to go back to the slides and depending on how much time we have in our time box here we might be able to get a little further discussion on 690. I want to go back to the slides and just look at the proposal status. so we have this beautiful little wiki page that RCA keeps updated with all the updates on all of our proposals, old ones and new ones together. There are three proposals that are shipping in the 2022 edition and attract, you know, the documentation and all the implementations of those. I would note that our polyfill Champion recently left the company that they were working at. So we don't currently have a polyfill champion. This is an opportunity for you to contribute to ECMA-402. If you're interested in getting involved, write more polyfills for it. We also have these stage three proposals, which you've all seen presentations on. I'll be giving an update on intl number format tomorrow and you can see the status of all the implementations here and all the different browsers it. browsers also in this Wiki page also shows the version of the browser that first ships to the feature, which is good for you to sort of track. When these features are going to available to use these buttons are also, clickable you can click any of these buttons and it brings you usually to the issue on that respective browser to see the status. So that's the status of all of our stage 4 and stage 2 proposals. We don't have any currently staged two proposals. 
We do have a number of stage one proposals. If you're interested in helping any of the stage one proposals get more updates, again, you can join our monthly meetings.

SFC: And yeah, so to get involved, here are the links that I showed earlier. So, the ways to get involved: you can write documentation, you can write polyfills, you can write test262 tests, and again, if you're interested in joining our monthly call, you can click the link to this e-mail.

@@ -175,9 +175,9 @@ DE: Is this only a 402 PR or is there a 262 PR associated.

RGN: It is exclusively 402.

-SFC: I'll go ahead and put back up "the how to get involved" slide. We love when people hop on into our monthly calls, and everyone is invited.
+SFC: I'll go ahead and put back up "the how to get involved" slide. We love when people hop on into our monthly calls, and everyone is invited.

- Okay, if there are no other comments or questions, I can give back a few minutes to the chamber. I appreciate it, and thank you.
+Okay, if there are no other comments or questions, I can give back a few minutes to the chamber. I appreciate it, and thank you.

## ECMA404 Status Update

@@ -205,41 +205,41 @@ ACE: We also want to get feedback on this issue that was raised. Which was askin

ACE: If I just flip over to the slide presented back at stage 2, one of the things we were looking at with this proposal is, given the current state of the world, where does an immutable interface sit in this? As in, where would the interface that the Tuple proposal proposes sit, and where does that prototype sit in this Venn diagram? What we're trying to do is carve out space for Tuple so that it will actually sit within this Venn diagram rather than have an extra overlap outside of it.
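As a quick illustration of the method pairing discussed below, here is a sketch assuming an engine or polyfill that provides the change-array-by-copy methods:

```js
const orig = ["a", "b", "c", "d"];

// Mutating: splice edits the array in place and returns the removed elements.
const removed = orig.splice(1, 2, "x"); // orig is now ["a", "x", "d"], removed is ["b", "c"]

// Copying: toSpliced leaves the receiver alone and returns a new array.
const base = ["a", "b", "c", "d"];
const spliced = base.toSpliced(1, 2, "x"); // ["a", "x", "d"]; base is unchanged

// TypedArrays have no splice at all (their length is fixed), which is why
// adding only the copying toSpliced there is the open question.
```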
-ACE: So, things like toReverse, and toSorted so that kind of fits quite nicely because they're already here in this middle section here with both array and on TypedArray. Splice is the odd one out here in that it is only on array. So you would right now the proposal you would have typedArray toSpliced but you can't do TA splice, because like this whole – I want to say quadrant but it's a sextant, I think? – these all modify length and you can't modify the length of the typed array. So it kind of if you know the TA is a fixed length, it kind of makes sense that it doesn't have splice and that would also follow that Tuple doesn't have any of these. So I’m curious what opinions people have. One concern we have is if the concern is primarily about something having toSpliced without having splice, that would also be true when we present the record and tuple proposal. So if that's like a more general rule I guess that's more concerning for us because then that makes us go back to the drawing board, a bit with records and tuples. But if it's specifically just about TypedArray, it's kind of less about a larger design space that we're looking at. +ACE: So, things like toReverse, and toSorted so that kind of fits quite nicely because they're already here in this middle section here with both array and on TypedArray. Splice is the odd one out here in that it is only on array. So you would right now the proposal you would have typedArray toSpliced but you can't do TA splice, because like this whole – I want to say quadrant but it's a sextant, I think? – these all modify length and you can't modify the length of the typed array. So it kind of if you know the TA is a fixed length, it kind of makes sense that it doesn't have splice and that would also follow that Tuple doesn't have any of these. So I’m curious what opinions people have. One concern we have is if the concern is primarily about something having toSpliced without having splice, that would also be true when we present the record and tuple proposal. So if that's like a more general rule I guess that's more concerning for us because then that makes us go back to the drawing board, a bit with records and tuples. But if it's specifically just about TypedArray, it's kind of less about a larger design space that we're looking at. -JHD: I understand why we can't have splice on typed arrays because it mutates length and we can't mutate the length of a TypedArray. It seems like, there hasn't been a missing Solution on typed arrays around splice or there would have been an attempt to create a function on typed arrays before this that solved it; in other words, whatever toSplice is doing like there hasn't hasn't been clamoring until it before this proposal, I have this use case at camp with typed arrays. It needs something like splice that doesn't mutate to solve it. One person I think has provided a use case but not until this was almost removed from this proposal. so it doesn't seem like we have a compelling problem statement, the toSpliced TypedArray solves. I know what it solves on Tuple because I know what it sells on a raise. And to me tuples in arrays are highly analogous but typed arrays are kind of specialized and also it didn't go through the same design process that most of the rest of the language did. And, you know, I just, I think that it's ok, that they're weird in some ways. So I think that if there's a really compelling reason to have toSpliced on typed arrays, that's we should know before adding it. And I don't see one. 
+JHD: I understand why we can't have splice on typed arrays because it mutates length and we can't mutate the length of a TypedArray. It seems like, there hasn't been a missing Solution on typed arrays around splice or there would have been an attempt to create a function on typed arrays before this that solved it; in other words, whatever toSplice is doing like there hasn't hasn't been clamoring until it before this proposal, I have this use case at camp with typed arrays. It needs something like splice that doesn't mutate to solve it. One person I think has provided a use case but not until this was almost removed from this proposal. so it doesn't seem like we have a compelling problem statement, the toSpliced TypedArray solves. I know what it solves on Tuple because I know what it sells on a raise. And to me tuples in arrays are highly analogous but typed arrays are kind of specialized and also it didn't go through the same design process that most of the rest of the language did. And, you know, I just, I think that it's ok, that they're weird in some ways. So I think that if there's a really compelling reason to have toSpliced on typed arrays, that's we should know before adding it. And I don't see one. ACE: thanks SYG: So what initially prompted me to ask the question is that there's two axes of discoverability of these methods here. One is the mutable/immutable pair, and that's what's missing here, and the other is – I guess the other is, is there a set of immutable helpers that we expect of all collections? Maybe that's a goal that seems more that seen sounds more solid footing. But like for that, whether to splice should be included in that said, I think is independent of the debate because… it's weird. I have a pretty weak…. Since this is stage 3, and this is already litigated. This is a very weak request and to kind of reaffirm that this is what we want because it slipped my mind at the time that you didn't have splice on TypedArray. And then I'll close with, I have a pretty weak argument also against complexity for security Yeah, I guess complexity for security because typed arrays are just special anything we add to them. The attack surface because that's like stage one of your chain exploit to get your shellcode into some type the right thing. So, the smaller that surface is for typed arrays, the better and if we don't need to splice, they would be nice to not have it. But that's pretty weak. -ACE: Thank you. +ACE: Thank you. YSV: When we maintain consistency, we are aiding learnability of the language. So I believe in reusing the toothpaste. As you mentioned, reason the to splice naming, in this case, may actually hurt ability for Learners of the language or even Before experience with language to Triple H. What is the relationship between this feature and splice? Additionally, I think splice is a rather confusing word. In isolation is missing. I think that we would in some ways be a benefit and who [audio issues] -YSV: I'll just get to the point. We could implement this separately from change array by copy, and I think there's reason to consider that because it may aid learnability and it may aid as a more coherent [audio cut out]. I think I think you guys get the idea. +YSV: I'll just get to the point. We could implement this separately from change array by copy, and I think there's reason to consider that because it may aid learnability and it may aid as a more coherent [audio cut out]. I think I think you guys get the idea. 
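For context on the workaround mentioned later in this discussion (round-tripping through a plain array), here is a minimal sketch; `typedArrayToSpliced` is a hypothetical helper name, not part of any proposal or engine.

```js
// A copying splice for typed arrays without %TypedArray%.prototype.toSpliced:
// convert to a plain array, splice it, and rebuild the same TypedArray type.
function typedArrayToSpliced(ta, start, deleteCount, ...items) {
  const values = Array.from(ta);
  values.splice(start, deleteCount, ...items);
  return new ta.constructor(values); // elements are coerced back to the element type
}

const u8 = new Uint8Array([1, 2, 3, 4]);
typedArrayToSpliced(u8, 1, 2, 9); // Uint8Array [1, 9, 4]; u8 is untouched
```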
DE: We heard a lot of criticism of this, but you mentioned that the champions' point of view is that it should be included. Could you repeat that rationale?

-ACE: Yeah, so I guess there's a few reasons why I like it. One is kind of counter to SYG's point: while I do see that reducing the complexity of TypedArray makes sense from a security point of view, the fact that this method is a little bit complicated to implement - the spec text is a little bit longer than the other methods - is exactly why I like that it is implemented in the language for me. Splice is one of those operations that, when you need it, is a real pain to implement yourself. It's not like some of the other methods that you can write quite quickly - like reversing a list, you can quite easily write that out yourself - whereas when you're trying to do splice, there's quite a lot of little bits of arithmetic involved, so to me it's kind of the Swiss army knife of methods, or it's one I am really pleased is there because I really don't want to write it myself. So there's the utility argument, but I guess that's about splicing in general. Having it on typed arrays - I like the fact that, I guess, I am more pro keeping typed arrays... while typed arrays are still relatively close to matching the array prototype. You know, there's only a few exceptions.
It feels correct to me like typed arrays, not having flat, makes complete sense. You can't have a two-dimensional TypedArray. So, I'm more for not sending a like setting a new precedent of not letting typed arrays fall further adrift from array. While this method, I think is still quite in keeping with typed arrays in that. It's from the arguments passed in, you know, immediately up front the size you need to allocate for the actual buffer. This isn't like flatmap, yes, to me flat map doesn't feel very TypedArray because it's very dynamic. You can't just allocate it upfront and populate your TypedArray that you've got to actually kind of do all the work and then transfer that so, it kind of doesn't feel like you're getting much benefit of it being a built-in method. If you don't have toSpliced you can but you did have it on the array, you know, you could transfer do the work on the array and then transfer The Irate back into a TypedArray, SYG: I said you can just do a .call(). I was corrected and said that the return value is still an array, not a TA. So I need to make a copy. -ACE: I guess some one thing I'm curious about is some of the feedback we've gotten is not specific to TypedArray. It's more general. like this is a controversial name I think I'm never sure when people complain about the word splice. How much of that is just like a meme or how much of that is genuine confusion. Personally, anecdotally I've always been okay With the word, anecdotal stories I used to me, as a film student, I spent years splicing film. So to me the word splice is very natural to get feedback, that is not great, but then that also applies to array.prototype.toSpliced and would apply to Tuple toSpliced. So, if the concern is the naming then it'd be good to know that is more General because then we should kind of look more widely at the proposal wider and then also the records and tuple proposal. I do get it. A more General negative vibe is there. Is there anyone That's positive because, you know, if the main Vibe is – I don't think feeling is strong enough to oppose a majority negative. +ACE: I guess some one thing I'm curious about is some of the feedback we've gotten is not specific to TypedArray. It's more general. like this is a controversial name I think I'm never sure when people complain about the word splice. How much of that is just like a meme or how much of that is genuine confusion. Personally, anecdotally I've always been okay With the word, anecdotal stories I used to me, as a film student, I spent years splicing film. So to me the word splice is very natural to get feedback, that is not great, but then that also applies to array.prototype.toSpliced and would apply to Tuple toSpliced. So, if the concern is the naming then it'd be good to know that is more General because then we should kind of look more widely at the proposal wider and then also the records and tuple proposal. I do get it. A more General negative vibe is there. Is there anyone That's positive because, you know, if the main Vibe is – I don't think feeling is strong enough to oppose a majority negative. YSV: I think that when we have the relationship and the history behind toSpliced and spliced fine but with typedArrays we may want something different to make a distinction. Okay. So I'll try to quickly summarize YSV: my feeling about splice and toSpliced is because they have the relationship that they are a mutable and immutable variant. 
The end that there is this long history of splice, then this is a nice relationship between those two and I wouldn't propose that we remove that. I think that's fine as it is. For typed arrays. I think when we have tried historically to push typed arrays and arrays closer together in chat, it was also mentioned that sorting a TypedArray is a bit of a strange thing to do. For example, a TypedArray may be backed by different kinds of buffers. You might be doing different kinds of work like working with graphics and the data from a graphical output Isn't going to make sense, that's you're not going to get something meaningful in that case. So I think that trying to push these two together too much and adhering to this idea that we want to have this as close as possible may actually be hurting how we're designing typed arrays. In the case of toSpliced I agree that this is a meaningful API, but I would say that unless we have a very clear overlap – and in a sense we do with toSpliced but this is a vague case – unless we have a very clear overlap, I don't think we should be pushing those two data structures, those two built-ins to be the same. That's all I have to say. -ACE: Yeah, that's great. It's good to hear that the concern is mostly for typed arrays to more closely align with – as well and I'm trying to get eyeballs in the other limbo delegates in the room whether or not we'd want to retract a position of supporting this method. +ACE: Yeah, that's great. It's good to hear that the concern is mostly for typed arrays to more closely align with – as well and I'm trying to get eyeballs in the other limbo delegates in the room whether or not we'd want to retract a position of supporting this method. RRD:I guess there was a proposition of a temperature. Check. Just asking for everyone else but we hear the implementers' arguments. DE: Concretely, we could do a temperature check on the proposition of specifically removing TypedArray toSpliced. It seems like we all agree on that array. Sorry, Tuple tospliced and array to spliced make sense to maintain. And so the temperature check is positive sentiment If you want to maintain TypedArray to splice, And negative sentiment If you want to remove typed array to splice. [room questions what expresses negative sentiment in the queue software] -DE Okay, so positive. If you want a typed array toSpliced to exist and unconvinced, if you're not that TA toSpliced place should exist. Indifferent means indifferent means. +DE Okay, so positive. If you want a typed array toSpliced to exist and unconvinced, if you're not that TA toSpliced place should exist. Indifferent means indifferent means. RRD: Taking the tally: 1 Positive, 9 Indifferent, 11 Unconvinced/against @@ -258,7 +258,7 @@ Presenter: Shu-yu Guo (SYG) - [issue](https://github.com/tc39/proposal-resizablearraybuffer/pull/99) -SYG: Was implementing it. Notice the bug. So asking for consensus on the normative fix is a little bit strange so I'll go over it real quick. So quick recap or a golfer. the prototype that transfers is basically reallocated; it transfers the contents of the receiver array buffers into a new array buffer and detaches the original array buffer. So to do this, they must know that they're the original array, and a buffer is attached. Currently the spec draft does one detached check and then does an argument coercion on the new length and it does not do another detach check after it does the coercion. 
Of course, the coercion could call user code and result in the receiver array buffer being detached. So this fixes that, but it fixes that in a particular way, which is that if the new length argument is present, it does the ToIndex first and that it does detach check. This is arguably inconsistent with always checking detach first and then doing all the argument conversions, then doing another detach check. But there's a single argument here. So if we do detach checks, I figured that just seems kind of useless. So that's the one weirdness about this PR, but we need to fix this regardless. Any thoughts on – are folks okay with that weirdness? Any concerns there.

+SYG: I was implementing it and noticed the bug, so asking for consensus on the normative fix is a little bit strange, so I'll go over it real quick. Quick recap: ArrayBuffer.prototype.transfer basically reallocates; it transfers the contents of the receiver ArrayBuffer into a new ArrayBuffer and detaches the original ArrayBuffer. To do this, it must know that the original ArrayBuffer is attached. Currently the spec draft does one detached check and then does an argument coercion on the new length, and it does not do another detach check after it does the coercion. Of course, the coercion could call user code and result in the receiver ArrayBuffer being detached. So this fixes that, but it fixes it in a particular way, which is that if the new length argument is present, it does the ToIndex first and then it does the detach check. This is arguably inconsistent with always checking detach first, then doing all the argument conversions, then doing another detach check. But there's a single argument here, so doing two detach checks just seems kind of useless. That's the one weirdness about this PR, but we need to fix this regardless. Are folks okay with that weirdness? Any concerns there?

DE: I'm ok with that. I don't think it's the only time we have weird checks.

@@ -308,7 +308,7 @@ Presenter: Kevin Gibbons (KB)

- [pull request](https://github.com/tc39/ecma262/pull/2812)

-KB: Yes. Okay so this was an issue that was raised on the discourse, our more asynchronous forum. Someone points out that the BigIntint Constructor, which is used to coerce values to a bigint, does coercion twice. So here I am passing a value has different Behavior the first time you call toPrimitive on it than the second time, and even though you're only passing it to BigInt once it would get coerced twice. This is silly, it doesn't match any of the major implementations, although GraalJS and Engine 262 actually implement spec correctly, so congratulations to them, but I'm proposing to change the specification to match implementations. So that you first do a call to toPrimitive and then if that results in a value which is not already a number, you use the post-to Primitive value as the argument to toBigInt rather than using the original value again, which would call toPrimitive a second time. That's the change. Do we have anything on the Queue or can I ask for a consensus for this change?

+KB: Yes. Okay, so this was an issue that was raised on Discourse, our more asynchronous forum. Someone points out that the BigInt constructor, which is used to coerce values to a BigInt, does coercion twice. So here I am passing a value that has different behavior the first time you call ToPrimitive on it than the second time, and even though you're only passing it to BigInt once, it would get coerced twice.
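For illustration, a sketch of the kind of value being described (the object here is hypothetical):

```js
let calls = 0;
const tricky = {
  [Symbol.toPrimitive]() {
    calls += 1;
    return calls === 1 ? 1n : 2n; // different result on the second coercion
  },
};
// Current spec text: ToPrimitive runs, the result is not a Number, and then
// ToBigInt re-coerces the *original* object, yielding 2n.
// Major engines (and this PR): the first ToPrimitive result is used, yielding 1n.
BigInt(tricky);
```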
This is silly, it doesn't match any of the major implementations, although GraalJS and Engine 262 actually implement spec correctly, so congratulations to them, but I'm proposing to change the specification to match implementations. So that you first do a call to toPrimitive and then if that results in a value which is not already a number, you use the post-to Primitive value as the argument to toBigInt rather than using the original value again, which would call toPrimitive a second time. That's the change. Do we have anything on the Queue or can I ask for a consensus for this change? RPR: so, okay, so Kevin is asking the consensus on the change, any objections? No objections, congratulations. You have consensus. And JHD has given it explicit +1. @@ -326,9 +326,9 @@ Presenter: Robin Ricard (RRD), Ashley Claymore (ACE) RRD: So, this is just an update for the record and tuple that we plan to present. Probably at the next meeting or any subsequent meeting. Our goal today is to go through this agenda. So we're going to give you a brief tour of records and tuples. Again, the current motivation that we have for this proposal, the different proposal dependencies. So it's other proposals that are related to record and tuple and how they play together. Then we also have other standards that are related to our R&T we're going to talk about. Then the current employment is the state of implementations that we have right now because record and tuple are currently at stage 2 of the spec text. Then a quick discussion. Hopefully, on frozen wrappers, and then we're going to discuss state three reviewers. -RRD: okay so quick reminder again to get started. your, if you use this hash prefix syntax that you're seeing on the screen the first time you're going to be able to either create a record or a tuple they are deeply mutable and are also Primitives. So, that means that if you do typeof, you're going to get a record or tuple. And so you can do updates if you cannot update them in place, but you can update them by copy. And we also backported the top of methods that create new tuples by copy to array and TypedArray. So we talked about it just earlier, which is called “change array by copy”. And so we also have the principle of value equality instead of referential quality. So that means that if you're using triple equal or map or set you are going to compare records and Tuple by the value that the R or T contains instead of references to them. One important thing is that, in order to make this useful for records, we are sorting lexicographically the key ordering in records, and that means we cannot have symbol keys and in records because you have symbols, you would be able to observe the order of creations of the symbols, which is not something that we want, and they cannot contain objects or functions. So we're going to talk about this in the next Slide. The benefit for us is that we can guarantee deep immutability and it's also deep equality and hashing of the records and tuples, and so that means that we can have primitives that are not able to carry communications channels. We're going to get to this because this is related to Shadow Realms and the fact that they can only take primitives as arguments is being passed down. 
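For illustration, a rough sketch of these semantics, assuming the stage 2 Record & Tuple syntax described above:

```js
const point = #{ x: 1, y: 2 };       // record literal
const path = #[1, 2, 3];             // tuple literal
typeof point;                        // "record"
typeof path;                         // "tuple"
point === #{ y: 2, x: 1 };           // true – value equality, keys sorted lexicographically
const moved = #{ ...point, x: 5 };   // no in-place update; copy with a change instead

const lookup = new Map([[#{ lang: "en", region: "GB" }, "hit"]]);
lookup.get(#{ region: "GB", lang: "en" }); // "hit" – a freshly written record is the same Map key
```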
And so finally, this avoids literal Hazard: if I'm just writing record or tuple structure and I forget to put hash inside of that structure, if II, by accident, put an object or an array in there then this is going to throw immediately and tell me that I'm trying to put an object or an array inside of the structure. But we do understand that not everyone – I mean some usages requires still to be able to reference objects or arrays…, we understand that there is a always, a need to do the things sometimes such as referencing objects, that could be DOM elements, that could be functions, could be anything And so the escape hatch for this is to use symbols as weakmap keys. you would use the symbol that you would then put in a WeakMap key that would map symbols to the objects that we would like to reference here. +RRD: okay so quick reminder again to get started. your, if you use this hash prefix syntax that you're seeing on the screen the first time you're going to be able to either create a record or a tuple they are deeply mutable and are also Primitives. So, that means that if you do typeof, you're going to get a record or tuple. And so you can do updates if you cannot update them in place, but you can update them by copy. And we also backported the top of methods that create new tuples by copy to array and TypedArray. So we talked about it just earlier, which is called “change array by copy”. And so we also have the principle of value equality instead of referential quality. So that means that if you're using triple equal or map or set you are going to compare records and Tuple by the value that the R or T contains instead of references to them. One important thing is that, in order to make this useful for records, we are sorting lexicographically the key ordering in records, and that means we cannot have symbol keys and in records because you have symbols, you would be able to observe the order of creations of the symbols, which is not something that we want, and they cannot contain objects or functions. So we're going to talk about this in the next Slide. The benefit for us is that we can guarantee deep immutability and it's also deep equality and hashing of the records and tuples, and so that means that we can have primitives that are not able to carry communications channels. We're going to get to this because this is related to Shadow Realms and the fact that they can only take primitives as arguments is being passed down. And so finally, this avoids literal Hazard: if I'm just writing record or tuple structure and I forget to put hash inside of that structure, if II, by accident, put an object or an array in there then this is going to throw immediately and tell me that I'm trying to put an object or an array inside of the structure. But we do understand that not everyone – I mean some usages requires still to be able to reference objects or arrays…, we understand that there is a always, a need to do the things sometimes such as referencing objects, that could be DOM elements, that could be functions, could be anything And so the escape hatch for this is to use symbols as weakmap keys. you would use the symbol that you would then put in a WeakMap key that would map symbols to the objects that we would like to reference here. -RRD: We also have various APIs on this, so the record and tuple global constructors. and so they have record or tuple to pull those from to pull that of one thing to note, the record prototype is null. 
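For illustration, a sketch of the API surface being described, assuming the names from the stage 2 proposal:

```js
Record({ a: 1, b: 2 });                     // #{ a: 1, b: 2 }
Record.fromEntries([["a", 1]]);             // #{ a: 1 }
Tuple.from([1, 2, 3]);                      // #[1, 2, 3]
JSON.parseImmutable('{"point":{"x":1}}');   // #{ point: #{ x: 1 } } – deeply immutable

// Escape hatch for referencing objects: keep a symbol in the record and map
// it to the object through a WeakMap (symbols-as-WeakMap-keys proposal).
const refs = new WeakMap();
const el = Symbol("element");
refs.set(el, document.body);
const withRef = #{ el };                    // the record itself stays purely primitive
refs.get(withRef.el);                       // document.body
```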
So that means that we don't have any methods that you're going to be able to call on record themselves. But obviously, you have to record that record and record that from entries where you're available. Tuple prototype is a subset of array prototype that is Tuple specific and that is also driven by what we presented in change array by copy.

+RRD: We also have various APIs on this, so the Record and Tuple global constructors, and they have Record.fromEntries, Tuple.from and Tuple.of. One thing to note: the Record prototype is null. So that means that we don't have any methods that you're going to be able to call on records themselves, but obviously you do have Record and Record.fromEntries available. The Tuple prototype is a subset of the Array prototype that is Tuple specific, and it is also driven by what we presented in change array by copy.

RRD: Finally, we're introducing JSON.parseImmutable, which means that if you pass a JSON string to parseImmutable, then instead of getting objects and arrays as the result, as with JSON.parse, parseImmutable is going to give you records and tuples. The motivation for us is really that it gives you guaranteed immutability.

@@ -336,7 +336,7 @@ RRD: That's something that's been done in the JavaScript Community for a while t

RRD: And then, as we talked about earlier, we have value-based equality. I really like value-based equality, for example in this case where you can use a record to assemble a few values together to do a lookup; you can also do the same thing with a tuple. What's interesting in that example is that, yes, we are able to forge a completely different record, but effectively it has the same value as the first one, so we can use both as the same key to look up the same object.

-RRD: okay? So we talked about proposal dependencies. So the first one that we've talked about change-array-by-copy, which got to stage 3 in March 2022 thanks to Ashley and it already shipped under technology preview 146 and I believe it's going to ship it into the next stable version of (??). Then go symbols as weakmap keys that we went with to stage 3 at the last meeting and And we're considering this work web and Mel and done. So we want to do work on WebIDL to permit records where objects would normally be accepted and same thing with tuples when arrays would be normally accepted.. So the proposal webIDL are still pending, we are going to start working on this Shorty.

+RRD: Okay, so we talked about proposal dependencies. The first one that we've talked about is change-array-by-copy, which got to stage 3 in March 2022 thanks to Ashley; it already shipped in Technology Preview 146 and I believe it's going to ship in the next stable version of (??). Then symbols as WeakMap keys, which went to stage 3 at the last meeting, and we're considering that work done. So we want to do work on WebIDL to permit records where objects would normally be accepted, and the same thing with tuples where arrays would normally be accepted. The WebIDL proposals are still pending; we are going to start working on this shortly
and similarly to what we have in ECMA-262, we need to designate the Spec, internal records designation that's in web IDL to be obviously differentiated to the reintroduced record from this proposal. RRD: We do plan in this proposal to keep "record". The disambiguation is only going to be spec internal. So since we are exposing something here, We believe that the spec internal change is possible, and when it comes to HTML and DOM changes NRO made structural integration of record and tuple. @@ -344,17 +344,17 @@ RRD: Okay, we have in terms of implementation a babel transforms a coalition of MF: So I saw that you had on the slide JSON.parseImmutable. This wasn't a part that I had done any review of. Is the inclusion of JSON.parseImmutable as part of this proposal necessary? If we had it as a separate proposal, would it be motivated enough? And for the design space: I know that JSON.parse has a large design space. Do you think JSON parseImmutable has a fairly narrow, straight-forward design space which would make it not necessary for it to go through the stage process on its own? -ACE: Yeah, it's a question because you can pre-select if it's not clear this slide is the entire API of the proposal. It's not like a snippet, a flavor. So it's hopeful it's evident the proposal is trying to form an even introducing new primitive types. Like it's a big proposal. We are trying to be quite lean on the API and while APIs have been suggested, we said that they could be following the proposal. So things are good questions. to answer your question on the design space of this. It's fairly unambiguous how there's very little subjectivity because everything JSON is representable as a primitive. It's not like going the other way. We're going from primitive to JSON is ambiguous because undefined isn't in JSON. The other way, objects being records and JSON arrays being tuples. It kind of just perfectly maps and there's no kind of edge cases that were just skirting over. +ACE: Yeah, it's a question because you can pre-select if it's not clear this slide is the entire API of the proposal. It's not like a snippet, a flavor. So it's hopeful it's evident the proposal is trying to form an even introducing new primitive types. Like it's a big proposal. We are trying to be quite lean on the API and while APIs have been suggested, we said that they could be following the proposal. So things are good questions. to answer your question on the design space of this. It's fairly unambiguous how there's very little subjectivity because everything JSON is representable as a primitive. It's not like going the other way. We're going from primitive to JSON is ambiguous because undefined isn't in JSON. The other way, objects being records and JSON arrays being tuples. It kind of just perfectly maps and there's no kind of edge cases that were just skirting over. -RRD: And to add to this, the motivation I think for me is that a mutable person is going to be kind of a primitive to build up more functionality towards. So it's really the minimum we can do to create interoperability. At some point maybe we do plan to add more compatibility, for example, in web APIs to return records and tuples. But at the beginning, at least, if we can just get JSON strings and convert them, that would be the bare minimum. If really, we want to reconsider it because we think that expands too much. We can discuss it. 
+RRD: And to add to this, the motivation, I think, for me is that parseImmutable is going to be kind of a primitive to build up more functionality towards. So it's really the minimum we can do to create interoperability. At some point maybe we do plan to add more compatibility, for example in web APIs, to return records and tuples. But at the beginning, at least, if we can just take JSON strings and convert them, that would be the bare minimum. If we really want to reconsider it because we think it expands the proposal too much, we can discuss it.

-MF: Yeah, I didn't want to express a strong opinion one way or the other. I just wanted to see what the confidence level is on whether that should be part of this proposal.

+MF: Yeah, I didn't want to express a strong opinion one way or the other. I just wanted to see what the confidence level is on whether that should be part of this proposal.

ACE: Yes. The answer is high confidence. Yeah, it should be part of this.

RRD: Yeah. Because again, this is just a minimum thing you need in order to reach other things.

-SYG: Is there reviver support and that Source text availability in reviver proposal support for parsing mutable?

+SYG: Is there reviver support, and does the source-text-availability-in-reviver proposal support parseImmutable?

ACE: There is reviver support, yes. I guess no to the second part, as in we'd need to see how well it interacts with that proposal.

@@ -362,13 +362,13 @@ SYG: Okay, I bring that up because it's not clear to me that, given reviver supp

ACE: [back to slides] So yes, the current state of the spec text. There's a fair amount of spec text, and we really need reviewers, because that's the process, and the more of you who review it, the more likely the spec text is going to be high quality. As noted at the beginning, we will officially, formally ask for reviewers, and this is a kind of warning that we're going to be doing that. So start to think about whether you're going to put yourself forward as a reviewer. That'd be great.

-ACE: So as Robin said, We have intentions to also look at the kind of wider ecosystem of specs, but that work hasn't started in terms of writing things yet you know we've talked about it and looked into how we would go about doing it.
But really in terms of what spec text is ready to review, we're talking the 262 spec in the proposal's GitHub repository, The fact that this introduces new primitives, it's going to really open up this question of now from our perspective on one side, these new primitives that are very different objects but also we don't want to be entirely different from objects, we want we don't want to kind of fork the world. So our approach to this proposal is that as much as possible that you can use these things as if, if someone currently takes an object, then you can give a record, and if something takes an array, you can give it a tuple. Because as a JavaScript developer, you create and access these things very much like objects and arrays and if you're indexing into them the literal syntax is very similar. ACE: So there is a current PR that's open, which is all the other places and 262, where we need to make a slight tweak to be effective. you do like a control F and you look for all the places where we say, “if type of argument is object, do this thing” in a lot of places that is, is it object? Then it's checking if it's callable else in that case those records won't ever be callable Get in the other places in its checking. I guess a very trivial place for this is the new error clause, options bag. It's checking if something is an object and then it's just doing a get for calls and And so could we think of being very friendly. Someone could just pass in a record with a calls property with then some primitive cause there are other places like the second argument to JSON.stringify where you can give it array of which Keys should actually be. Clank preserved in the actual JSON produced and the Order of those keys again that's that's place currently you know, it's doing a check “Is this an array?” We think it should also then accept a tuple. ACE: And the main thing, which is more of a chore, is that fact that "Record" is already a thing inside the spec. And this is the case, the same for web IDL of aspects. It turns out the word "record" is a very popular word to describe this type of structure. Our plan is to, you know, work in each of these places to make sure it's completely unambiguous when you're talking about the actual Ecma primitive language value record vs the spec internal record. -ACE: so, some things that kind of are still open issues, even though the Champions group has a stance on this, so one is the wrapper objects of these whether or not they should be frozen in the current spec they are Frozen. So to be clear what you're talking about is you know, if you even while you can't see new hold on Newt up or it's like symbol that will throw it's still possible to get the actual object wrappers if you do as this example is here, if you have a sloppy function and you then call it so that the receiver is now primitive. That's an implicit kind of like passing that to the object Constructor or if you explicitly pass these things to the object Constructor as a call. So in that in those kind of cases, we think that, unlike all the other Primitives that are extensible, in this case we think it's important that because they're – because the entire raison d'etre of these Primitives is that they are immutable, especially for record where it's entire purpose is it's a bag of string/keys, it shouldn't be extensive or even when you've got the gun, object or a perversion I think is, especially true, because of the fact that if you the record, do is record API. 
If you pass that object wrapper to it, it is saying “true”. And we think it would be kind of surprising quirk of the language. If something could say, yes, this is record, but then, in the very next line you could extend it. You feel like that's just kind of a wet moment that we would introduce so, yes, at this kind of the saying what I just said approaching cancer, slide do it. Yeah. We'd like to kind of break with tradition and have Frozen exotic wrappers because we make this matches the mental model.

+ACE: So, some things that are still open issues, even though the champions group has a stance on them. One is the wrapper objects for these, and whether or not they should be frozen; in the current spec they are frozen. To be clear what we're talking about: even though `new Tuple()` will throw, like Symbol, it's still possible to get the actual object wrappers. As this example here shows, if you have a sloppy-mode function and you call it so that the receiver is a primitive, that is implicitly like passing it to the Object constructor, or you can explicitly pass these things to the Object constructor as a call. In those kinds of cases we think that, unlike all the other primitive wrappers, which are extensible, in this case the wrapper shouldn't be extensible, because the entire raison d'être of these primitives is that they are immutable, especially for record, whose entire purpose is that it's a bag of string keys. Even when you've got the wrapped object version, I think that is especially true because of the `Record.isRecord` API: if you pass that object wrapper to it, it says “true”. And we think it would be a surprising quirk of the language if something could say, yes, this is a record, but then in the very next line you could extend it. That feels like just kind of a weird moment that we would introduce. So, yes, this slide is just saying what I just said: we'd like to break with tradition and have frozen exotic wrappers, because we think this matches the mental model.

RRD: The main thing we want to discuss is use cases for mutable wrappers, because this is something that's not clear to us.

@@ -404,7 +404,7 @@ RRD: but yeah, but without IsConcatSpreadable, essentially congrats, probably wo

ACE: Yeah. I think that makes perfect sense. Yeah.

-JHD: So I have a counter opinion to that, I think that it even if we had note lists. Array concat would have still always spread arrays. I believe that's not controversial. I similarly would always expect it to spread tuples and I would expect tuple concat to spread arrays, and that's unrelated to the protocol, and it sounds like everyone's on board with that as a plan, regardless of how its implemented. I agree that there's no use case for spreading a node list of objects into a tuple, but there's also no use case for passing a node list into concat because that is itself, an object and we'll throw. So it's sort of Irrelevant in the discussion. It’s workable to hard code into both of those methods the behavior. But it seems weird like I think that Is concat spreadable is the mechanism that the language chose to indicate whether concat spreads it and it just seems unfortunate to me to build in a hard-coded special case solely because we think that a specific symbol protocol is icky.
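For context, a short sketch of how `Symbol.isConcatSpreadable` already steers `Array.prototype.concat` today:

```js
const arrayLike = { length: 2, 0: "a", 1: "b", [Symbol.isConcatSpreadable]: true };
[1].concat(arrayLike);            // [1, "a", "b"] – spread because the symbol says so

const notSpread = Object.assign([2, 3], { [Symbol.isConcatSpreadable]: false });
[1].concat(notSpread);            // [1, [2, 3]] – an array, but opted out of spreading
```

The open question in the discussion is whether `Tuple.prototype.concat` should consult this protocol or hard-code spreading of arrays and tuples.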
+JHD: So I have a counter opinion to that, I think that it even if we had note lists. Array concat would have still always spread arrays. I believe that's not controversial. I similarly would always expect it to spread tuples and I would expect tuple concat to spread arrays, and that's unrelated to the protocol, and it sounds like everyone's on board with that as a plan, regardless of how its implemented. I agree that there's no use case for spreading a node list of objects into a tuple, but there's also no use case for passing a node list into concat because that is itself, an object and we'll throw. So it's sort of Irrelevant in the discussion. It’s workable to hard code into both of those methods the behavior. But it seems weird like I think that Is concat spreadable is the mechanism that the language chose to indicate whether concat spreads it and it just seems unfortunate to me to build in a hard-coded special case solely because we think that a specific symbol protocol is icky. ACE: Like, I said, we could expand on icky. Potentially, I'm trying to run because I remember those a blog post recently made in the last year where Something hit, massive performance, Cliff purely because it touched symbol is can cat spreadable, which then, in I think V8 was one of these kind of if you touch that invalidates a lot of assumptions. So it’s more so than just feeling icky and that it really does have ecosystem concerns @@ -420,7 +420,7 @@ DE: Well, you could say the same about species about toPrimitive and I think the RRD: I guess this goes a bit further than the decisions that we would like to take today. What about we propose a PR that would hard code instead of using isConcatSpreadable and from there, we could see how it looks like. I personally, would like to see the difference in Spec text because yeah, -DE: yeah. Sounds like a good way forward. +DE: yeah. Sounds like a good way forward. [ Other queue items for isConcatSpreadable @@ -454,7 +454,7 @@ JHD: I will continue to try and write something down and supply that I wanted to RRD: Yeah, that's awesome specifically because you will be reviewing bottom up -WH: `Record.isRecord` and `Tuple.isTuple` returning true for wrappers bothers me because it allows you to generate an unlimited number of identical records which are all not === to each other, violating the value semantics that records and tuples provide. If I read the spec correctly, === comparing a wrapper with the record it’s wrapping returns false, and likewise for two wrappers of the same record. +WH: `Record.isRecord` and `Tuple.isTuple` returning true for wrappers bothers me because it allows you to generate an unlimited number of identical records which are all not === to each other, violating the value semantics that records and tuples provide. If I read the spec correctly, === comparing a wrapper with the record it’s wrapping returns false, and likewise for two wrappers of the same record. ACE: Yes. @@ -480,7 +480,7 @@ ACE: So it's the detection facility that it would not be provided by an event fu JHD: oh yeah. So and that is true. But try catching as a slot detection mechanism is by and large a legacy from es6 that we have not continued, we haven't added anything since es6 as Well, but it's slow and not ergonomic. And I think it's a It would be pretty unfortunate if that was our way forward. -RRD: I think we're going to go ahead of time before I already over time. Yeah, yeah. +RRD: I think we're going to go ahead of time before I already over time. Yeah, yeah. 
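For illustration, a sketch of the wrapper behaviour under discussion (current spec draft plus the champions' frozen-wrapper stance; this is still an open issue):

```js
const rec = #{ a: 1 };
const wrapped = Object(rec);   // the exotic wrapper object
Record.isRecord(wrapped);      // true in the current draft
Object.isFrozen(wrapped);      // true under the champions' proposed semantics
wrapped === rec;               // false – the wrapper is an object with its own identity
Object(rec) === Object(rec);   // false – WH's point: unlimited non-equal wrappers for one record
```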
DE: I guess just quickly for my queue item Jordan if you could follow up with something written about your use case, @@ -517,7 +517,7 @@ LCA: To quickly go over the proposal scope because it has changed since last tim GB: Just to recap some of the changes since last time. Previously we had the syntax with "as module" at the end of the import statement where it was an arbitrary string. Based on further feedback that we had on that we have now moved to using a direct reflection keyword in the syntax and that was through discussions in the module calls that we've been having. And this means that the module keyword can be explicitly defined as opposed to defining more generic reflection mechanics with this arbitrary string functionality, but we still look at that. Keyword, as being the reflection keyword, so that other keywords in future can follow the same pattern that we're creating and we can also consider these other types of reflection. So what we're doing, In is we're very much thinking about doing it in a way that can extend to other use cases of the future, but at the same time, narrowing our Focus to this particular syntax. So when Luca mentioned the asset references proposal everything we've done is fully compatible with that. you could replace the module import reflection keyword with an asset keyword the same Syntax would work that you can import assets and on Dynamic employed, you could have reflect asset as well. So we feel that we've kind of thought about it generally and we have a convention that can work for lots of different things and expand things but then we don't actually need to Define it as a generic mechanic; we can just very explicitly specify the exact syntax for our purposes here. -GB: Just to reiterate the exact webassembly use case, because I think it quite often comes up what exactly are the economics. And the kind of interactions that play, this is this would basically be the recommended way of loading webassembly today. And it there's a lot of stuff going on here and it's a long for You're not familiar with all the equal, the details to reply. So, webassembly.compileStreaming takes a fetch response and gives you back a compiled object. Then you have this new URL pattern to do a portable import. And finally, you're getting our to your module. So I mean, there is a lot of different things going here. I just want to break down some of the pieces. So the first point to note is that even with the webassembly ES module integration. There will always be a need to directly get access to these webassembly dot module objects because of the fact that in most cases you need to perform additional instrumentation around the module. That wouldn't necessarily be possible directly in the ES/wasm module integration. and if you make one small mistake that syntax, You are going to have some issues. So for example, if you don't use the new URL, specifier comma import metadata URL pattern. Then you creating something that works, but it's not going to be portable as a library, and it's not going to relocate. Well, and you're going to need to have some out-of-band configuration mechanism for users to point to the webassembly location. And as soon as you do that, you do static analysis that you no longer know what you're importing and so so that's that's a really strong motivation and problem that webassembly, you're losing the information of what you're actually. executing which is what the EAS module system gives us in the first place. 
has its a lot of information about what's executed. If you run on platform that doesn't provide a fetch function, then it's just not going to work. And then you have to have these branching statements to then lower the platform-specific file loading APIs. And then not use webassembly compile streaming, but then just use webassembly compile or something, I always forget which one, there's a whole bunch of APIs. And then you have to fall down into these Alternatives and before long, that function, that was already very complicated, starts getting split up into 20 or 30 lines of code that is all very custom, very manual and doing a lot of things. Iif you split out the URL, as its own variable and you have a branch on the Fetch and then you're assigning to a fetch function and then your maybe using compile streaming or maybe not depending on which branch you hit, you end up with a lot of variations of This coffee's code paths. So there's this kind of huge explosion of possibilities of how you could write this and we've completely destroyed static analysis. So there is our way that your tooling Even This is complicated to analyze in a build tool to be able to know what webassembly is being executed so that you can actually analyze the execution but once you explode out into all these conditional variations things. It's almost impossible for any tool to know what, what's being executed. +GB: Just to reiterate the exact webassembly use case, because I think it quite often comes up what exactly are the economics. And the kind of interactions that play, this is this would basically be the recommended way of loading webassembly today. And it there's a lot of stuff going on here and it's a long for You're not familiar with all the equal, the details to reply. So, webassembly.compileStreaming takes a fetch response and gives you back a compiled object. Then you have this new URL pattern to do a portable import. And finally, you're getting our to your module. So I mean, there is a lot of different things going here. I just want to break down some of the pieces. So the first point to note is that even with the webassembly ES module integration. There will always be a need to directly get access to these webassembly dot module objects because of the fact that in most cases you need to perform additional instrumentation around the module. That wouldn't necessarily be possible directly in the ES/wasm module integration. and if you make one small mistake that syntax, You are going to have some issues. So for example, if you don't use the new URL, specifier comma import metadata URL pattern. Then you creating something that works, but it's not going to be portable as a library, and it's not going to relocate. Well, and you're going to need to have some out-of-band configuration mechanism for users to point to the webassembly location. And as soon as you do that, you do static analysis that you no longer know what you're importing and so so that's that's a really strong motivation and problem that webassembly, you're losing the information of what you're actually. executing which is what the EAS module system gives us in the first place. has its a lot of information about what's executed. If you run on platform that doesn't provide a fetch function, then it's just not going to work. And then you have to have these branching statements to then lower the platform-specific file loading APIs. 
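For illustration, a condensed sketch of the kind of branching being described (the module path and the fallback are illustrative):

```js
let mod;
const wasmURL = new URL("./lib.wasm", import.meta.url);
if (typeof fetch === "function") {
  mod = await WebAssembly.compileStreaming(fetch(wasmURL));
} else {
  // platform-specific fallback, e.g. reading from disk
  const { readFile } = await import("node:fs/promises");
  mod = await WebAssembly.compile(await readFile(wasmURL));
}
// versus the proposed syntax, which keeps the import statically analyzable:
//   import module mod from "./lib.wasm";
```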
And then not use webassembly compile streaming, but then just use webassembly compile or something, I always forget which one, there's a whole bunch of APIs. And then you have to fall down into these Alternatives and before long, that function, that was already very complicated, starts getting split up into 20 or 30 lines of code that is all very custom, very manual and doing a lot of things. Iif you split out the URL, as its own variable and you have a branch on the Fetch and then you're assigning to a fetch function and then your maybe using compile streaming or maybe not depending on which branch you hit, you end up with a lot of variations of This coffee's code paths. So there's this kind of huge explosion of possibilities of how you could write this and we've completely destroyed static analysis. So there is our way that your tooling Even This is complicated to analyze in a build tool to be able to know what webassembly is being executed so that you can actually analyze the execution but once you explode out into all these conditional variations things. It's almost impossible for any tool to know what, what's being executed. LCA: And I want to mention real quick, that this is not a hypothetical concern. There's like this is the actual output of a build tool, I don't know if you can see this. These really this 40 lines of code, which essentially does this. There's a bunch of different branches here doing different types and fetching different webassembly instantiations. A different build tool here generates again similar output with a bunch of different branches. This is not the entirety of the code. There's more in this file. Like this is a very non-trivial amount of code. @@ -549,7 +549,7 @@ SYG: Yeah. So I find Luca’’s analogy mostly correct. like, I think JHD, if y JHD: Like it's still import something, right? It's like a function return value that. It's still the function could me anything. -SYG: Right. So module reflection doesn't exist yet. So what is being proposed here? Is that what you could get from a true reflection? It's not completely arbitrary. It's it's like narrow down to be completely arbitrary but it's not so narrow down as to be always exactly the same thing but that's part of the proposal, I'm not sure why would have the intuition that it ought to be like. It doesn't seem like an argument to If it says if the argument is what I heard was its syntax, therefore should always returning. Same thing. Default export is also syntax, but you have already internalized it does not return the same thing because that's how it works. I don't see why that's an argument this should not be done. There are other arguments you could make for why it should not be a narrow (?), but the fact that it is syntax isn't an argument for that. +SYG: Right. So module reflection doesn't exist yet. So what is being proposed here? Is that what you could get from a true reflection? It's not completely arbitrary. It's it's like narrow down to be completely arbitrary but it's not so narrow down as to be always exactly the same thing but that's part of the proposal, I'm not sure why would have the intuition that it ought to be like. It doesn't seem like an argument to If it says if the argument is what I heard was its syntax, therefore should always returning. Same thing. Default export is also syntax, but you have already internalized it does not return the same thing because that's how it works. I don't see why that's an argument this should not be done. 
There are other arguments you could make for why it should not be a narrow (?), but the fact that it is syntax isn't an argument for that. JHD: Yeah, so I agree with you. The way I phrased. It was, didn't make sense for the reasons you said, it's not simply that it's produced by syntax. its that, if I, if my mental model is meant to be, this is generic reflection around. the module which will tell me information about the module which may or may not contain wasm specific things. But like some problem, presumably, presumably there's some Universal things that module reflection will have. And then there's some extra stuff that an individual module might have and so on that in general, seems fine to me, is the like inheritance hierarchy of, this is an instance of a special constructor, that's not in the language spec, like all that stuff. Seems really strange to me. We could have had import.meta be something like that, or the module namespace object be something like that. that. But like instead it's a very tightly specified normal thing or consistently exotic thing and so it's all the capabilities is trying unlock. I had that seems great to me and bringing WASM and JS closer together is great to me and acknowledging them wasm is not any random other language. It is a special sibling of JavaScript or something, or cousin, or whatever that also seems great to be I'm but I don't get a good vibe from getting a like this magic instance. I'm sure there's other designs that would be worth exploring, maybe they'll end up not being worth it. And this current design is better. But I think it's worth Exploring that further, whether that's prior to or within stage two, I don't know what will be most appropriate, but like it just feels very strange to me. And also the general existential thing. I can say I'm concerned about with any proposal is that, if it leaves the gate too far open, might never be able to tighten it again, or do I like close it again? And so I am always interested in maximally restricting, what can be done. and then as soon as we find there's a re the, you know, with the paths to loosen it. So that as we're told that there, Use case, we can provide for it. @@ -714,7 +714,7 @@ SYG: but I think I've said my piece on my concerns about the motivation for laye KKL: Again, that that is as close as I could ever hope to receive in terms of roaring positive feedback. -MM: I think there's everything that KKL said is correct but without but there's a particular concept that I think needs to be mentioned, needs to be explained in order to address the objections, that SYG is raising, which is that we know how to, in user code, freeze all the primordial Zone we have been very careful in TC39 to keep any hidden State out of the primordials and to keep any hidden powers out of the primordial. Those so when when KKL and CP and others as Well, as moddable and MetaMask all, say the compartments Can can compartments or evaluators that enable us to build compartments can be used for isolation, If you don't freeze the globals then everything SYG says is exactly correct. It is completely incoherent to use compartments or evaluators as a isolation mechanism with any interesting guarantees if they're they're all sharing same primordial. And those primordials All as mutable as they start out, so it just want to clarify. 
That what we found is that in building shims walking all the primordials down ourselves, is not that painful, and therefore, that's part of what we have in mind when we talk about actually using this as an isolation mechanism in user code. And using it for in particular least Authority linkage for dealing with supply chain attacks. And Realms are realistic for least Authority linkage of packages with each other in order do to give them separate initial authorities. because having packages assumed object contact, operating through realm boundaries is going to be more pain than we will typically want to bet. +MM: I think there's everything that KKL said is correct but without but there's a particular concept that I think needs to be mentioned, needs to be explained in order to address the objections, that SYG is raising, which is that we know how to, in user code, freeze all the primordial Zone we have been very careful in TC39 to keep any hidden State out of the primordials and to keep any hidden powers out of the primordial. Those so when when KKL and CP and others as Well, as moddable and MetaMask all, say the compartments Can can compartments or evaluators that enable us to build compartments can be used for isolation, If you don't freeze the globals then everything SYG says is exactly correct. It is completely incoherent to use compartments or evaluators as a isolation mechanism with any interesting guarantees if they're they're all sharing same primordial. And those primordials All as mutable as they start out, so it just want to clarify. That what we found is that in building shims walking all the primordials down ourselves, is not that painful, and therefore, that's part of what we have in mind when we talk about actually using this as an isolation mechanism in user code. And using it for in particular least Authority linkage for dealing with supply chain attacks. And Realms are realistic for least Authority linkage of packages with each other in order do to give them separate initial authorities. because having packages assumed object contact, operating through realm boundaries is going to be more pain than we will typically want to bet. KKL: So with what this concretely looks like, terms of an implementation is that given the existence of an evaluators primitive if you wanted to isolate a particular module such that, it did not have the ability to reach any powerful mutable objects as you would construct, a global object for the contains only Frozen ,deeply, Frozen intrinsics, and any powers that you expressly wish to granted. @@ -742,11 +742,11 @@ DE: yeah, I find the mocking or virtualizing the global environment use case ver KKL: I like what you said and about the function of an epic, I think that the function of an epic is to allow us to co-evolve multiple proposals such that such that any individual proposal does not preclude a later proposal in that layering and to that end having these features at the end of it is although they might Advance separately having them in the same epic in order make sure that a change of the lower layer does not preclude. thing that occurs in another layer is, I think a useful function for epics How's the cube? -RPR: So I'm glad you're talking about epics on modules. I think there's more value to come from modules. And we've talked a bit about virtualization and the benefits there for isolation. And the benefits for flexibility. And DE & MM referenced developer productivity - cases of using test Frameworks and so on. 
I would say if we're about to do a big push on modules, to make them great in the spec, the elephant in the room, or the elephant in the spec, with modules, is that at the moment in the industry, if you look at the main place where JavaScript libraries exist, which is npm, if you look at the main runtime that people use on the server side, which is node, there adoption of ES Modules is very small. It has been very slow to make progress and even just maybe in the last month, there was a thread on the Node project of "shall we recommend that developers use ES modules in future?" and they weren't able to come to the answer of "yes". So I feel like if we're going to put some work into making modules great, it would be very useful if we can connect with the node community and see if any of the things we're proposing might lead to a greater industry uptake of modules. +RPR: So I'm glad you're talking about epics on modules. I think there's more value to come from modules. And we've talked a bit about virtualization and the benefits there for isolation. And the benefits for flexibility. And DE & MM referenced developer productivity - cases of using test Frameworks and so on. I would say if we're about to do a big push on modules, to make them great in the spec, the elephant in the room, or the elephant in the spec, with modules, is that at the moment in the industry, if you look at the main place where JavaScript libraries exist, which is npm, if you look at the main runtime that people use on the server side, which is node, there adoption of ES Modules is very small. It has been very slow to make progress and even just maybe in the last month, there was a thread on the Node project of "shall we recommend that developers use ES modules in future?" and they weren't able to come to the answer of "yes". So I feel like if we're going to put some work into making modules great, it would be very useful if we can connect with the node community and see if any of the things we're proposing might lead to a greater industry uptake of modules. KKL: Yes, absolutely. One of the one of the things that we've been up to, for the last two years is answering the question for ourselves is: do compartments provide a bridge from commonjs to esm because that is what is missing is well there are a number of bridges, not all of them are the same. Not all of them work in the same way, but for but the idea of making a sufficiently large subset of existing cjs usable as a transitive dependency of an ESM project in a meaningful way. For the cases where it makes most sense which notably are not just running on the back end but also running on the front end and what, what we've done at Agoric and with help from fromfolks at MetaMask consensus is build an object at the number of number of the layer 3. Even number two here, that allows us to make an opinionated and opinionated binding to CommonJS. that allows most common J's and we've been working to maximize what we mean most, for to participate in this particular loader and the neat thing about this loader is that it's an ESM loader, which means that it's asynchronous, which means it depends on. It depends on the static and of the, the aesthetic, analyzability of the module and common was Is intended to be statically analyzable to the extent that that was useful. And it doesn't matter what we intended or what we wrote in the commonjs spec, which does say that the argument of require must be a string. No one has to follow that rule but they do have to follow that rule. 
If they have a prayer of using browserify, or webpack in their library. And so there's been this sort of like, you know, like the moon's that make rings though, the shepherd's rings as a there. Our the the bundling ecosystem is a Shepherd for the commonJS's. Ecosystem that puts it in a position where the vast bulk of common J's can be loaded in an asynchronous loader and then bundled in the captured and Etc and statically analyzed. So we took Guy Bedford's lexical static analysis, tool for that he built for node extended it, so that it can do the thing that no doesn't just to say, analyzing the Imports as well as the exports, to recap. Guy wrote a tool that does a static lexical lexical. Analysis of the commonJS module in order to figure it out, its named exports. So that name that exports can work better node more like what you can get with Babel and node because it's using a synchronous common JS, loader that. Cannot break off from choosing not to solve the import side, but we decided to take that and extend it so that it can do the import side and have commonJS is a narrower subset commonJS, Admittedly, that's able to participate in this framework. And yet again that depends on the ability to virtualize. third-party module types. And the nice thing about doing it in a way that is defined in user code instead saying, hey, We need to bring commonJS into 262, which I would never say. It is a taint that we do not need to bring into these halls. We do need a way to make. We do need a way for specific applications to make decision opinionated decisions about what subset of the common JS Ecosystem, they want to lift into the ESM ecosystem and this framework allows us to us to do that. -RPR: Yes, I think my two points here are that, we should reach out and see if that proposal can address that gap. And if we find it does, then I think that these proposals may then have significantly more value to the community and more people will find them compelling. +RPR: Yes, I think my two points here are that, we should reach out and see if that proposal can address that gap. And if we find it does, then I think that these proposals may then have significantly more value to the community and more people will find them compelling. KKL: Absolutely, so it doesn't if forth for the record we would very much like to produce a to involve folks from who are involved. And in in the evolution of the node.js node.js module loader, GB himself, who has joined us among the co-champions four compartments and it's also a champion of other module Harmony proposals and we need, we need their voices. @@ -768,9 +768,9 @@ RPR: The point on jest yes, KKL: You mentioned instrumentation. suspect that the layer number 2 that is the third layer. I really should not have used zero. is would be useful to the end of instrumenting esm. But what to say that you could construct a module instance, knowing the bindings of another source and created an adapter. -CP: So the way I see these is when it comes to solving the interoperability issues. You probably can go very far when coming from CJS point of view, with layer one reflection mechanism. This is up for discussion. Obviously, we need to include the ability to access the hoist functions that are declared as export values. You can get very far but we haven't got to that part of that discussion. It's okay, okay. That's interesting. You will still need to figure out what to do with the TDZ though. You can use layer one from CJS and get very far. 
In the case of pulling from ESM, importing from commonjs, you definitely need layers 0 and 2. You have to be able to create virtual modules that represent whatever the exports saying that you have. You are only going to come halfway there because obviously if you have values that are set onto exports later on, they are not going to be qualifying. You still get very far with these three layers, I believe. And that's why we have been pushing on getting these things nailed down on these three layers that are the most important because they create a foundation for a bunch of other things.

+CP: So the way I see this, when it comes to solving the interoperability issues: you can probably go very far, coming from the CJS point of view, with the layer one reflection mechanism. This is up for discussion. Obviously we need to include the ability to access the hoisted functions that are declared as export values. You can get very far, but we haven't got to that part of the discussion yet, which is interesting. You will still need to figure out what to do with the TDZ, though. So you can use layer one from CJS and get very far. In the case of pulling from the ESM side, importing from CommonJS, you definitely need layers 0 and 2: you have to be able to create virtual modules that represent whatever exports you have. You are only going to get halfway there, because obviously if you have values that are set onto exports later on, they are not going to qualify. But you still get very far with these three layers, I believe. And that's why we have been pushing on getting these things nailed down on these three layers; they are the most important because they create a foundation for a bunch of other things.

-SFC: Yeah, my comment regarding the question about ESM adoption. One problem that's that I've been experiencing a lot, and you can also talk to my intern Quinn about this, is WebAssembly ESM. LCA's proposal that was presented earlier is, I think, a really big step in the right direction. But there is not currently a module loader that does a very good job with WebAssembly. If ESM would be the standard for how you should do WebAssembly modules, I think that would really drive adoption, because that's currently a pain point. And if we can solve that pain point, I think that would be a nice thing to focus on as a priority.

+SFC: Yeah, my comment is regarding the question about ESM adoption. One problem that I've been experiencing a lot, and you can also talk to my intern Quinn about this, is WebAssembly ESM. LCA's proposal that was presented earlier is, I think, a really big step in the right direction. But there is not currently a module loader that does a very good job with WebAssembly. If ESM were the standard for how you should do WebAssembly modules, I think that would really drive adoption, because that's currently a pain point. And if we can solve that pain point, I think that would be a nice thing to focus on as a priority.

CP: A side note on that: we're really not shooting for a loader anymore. The way we think about it (at least the way I'm thinking about it) is that we have been struggling for 10 years trying to create what I'd call a parameterized artifact, or parameterized API, that allows you to do everything, and we're trying to escape that trap by converging on low-level APIs that allow you to construct whatever artifact you want, one that can act as a loader in a cohesive way for a module graph or a segment of it.
Just as a side note, there is no such thing as a loader anymore. diff --git a/meetings/2022-07/jul-20.md b/meetings/2022-07/jul-20.md index 082fc908..8bf6519b 100644 --- a/meetings/2022-07/jul-20.md +++ b/meetings/2022-07/jul-20.md @@ -2,7 +2,7 @@ ----- -**In-person attendees:** +**In-person attendees:** | Name | Abbreviation | Organization | | -------------------- | -------------- | ------------------ | | Robin Ricard | RRD | Bloomberg | @@ -29,104 +29,103 @@ Day 1 <- yesterdays notes -CERTAIN PARTS OF THE NOTES ARE GOING TO BE WRITTEN BY THE ATTENDEES OR EDITING, IT WILL REQUIRE KIND OF HUMAN EDITING AND CONFIRMATION. bUT STILL, TRANSCRIBING EVERYTHING THAT WE ARE SAYING IN that section leading up to that will be good. +CERTAIN PARTS OF THE NOTES ARE GOING TO BE WRITTEN BY THE ATTENDEES OR EDITING, IT WILL REQUIRE KIND OF HUMAN EDITING AND CONFIRMATION. bUT STILL, TRANSCRIBING EVERYTHING THAT WE ARE SAYING IN that section leading up to that will be good. Transcribed. -tHAT IS THE STANDARD WAY OF DOING IT. oKAY. +tHAT IS THE STANDARD WAY OF DOING IT. oKAY. I will do it small here. -You know, if it’s not too difficult to do it in kind of natural casing, then that would be nice. But it’s understandable, but that’s the standard way of doing the text. +You know, if it’s not too difficult to do it in kind of natural casing, then that would be nice. But it’s understandable, but that’s the standard way of doing the text. -Yeah, I think she has it turned on so it’s – it’s in normal text here, it looks like. +Yeah, I think she has it turned on so it’s – it’s in normal text here, it looks like. -?: okay. Perfect. So . . . my understanding from our conversation before, is that if you don’t know who is talking, you will put in +?: okay. Perfect. So . . . my understanding from our conversation before, is that if you don’t know who is talking, you will put in ?: SPEAKER: -?: we have them in there right now, and we can put in either speaking or just chevrons. Because we just got that name list, and there’s no way we can even get remotely close to doing that today. +?: we have them in there right now, and we can put in either speaking or just chevrons. Because we just got that name list, and there’s no way we can even get remotely close to doing that today. -?: right. +?: right. -?: okay. Yeah. Then we will – we will work on filling that in. +?: okay. Yeah. Then we will – we will work on filling that in. -?: okay. Great. +?: okay. Great. -?: one thing . . . Dan, would you prefer, like, chevron speaker colon or just chevrons? +?: one thing . . . Dan, would you prefer, like, chevron speaker colon or just chevrons? -DE: we – so what we usually do for the final version of the notes is, you know, we have the three letter acronyms, you know, that there’s no time to use those for today. 3 letter acronym, colon and the comment. So if you could +DE: we – so what we usually do for the final version of the notes is, you know, we have the three letter acronyms, you know, that there’s no time to use those for today. 3 letter acronym, colon and the comment. So if you could -?: I can do an audio check at any time. +?: I can do an audio check at any time. -DE: can people go around the room and say, hello. I work at Bloomberg -The audio needs to be turned up in order to hear everything +DE: can people go around the room and say, hello. I work at Bloomberg The audio needs to be turned up in order to hear everything -?: the last one was quite inaudible. +?: the last one was quite inaudible. -?: hi, I am Daniel. 
From Mozilla. +?: hi, I am Daniel. From Mozilla. -?: pretty good. +?: pretty good. -?: who are you? Brian? +?: who are you? Brian? -BT: hi, I am Brian I work for Microsoft. +BT: hi, I am Brian I work for Microsoft. -RPR: hi. I am Rick. I work for Bloomberg and one of the cochairs of the meeting so you may hear me meeting a little more than others. +RPR: hi. I am Rick. I work for Bloomberg and one of the cochairs of the meeting so you may hear me meeting a little more than others. DE: Julia, who are you -YSV: that is an excellent question. Hi, I am Julia. I work for Mozi +YSV: that is an excellent question. Hi, I am Julia. I work for Mozi -DE: okay. So . . . +DE: okay. So . . . -YSV: I have a question. Are we doing a test for the note-taker. +YSV: I have a question. Are we doing a test for the note-taker. -DE: so now we have professional captioners with us. And the first presentation of the day will be to review the motivation for this and open it up briefly to questions with them. Before going on to two hours of captioning support. And then, you know, they will leave. And we will decide whether we want to continue with this for future meetings +DE: so now we have professional captioners with us. And the first presentation of the day will be to review the motivation for this and open it up briefly to questions with them. Before going on to two hours of captioning support. And then, you know, they will leave. And we will decide whether we want to continue with this for future meetings -YSV: Okay. And the process will be forwarding to get a quick preview, it’s going to be our current bot-based note-taking or full note-taking? +YSV: Okay. And the process will be forwarding to get a quick preview, it’s going to be our current bot-based note-taking or full note-taking? -DE: they are taking the notes. We will have to fill in names because I failed to send the name list to the captioners before the event. Hopefully in the future meetings, it will be possible for people to – for the names to be part of this as well. +DE: they are taking the notes. We will have to fill in names because I failed to send the name list to the captioners before the event. Hopefully in the future meetings, it will be possible for people to – for the names to be part of this as well. -YSV: okay. +YSV: okay. -DE: and I will explain more in the presentation. I wanted to ask people, the first time they talk, to identify, just say their name and the three-letter acronym in this topic +DE: and I will explain more in the presentation. I wanted to ask people, the first time they talk, to identify, just say their name and the three-letter acronym in this topic -YSV: that sounds great. That’s a good idea +YSV: that sounds great. That’s a good idea DE: and that is a practice going forward so everything will start to catch on MF: should we ask Kevin to have the transcription about simultaneously into a separate document -DE: in my opinion, no. I am watching the transcription, and the bot needs a lot of human help anyway, so . . . I don’t – I don’t think that will be a good use of anyone’s time, to provide that human help, given this. +DE: in my opinion, no. I am watching the transcription, and the bot needs a lot of human help anyway, so . . . I don’t – I don’t think that will be a good use of anyone’s time, to provide that human help, given this. -MF: okay. +MF: okay. -?: yeah. A person. +?: yeah. A person. -?: okay. +?: okay. ?: someone in the room -?: no. They are on the Internet. So Duane O’Geil. 
Am I pronouncing the name properly? +?: no. They are on the Internet. So Duane O’Geil. Am I pronouncing the name properly? -?: no one ever does. It’s O’giel. +?: no one ever does. It’s O’giel. -DE: he runs a transcription company and we are working with somebody in his firm to do this, and this is sponsored by Bloomberg. +DE: he runs a transcription company and we are working with somebody in his firm to do this, and this is sponsored by Bloomberg. -?: awesome. [inaudible] +?: awesome. [inaudible] -DE: that transcribed as [inaudible]. +DE: that transcribed as [inaudible]. -Duane O’Giel: I think one of the key things beings just to note for everyone here too, when you are speaking, remember to be close to your mic. And also, allow others to finish their instead ofer sentence. It just makes it a little easier for the captioner to get the information in there properly. +Duane O’Giel: I think one of the key things beings just to note for everyone here too, when you are speaking, remember to be close to your mic. And also, allow others to finish their instead ofer sentence. It just makes it a little easier for the captioner to get the information in there properly. -DE: I wanted to request that you interrupt us, maybe through a message in the chat or just a message in the document when things are difficult to hear, so that it can be, you know, reported to the committee. Would that be okay? +DE: I wanted to request that you interrupt us, maybe through a message in the chat or just a message in the document when things are difficult to hear, so that it can be, you know, reported to the committee. Would that be okay? -Duane O’Giel: yeah. I think one of the things that you will see that the writer will do is, they will put up [inaudible]. +Duane O’Giel: yeah. I think one of the things that you will see that the writer will do is, they will put up [inaudible]. -DE: okay the great. People watching the notes, if you see that coming up, you can just, you know, shut out or put a point of order on TCQ to say, point of order, we have an [inaudible] comment. So we will repeat this when the presentation happens +DE: okay the great. People watching the notes, if you see that coming up, you can just, you know, shut out or put a point of order on TCQ to say, point of order, we have an [inaudible] comment. So we will repeat this when the presentation happens -?: okay. +?: okay. ## Professional Stenography @@ -135,89 +134,89 @@ Presenter: Dan Ehrenberg (DE) [issue](https://github.com/tc39/Reflector/issues/426) [glossary](https://github.com/tc39/how-we-work/blob/main/terminology.md) -DE: yes. So – so this is about professional support. This is my first presentation as Bloomberg, but it continues a line of work through to the TC39 group. This is an inclusion issue for multiple issues. I am happy to see many of you in San Francisco and remote. The note takers are exhausted. It falls on the same few people. In principle, the notes don’t have to be as extensive as what we have been taking. But in practice, we find there’s a lot of subtle things in the conversation that would get missed by this potentially and have been historically. So it’s important to have full transcriptions for us, for that purpose. Further, this can help, you know, hearing impaired participants follow during meeting as well by following the notes or for whatever reason that the audible version might not work as well. KG made a bot and working on incremental improvements based on speech that detects API. 
But it’s too much work for the human note-takers to keep up with corrections and end up checking in gibberish in the notes. +DE: yes. So – so this is about professional support. This is my first presentation as Bloomberg, but it continues a line of work through to the TC39 group. This is an inclusion issue for multiple issues. I am happy to see many of you in San Francisco and remote. The note takers are exhausted. It falls on the same few people. In principle, the notes don’t have to be as extensive as what we have been taking. But in practice, we find there’s a lot of subtle things in the conversation that would get missed by this potentially and have been historically. So it’s important to have full transcriptions for us, for that purpose. Further, this can help, you know, hearing impaired participants follow during meeting as well by following the notes or for whatever reason that the audible version might not work as well. KG made a bot and working on incremental improvements based on speech that detects API. But it’s too much work for the human note-takers to keep up with corrections and end up checking in gibberish in the notes. -DE: So this is our solution, a professional captioner. And we have a professional person on this call taking the notes right now, into our notes document on line. The flow is like before. The captioner writes the transcript into our Google docs document and corrections can happen from the committee, the other document, both during the meeting and after the meeting. +DE: So this is our solution, a professional captioner. And we have a professional person on this call taking the notes right now, into our notes document on line. The flow is like before. The captioner writes the transcript into our Google docs document and corrections can happen from the committee, the other document, both during the meeting and after the meeting. -DE: So some tips for enabling good transcription: if people can contribute to the glossary, this forms some base material that the captioner can use to assist in understanding what are the technical words that come up. We are using the replacement JS filed used in the bot, but into more human readable form. So please, when you talk, because we are a new group for the captioner, if the first time you speak, you can identify yourself, and your 3-letter acronym, then that will be helpful for the captioner. It might not be sufficient just one time, but if – TC39 delegates can fill in missing names, that’s good for today, at least. +DE: So some tips for enabling good transcription: if people can contribute to the glossary, this forms some base material that the captioner can use to assist in understanding what are the technical words that come up. We are using the replacement JS filed used in the bot, but into more human readable form. So please, when you talk, because we are a new group for the captioner, if the first time you speak, you can identify yourself, and your 3-letter acronym, then that will be helpful for the captioner. It might not be sufficient just one time, but if – TC39 delegates can fill in missing names, that’s good for today, at least. -DE: So we are joined by the IR broadcast captioning company from Duane O’Geil and his staff. They have been taking notes from the beginning of this presentation. Today we have two hours of professional captioning sponsored by Bloomberg. If this experiment work wells, then we plan to ask ECMA for support for ongoing professional captioning. 
This is well within the Ecma bylaws which state that the secretary should prepare the minutes of the meetings. It’s clear that ECMA was always in the loop for that. So I want to make sure, is everyone okay with her being here for the next two hours and later, maybe tomorrow, for example, we can recap and whether we want to continue with the professional captioner. That’s the whole presentation. Do people have any concerns or comments? +DE: So we are joined by the IR broadcast captioning company from Duane O’Geil and his staff. They have been taking notes from the beginning of this presentation. Today we have two hours of professional captioning sponsored by Bloomberg. If this experiment work wells, then we plan to ask ECMA for support for ongoing professional captioning. This is well within the Ecma bylaws which state that the secretary should prepare the minutes of the meetings. It’s clear that ECMA was always in the loop for that. So I want to make sure, is everyone okay with her being here for the next two hours and later, maybe tomorrow, for example, we can recap and whether we want to continue with the professional captioner. That’s the whole presentation. Do people have any concerns or comments? -BT: no, none in the queue at this time. But we can give it a couple of seconds to see if anyone is typing +BT: no, none in the queue at this time. But we can give it a couple of seconds to see if anyone is typing -DE: SPEAKER: We have Duane on the line if anyone has any questions. +DE: SPEAKER: We have Duane on the line if anyone has any questions. -JHD: I have a question. So the – for mark down, the style of things in the notes needs to end up a certain way. But obviously, that may not be, like, that would be disruptive to the process of taking the notes. So is there – is that something we should take on ourselves or feedback we can provide for the stenographer to clean it up after or what are your thoughts +JHD: I have a question. So the – for mark down, the style of things in the notes needs to end up a certain way. But obviously, that may not be, like, that would be disruptive to the process of taking the notes. So is there – is that something we should take on ourselves or feedback we can provide for the stenographer to clean it up after or what are your thoughts -DE: the answer is both. For today, I didn’t give advance instructions about how to do the mark down formatting. Over time, maybe this is something that they could be writing things a little bit more in that format or fixed up afterwards. But it will take time to, to, you know, have this conversation about the format. Duane, do you have any more comments on this +DE: the answer is both. For today, I didn’t give advance instructions about how to do the mark down formatting. Over time, maybe this is something that they could be writing things a little bit more in that format or fixed up afterwards. But it will take time to, to, you know, have this conversation about the format. Duane, do you have any more comments on this -Duane O’Geil: I think what you just said sums it up quite well. Once we have better information, understand the conversations and such, we can, you know, definitely adapt to that. However, you know, coming into this, as newbies, it’s going to be more difficult. So we will just have straight text. If this is something that does proceed in the future, we can certainly work to understanding what exactly it is that you need, and trying to incorporate that moreso into everything that we provide. 
+Duane O’Geil: I think what you just said sums it up quite well. Once we have better information, understand the conversations and such, we can, you know, definitely adapt to that. However, you know, coming into this, as newbies, it’s going to be more difficult. So we will just have straight text. If this is something that does proceed in the future, we can certainly work to understanding what exactly it is that you need, and trying to incorporate that moreso into everything that we provide. -JHD: and we will presumably have some feedback channel set up for having – for conveying that. +JHD: and we will presumably have some feedback channel set up for having – for conveying that. -DE: yeah. I have an email thread with them and we can be in touch. +DE: yeah. I have an email thread with them and we can be in touch. BT: a couple queue items, Michael: -MLS: our desire to take exhaustive notes may not be reasonable. Other TCs don’t do that. MLS, Michael. I know other TCs, they record salient points of the discussion, and the result of the discussion, it may be the case that it’s just not reasonable for us to continue the process. And I understand it is difficult for the note-takers or the bot fixers or whatever you want to call them +MLS: our desire to take exhaustive notes may not be reasonable. Other TCs don’t do that. MLS, Michael. I know other TCs, they record salient points of the discussion, and the result of the discussion, it may be the case that it’s just not reasonable for us to continue the process. And I understand it is difficult for the note-takers or the bot fixers or whatever you want to call them -DE: so I addressed that in the beginning of the presentation. Why isn’t it reasonable if we have a technical path forward with this sort of support? +DE: so I addressed that in the beginning of the presentation. Why isn’t it reasonable if we have a technical path forward with this sort of support? -MLS: because even with the stenographer, I don’t think we are going to fully capture everything without people going back and editing what has been recorded. Especially when we have – especially when we have multiple people involved in a conversation which occasionally happens +MLS: because even with the stenographer, I don’t think we are going to fully capture everything without people going back and editing what has been recorded. Especially when we have – especially when we have multiple people involved in a conversation which occasionally happens -DE: to clarity, this is – IR broadcast captioning has been flexible enough to work with our existing work flow. They are typing into Google docs. People who want to edit, can, but they will have more accurate base material than they did with the bot. +DE: to clarity, this is – IR broadcast captioning has been flexible enough to work with our existing work flow. They are typing into Google docs. People who want to edit, can, but they will have more accurate base material than they did with the bot. -MLS: I am not sure you’re understanding my point. +MLS: I am not sure you’re understanding my point. -DE: we would have humans editing the notes for some of the reasons they do today, to get – to make sure the points fully accurate, but they will be working with a base of more accurately transcribed material. I disagree with you that it’s feasible for us to choose the salient points. When looking at the past notes of TC39, important points have been left out by attempts to only list the salient points. 
It’s hard to read some of the notes from back – from back then. I think it’s person for our process to have this more fully captioned +DE: we would have humans editing the notes for some of the reasons they do today, to get – to make sure the points fully accurate, but they will be working with a base of more accurately transcribed material. I disagree with you that it’s feasible for us to choose the salient points. When looking at the past notes of TC39, important points have been left out by attempts to only list the salient points. It’s hard to read some of the notes from back – from back then. I think it’s person for our process to have this more fully captioned -MLS: let me continue what I was going to say, when you replied. I think – I think a speaker could register that there’s certain points they want captured in the notes. And if they do that, it reduces the amount of note-taking that needs to happen. +MLS: let me continue what I was going to say, when you replied. I think – I think a speaker could register that there’s certain points they want captured in the notes. And if they do that, it reduces the amount of note-taking that needs to happen. -DE: okay. That’s an interesting idea. +DE: okay. That’s an interesting idea. -BT: we have a few more items on the queue, and only like a minute or two left on in the timebox. So let’s try and get through this. Would you all have a reply on this topic +BT: we have a few more items on the queue, and only like a minute or two left on in the timebox. So let’s try and get through this. Would you all have a reply on this topic -USA: real quick, I understand Michael, the point that you’re trying to make, but I – I have a strong feeling, looking at the quality of notes that we have right now, that the proposal is still a lot for work than what we have currently, which is a very good base to – to make slight edits on. If speakers have to specify more details or if we go back to the bot or whatever, all of those require more work. Of course, without having to pay for the transcription. +USA: real quick, I understand Michael, the point that you’re trying to make, but I – I have a strong feeling, looking at the quality of notes that we have right now, that the proposal is still a lot for work than what we have currently, which is a very good base to – to make slight edits on. If speakers have to specify more details or if we go back to the bot or whatever, all of those require more work. Of course, without having to pay for the transcription. -DE: yeah. What we have seen is that the work for note-taking falls on a fall number of people who have trouble participating in meetings. And I think that would still be the case, even if the notes were high-level. But maybe note takers can comment. Can you extend the time box by 5 minutes? So we can work through the queue. +DE: yeah. What we have seen is that the work for note-taking falls on a fall number of people who have trouble participating in meetings. And I think that would still be the case, even if the notes were high-level. But maybe note takers can comment. Can you extend the time box by 5 minutes? So we can work through the queue. -BT: I will look at that. Let’s get Robin +BT: I will look at that. Let’s get Robin -RRD: So. Yeah. I’ve been taking notes in the past and this meeting yesterday. To first answer to JHD earlier, for the mark down formatting, I think the amount of work we need to adapt to our formatting is very minimal. So I am not at all concerned about this. Also, to answer to MLS . . . 
for us and what we are seeing right now, we haven’t gone into technical discussions yet. So there is a caveat on this. But we will see when we go to the technical discussions. Right now, the way the notes are being written is way more helpful than the bot ever was because there is less repetition. We really feel like this is transcribing what is being said in the room. So we are still able to go and edit, we are still doing this, but we are doing it at a frequency that is much lower than with the bot previously. I think there is huge plus on a different state +RRD: So. Yeah. I’ve been taking notes in the past and this meeting yesterday. To first answer to JHD earlier, for the mark down formatting, I think the amount of work we need to adapt to our formatting is very minimal. So I am not at all concerned about this. Also, to answer to MLS . . . for us and what we are seeing right now, we haven’t gone into technical discussions yet. So there is a caveat on this. But we will see when we go to the technical discussions. Right now, the way the notes are being written is way more helpful than the bot ever was because there is less repetition. We really feel like this is transcribing what is being said in the room. So we are still able to go and edit, we are still doing this, but we are doing it at a frequency that is much lower than with the bot previously. I think there is huge plus on a different state -DE: the note takers have a hard time getting people to listen to them, when they ask people to talk more slowly or – or to, you know, talk more to the microphone. So I think asking it to be on the committee to when a point is salient is kind of difficult. And it can be difficult to figure that out in an online conversation. +DE: the note takers have a hard time getting people to listen to them, when they ask people to talk more slowly or – or to, you know, talk more to the microphone. So I think asking it to be on the committee to when a point is salient is kind of difficult. And it can be difficult to figure that out in an online conversation. -BT: okay. There’s a few folks in the queue. Just a time note. We can continue this discussion to 10:20. But absolutely no later. So let’s try to actually get done in more than 5 minutes. +BT: okay. There’s a few folks in the queue. Just a time note. We can continue this discussion to 10:20. But absolutely no later. So let’s try to actually get done in more than 5 minutes. -DE: okay. Thanks. +DE: okay. Thanks. -HHM: yeah. Adding on to the same discussion here. We have taken notes a number of times, and we highlight the challenge . . . feeling what the stenographer convey, can we have a summary of what happened during this presentation? Like the notes in the meeting. Yes, it’s really useful. But to create a summary of what happened during this particular segment in the meeting. +HHM: yeah. Adding on to the same discussion here. We have taken notes a number of times, and we highlight the challenge . . . feeling what the stenographer convey, can we have a summary of what happened during this presentation? Like the notes in the meeting. Yes, it’s really useful. But to create a summary of what happened during this particular segment in the meeting. -DE: that is a welcome contribution. It can be done for past meetings. It’s pretty separate from the – the note-taking. I mean, one did not subsume the other. If somebody wants to do that work, it’s a great initiative. +DE: that is a welcome contribution. It can be done for past meetings. 
It’s pretty separate from the – the note-taking. I mean, one did not subsume the other. If somebody wants to do that work, it’s a great initiative. -HHM: okay. Then. Thank you. +HHM: okay. Then. Thank you. -BT: go ahead, Shane. +BT: go ahead, Shane. -SFC: I just wanted to remark that there is no delegate in the room that has complete context on every TC39 discussion, and relying only on note-takers to decide what the salient points are is just simply not scalable. Having a comprehensive or exhaustive baseline I think is essential. I want to reiterate that point. +SFC: I just wanted to remark that there is no delegate in the room that has complete context on every TC39 discussion, and relying only on note-takers to decide what the salient points are is just simply not scalable. Having a comprehensive or exhaustive baseline I think is essential. I want to reiterate that point. DE: any more on in queue -CP: I am with MLS on some of the things he mentioned. What is the point of having notes beyond the ECMA archives? I believe there is much more but it’s hard for me to think about how are we going to enable other people to rely on the notes and today, I know most people will never go back and look at the notes and look for points in the conversations. Maybe we have to do some work around it by creating tools that allow us to maybe link from proposals to the notes for every time that the proposal is discussed in plenary and so on . . . so maybe there’s more that we can get out of the notes, and having professional notes helps. So to date, it’s very, very minimal what is useful from the point of view of the notes. +CP: I am with MLS on some of the things he mentioned. What is the point of having notes beyond the ECMA archives? I believe there is much more but it’s hard for me to think about how are we going to enable other people to rely on the notes and today, I know most people will never go back and look at the notes and look for points in the conversations. Maybe we have to do some work around it by creating tools that allow us to maybe link from proposals to the notes for every time that the proposal is discussed in plenary and so on . . . so maybe there’s more that we can get out of the notes, and having professional notes helps. So to date, it’s very, very minimal what is useful from the point of view of the notes. -DE: my experience as a proposal author differs significantly from that. I need to look back at what was previously stated to follow up with the concerns raised. They are in the course of the discussion also. So, yeah. Improved tooling, separately separate from a captioner. This is an extra piece of work that we can’t contract for, and great to have volunteers for. +DE: my experience as a proposal author differs significantly from that. I need to look back at what was previously stated to follow up with the concerns raised. They are in the course of the discussion also. So, yeah. Improved tooling, separately separate from a captioner. This is an extra piece of work that we can’t contract for, and great to have volunteers for. -BT: first, MLS wanted to say that his point was the speaker identifies the points that they want recorded. But he doesn’t need to speak. YSV go ahead. +BT: first, MLS wanted to say that his point was the speaker identifies the points that they want recorded. But he doesn’t need to speak. YSV go ahead. -YSV: I wanted to remark, first, that I’ve been following the note takers and the note-taking is really high quality. 
I would like to remind folks that I had a project, it’s – I haven’t had time for it – but I had a project that was linking different parts of the discussion, how we were switching from which concerns were being addressed and how. And actually, doing a detailed tagging through all of the notes in order to generate for us a design rationale. This was part of the rationale project. That was a point in which I thought it was a really a shame we couldn’t have more high-quality notes because if we only take, for example, the salient notes, you don’t actually capture – if you go back in history, when we were only taking salient notes. Back when only taking the salient notes, it was very difficult to understand how a decision was come upon. Why we made a certain decision. Because only the decision was recorded and not how we got to it. We already have a problem where we are losing information about a decision that’s been made in the past. We end up with a design and we don’t know how we got to it. Like DE mentioned, I too go back into the notes to determine how we came to certain decisions. And that is much easier with a high-quality, high-resolution note-taking system. The only alternative, I could imagine, to what we do today with these high-resolution notes, is something like recording video of these. But that is a contentious alternative. +YSV: I wanted to remark, first, that I’ve been following the note takers and the note-taking is really high quality. I would like to remind folks that I had a project, it’s – I haven’t had time for it – but I had a project that was linking different parts of the discussion, how we were switching from which concerns were being addressed and how. And actually, doing a detailed tagging through all of the notes in order to generate for us a design rationale. This was part of the rationale project. That was a point in which I thought it was a really a shame we couldn’t have more high-quality notes because if we only take, for example, the salient notes, you don’t actually capture – if you go back in history, when we were only taking salient notes. Back when only taking the salient notes, it was very difficult to understand how a decision was come upon. Why we made a certain decision. Because only the decision was recorded and not how we got to it. We already have a problem where we are losing information about a decision that’s been made in the past. We end up with a design and we don’t know how we got to it. Like DE mentioned, I too go back into the notes to determine how we came to certain decisions. And that is much easier with a high-quality, high-resolution note-taking system. The only alternative, I could imagine, to what we do today with these high-resolution notes, is something like recording video of these. But that is a contentious alternative. -DE: yeah. Thank you very much. +DE: yeah. Thank you very much. -BT: we have to leave it there. But thankfully the queue is empty +BT: we have to leave it there. But thankfully the queue is empty -DE: I take it there’s no objection to continuing with the transcription until noon? Okay. Thank you. +DE: I take it there’s no objection to continuing with the transcription until noon? Okay. Thank you. -BT: thank you, DE. +BT: thank you, DE. 
## Avoid triggering throw in corner case in async generators or Avoid mostly-redundant await in async yield* @@ -227,85 +226,85 @@ Presenter: Kevin Gibbons (KG) - Alternative PR: [Avoid mostly-redundant await in async yield*](https://github.com/tc39/ecma262/pull/2819) -KG: So this – the title of this issue is misleading because when I originally noticed the issue, I proposed a fix and realized that the fix was for an issue which shouldn’t have happened in the first place. So I have a slightly larger proposal to make. But it needs a lot of background. So we will spend a while doing background on this. So this is the main pull request that I want to get feedback for. There’s an alternative one linked there that makes the smaller more technical change. But it’s 2819 that I wanted to focus on today. So I'm going to do a bit of background. Some of this will be familiar to many of you. But it’s important to understand, and also, some of this will be important later for the iterator helpers proposal. +KG: So this – the title of this issue is misleading because when I originally noticed the issue, I proposed a fix and realized that the fix was for an issue which shouldn’t have happened in the first place. So I have a slightly larger proposal to make. But it needs a lot of background. So we will spend a while doing background on this. So this is the main pull request that I want to get feedback for. There’s an alternative one linked there that makes the smaller more technical change. But it’s 2819 that I wanted to focus on today. So I'm going to do a bit of background. Some of this will be familiar to many of you. But it’s important to understand, and also, some of this will be important later for the iterator helpers proposal. -KG: So . . . there are a couple of protocols in the spec that are not necessarily completely defined, or the definitions are not necessarily consistent with how they are used in the spec. I am going to talk about a thing I will call “the iterator protocol.” This method `.next` and optionally `.return`, where `.next` returns an object that has a `done` boolean and a `value`. And this is like the main way you interact with an iterator. You call`next` repeatedly and check the `done`Boolean. There is a method to call on the iterator to perform cleanup. So the value from return is required to be an object, but not otherwise particularly inspected. But I want to focus on the format of the return value from next because it’s important later. +KG: So . . . there are a couple of protocols in the spec that are not necessarily completely defined, or the definitions are not necessarily consistent with how they are used in the spec. I am going to talk about a thing I will call “the iterator protocol.” This method `.next` and optionally `.return`, where `.next` returns an object that has a `done` boolean and a `value`. And this is like the main way you interact with an iterator. You call`next` repeatedly and check the `done`Boolean. There is a method to call on the iterator to perform cleanup. So the value from return is required to be an object, but not otherwise particularly inspected. But I want to focus on the format of the return value from next because it’s important later. -KG: So the iterator protocol is used primarily by`for-of`loops. It will get Symbol.iterator from the iterable and invoke .next on the iterator. And if you exit the loop early, by calling break, it calls iterator.return. The value is the contents of the value slot from the return value of the next method. 
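Concretely, the protocol being described looks like this in user code (an illustrative sketch, not code from the slides):

```js
// A hand-written iterable: `next` returns { done, value } result objects, and
// `return` is the optional cleanup hook that for-of calls on early exit.
const countdown = {
  [Symbol.iterator]() {
    let n = 3;
    return {
      next() {
        return n > 0 ? { done: false, value: n-- } : { done: true, value: undefined };
      },
      return(value) {
        console.log('cleanup');       // runs when the loop is exited early
        return { done: true, value }; // required to be an object
      },
    };
  },
};

for (const n of countdown) {
  console.log(n); // 3
  break;          // breaking out of the loop calls iterator.return
}
```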
+KG: So the iterator protocol is used primarily by`for-of`loops. It will get Symbol.iterator from the iterable and invoke .next on the iterator. And if you exit the loop early, by calling break, it calls iterator.return. The value is the contents of the value slot from the return value of the next method. -KG: Okay. The generator protocol extends that slightly. By adding an additional method, throw, as well as by adding arguments to next and return. These arguments are not strictly given an interpretation in the generator protocol, but they are a way to communicate to the generator instead of consuming values from it. The point of throw is, according to the spec, to notify the generator that the caller has detected an error condition in the generator itself. But nothing calls that. That's not quite true, but we'll be coming back to that. Almost nothing in the spec calls throw. So the interpretation of throw is something you have to work out based on how it works in generators itself. There’s an example of a generator. If you are paused at a yield, the yield itself you have to call next to get to the first yield. And then the first call to next will give you done false and the value that was yielded. And then if you call throw after that, that will – while the generator is paused, it will trigger the catch block. If you call dot return it will not trigger the catch block, but it will trigger the finally block. So this is when it means to say that throw is for error conditions in the generator itself. +KG: Okay. The generator protocol extends that slightly. By adding an additional method, throw, as well as by adding arguments to next and return. These arguments are not strictly given an interpretation in the generator protocol, but they are a way to communicate to the generator instead of consuming values from it. The point of throw is, according to the spec, to notify the generator that the caller has detected an error condition in the generator itself. But nothing calls that. That's not quite true, but we'll be coming back to that. Almost nothing in the spec calls throw. So the interpretation of throw is something you have to work out based on how it works in generators itself. There’s an example of a generator. If you are paused at a yield, the yield itself you have to call next to get to the first yield. And then the first call to next will give you done false and the value that was yielded. And then if you call throw after that, that will – while the generator is paused, it will trigger the catch block. If you call dot return it will not trigger the catch block, but it will trigger the finally block. So this is when it means to say that throw is for error conditions in the generator itself. -KG: One other interesting part of syntactic generators, part of the syntax of generators, is that there’s the yield* operation. Which forwards all three of the things, including the arguments, to another generator. And there’s logic for if the other generator hasn't implemented part of the protocol. The important part it’s forwarding the entire protocol. Then we can make stuff async. This is pretty much the same as the iterator protocol, but you get promises for objects instead of just raw objects. But again, the done boolean and the value. I want to emphasize that the for await loop does one await. And that's the result of`iterator.next`. It’s awaiting this first promise on the screen here. It is not awaiting the values slot from the object inside of the promise. 
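A small sketch of that point (illustrative, not from the slides): `for await` awaits the result object coming out of `next`, but it passes the `value` slot through untouched.

```js
// Manual async iterator whose value slot is itself a promise.
const weird = {
  [Symbol.asyncIterator]() {
    let sent = false;
    return {
      next() {
        if (sent) return Promise.resolve({ done: true, value: undefined });
        sent = true;
        return Promise.resolve({ done: false, value: Promise.resolve(42) });
      },
    };
  },
};

// In a module or other async context:
for await (const v of weird) {
  console.log(v instanceof Promise); // true - the value slot is not unwrapped
}
```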
And in fact, it is possible to write an async iterator that has a promise in that value slot and the for await loop will observe, in the body of the loop, the promise. It will not unwrap that promise. And then again, we have the async extension of the generator protocol which is exactly like the regular generator protocol, but they return promises rather than just the objects. +KG: One other interesting part of syntactic generators, part of the syntax of generators, is that there’s the yield* operation. Which forwards all three of the things, including the arguments, to another generator. And there’s logic for if the other generator hasn't implemented part of the protocol. The important part it’s forwarding the entire protocol. Then we can make stuff async. This is pretty much the same as the iterator protocol, but you get promises for objects instead of just raw objects. But again, the done boolean and the value. I want to emphasize that the for await loop does one await. And that's the result of`iterator.next`. It’s awaiting this first promise on the screen here. It is not awaiting the values slot from the object inside of the promise. And in fact, it is possible to write an async iterator that has a promise in that value slot and the for await loop will observe, in the body of the loop, the promise. It will not unwrap that promise. And then again, we have the async extension of the generator protocol which is exactly like the regular generator protocol, but they return promises rather than just the objects. -KG: There’s an asterisk on the "anything" there. The reason for that asterisk is that in a syntactic generator, when you do a yield, it awaits the value yielded before it wraps it up in this {done,value} pair. So, for example, if you yield a rejected promise, instead of yielding, it will trigger the catch. It will not pause the generator. It pauses the generator in sense of performing an await. But not the value from next. But it will actually immediately trigger the catch just as if you had replaced that whole yield with a throw. I guess with a throw of `await 1`. And of course, if you yield a non rejected promise it will unwrap that promise just the same. So this is the syntactic generator. Similarly, the async from sync wrapper, where you are doing a for await loop for something that is a sync iterator of promises, it will unwrap those. so in practice, while it is technically possible to put an arbitrary promise in the value slot, the syntactic generator and the automatic wrapper for sync iterators will not ever have a promise there because it will await it first. And it’s basically for this reason that`for await` doesn’t unwrap promises in the value slot. It’s because the assumption is that a well-behaved generator is never going to do that. Let’s see. +KG: There’s an asterisk on the "anything" there. The reason for that asterisk is that in a syntactic generator, when you do a yield, it awaits the value yielded before it wraps it up in this {done,value} pair. So, for example, if you yield a rejected promise, instead of yielding, it will trigger the catch. It will not pause the generator. It pauses the generator in sense of performing an await. But not the value from next. But it will actually immediately trigger the catch just as if you had replaced that whole yield with a throw. I guess with a throw of `await 1`. And of course, if you yield a non rejected promise it will unwrap that promise just the same. So this is the syntactic generator. 
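In code, the behaviour of the syntactic async generator being described is roughly this (a sketch, not slide material): the operand of `yield` is awaited before it is wrapped into a result object, so a rejected promise turns into a throw at the `yield` itself.

```js
async function* g() {
  try {
    yield Promise.reject(new Error('boom')); // awaited first, so this throws here
  } catch (e) {
    console.log('caught inside the generator:', e.message);
  }
  yield Promise.resolve(1); // also awaited: consumers see 1, never a promise
}

// In a module or other async context:
for await (const v of g()) {
  console.log(v); // logs 1
}
```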
Similarly, the async from sync wrapper, where you are doing a for await loop for something that is a sync iterator of promises, it will unwrap those. so in practice, while it is technically possible to put an arbitrary promise in the value slot, the syntactic generator and the automatic wrapper for sync iterators will not ever have a promise there because it will await it first. And it’s basically for this reason that`for await` doesn’t unwrap promises in the value slot. It’s because the assumption is that a well-behaved generator is never going to do that. Let’s see. -KG: So that’s all of the background. The weird part is that yield star in async generators does do an await. If you `yield*` from a weird iterator like this, where you have very carefully manually arranged to have a promise value in the slot - and it’s only possible to do it like this - then the yield star will await that value. So it’s not just transparently forwarding the protocol to an inner generator; it’s putting this extra await here. And I claim this is weird. The only way this await is relevant is if you have a manually implemented iterator as on the previous slide. It’s not something that comes up with a syntactic generator. And it's different from how `for await of` works. `For await of` will observe the promise. I would like to propose we get rid of this await: this is a normative change, but likely to be web-compatible. You can run into this in the sense that your code will run slower because it’s doing an await it doesn’t need to do. But to see different values rather than just differences in timing, you have to have have a manual async iterator as well as do a `yield*` in an async generator of that async iterator. I don’t think this is likely to come up very much. I would like to propose that we get rid of it. +KG: So that’s all of the background. The weird part is that yield star in async generators does do an await. If you `yield*` from a weird iterator like this, where you have very carefully manually arranged to have a promise value in the slot - and it’s only possible to do it like this - then the yield star will await that value. So it’s not just transparently forwarding the protocol to an inner generator; it’s putting this extra await here. And I claim this is weird. The only way this await is relevant is if you have a manually implemented iterator as on the previous slide. It’s not something that comes up with a syntactic generator. And it's different from how `for await of` works. `For await of` will observe the promise. I would like to propose we get rid of this await: this is a normative change, but likely to be web-compatible. You can run into this in the sense that your code will run slower because it’s doing an await it doesn’t need to do. But to see different values rather than just differences in timing, you have to have have a manual async iterator as well as do a `yield*` in an async generator of that async iterator. I don’t think this is likely to come up very much. I would like to propose that we get rid of it. -KG: Also, I think this might have just been a mistake. We went back and forth on how yield should work with promises in async generators several times, and at the point we did decide that it should be yield that does the await rather than having `for await` doing two awaits, the decision was only really talking about yield itself, not yield*. There was even a line in the presentation that said `yield*` doesn’t peek into the promises returned from `next`. 
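The observable difference is confined to hand-written async iterators like the one sketched below (a reconstruction, not KG's slide code); the comments describe the behaviour as characterized in this discussion.

```js
// Hand-written async iterator that smuggles a promise into the value slot.
const inner = {
  [Symbol.asyncIterator]() {
    let sent = false;
    return {
      next() {
        if (sent) return Promise.resolve({ done: true, value: undefined });
        sent = true;
        return Promise.resolve({ done: false, value: Promise.resolve('wrapped') });
      },
    };
  },
};

async function* forward() {
  yield* inner; // the extra await under discussion happens here
}

// In a module or other async context:
for await (const v of forward()) {
  // Current spec: 'wrapped', because yield* awaited the inner value.
  // With PR 2819: a promise, matching what `for await (const v of inner)` sees.
  console.log(v);
}
```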
But because of the way the spec text was written, the change that made yield do the await also happened to make `yield*` do an additional await. So I would like to change this. Again, not guaranteed to be web compatible. There are different technical issues we can discuss if we don't like this, but I am hoping for consensus for this change. Do we have anything in the queue? 
+KG: Also, I think this might have just been a mistake. We went back and forth on how yield should work with promises in async generators several times, and at the point we did decide that it should be yield that does the await rather than having `for await` doing two awaits, the decision was only really talking about yield itself, not yield*. There was even a line in the presentation that said `yield*` doesn’t peek into the promises returned from `next`. But because of the way the spec text was written, the change that made yield do the await also happened to make `yield*` do an additional await. So I would like to change this. Again, not guaranteed to be web compatible. There are different technical issues we can discuss if we don't like this, but I am hoping for consensus for this change. Do we have anything in the queue?

-BT: yeah. Kris, strongly supports this bug fix, but doesn’t want to talk. And then Mark has a question. 
+BT: yeah. Kris, strongly supports this bug fix, but doesn’t want to talk. And then Mark has a question.

-MM: So there was a lot of discussion around all of these issues originally about where the double await should be and all that. And I was wondering besides looking at the old presentations, did you take a look at the – or were you around for when we were – when we had those arguments and . . . I am just wondering if you’re – if this is informed by what the arguments were, as well as what the conclusions were? 
+MM: So there was a lot of discussion around all of these issues originally about where the double await should be and all that. And I was wondering besides looking at the old presentations, did you take a look at the – or were you around for when we were – when we had those arguments and . . . I am just wondering if you’re – if this is informed by what the arguments were, as well as what the conclusions were?

-KG: I was not around, or it was my first meeting, so I don’t particularly remember it. Yes, I went through the notes and the discussion in the pull requests and the various alternatives. There wasn’t much discussion of `yield*` itself. It was on the for-await loop and the yield. So as far as I am aware, this is just a case that we didn’t discuss at all. 
+KG: I was not around, or it was my first meeting, so I don’t particularly remember it. Yes, I went through the notes and the discussion in the pull requests and the various alternatives. There wasn’t much discussion of `yield*` itself. It was on the for-await loop and the yield. So as far as I am aware, this is just a case that we didn’t discuss at all.

-MM: I did participate heavily in those, and everything is consistent with my memory. And I find it plausible. So I support this change. I do want to make sure that I didn’t misunderstand something. 
You said throw is almost never called into spec. If a for loop is exited with a thrown error, that causes – that does cause throw on the iterator being iterated, correct? -KG: no. That calls return. It closes the iterator. But it is not considered an error condition in the iterator itself. +KG: no. That calls return. It closes the iterator. But it is not considered an error condition in the iterator itself. -MM: oh. Interesting. So you said it almost never calls. Is the only call the yield star? +MM: oh. Interesting. So you said it almost never calls. Is the only call the yield star? -KG: yes. Well, yes. The only call is in `yield*`. But, of course, that’s forwarding the protocol. So if you do – like a user had to write `.throw` manually, except for the issue which prompted this, which I can direct you to the GitHub issue to show you the code which does a yield star of a rejected promise in that case. +KG: yes. Well, yes. The only call is in `yield*`. But, of course, that’s forwarding the protocol. So if you do – like a user had to write `.throw` manually, except for the issue which prompted this, which I can direct you to the GitHub issue to show you the code which does a yield star of a rejected promise in that case. -MM: got it. Okay. I support this. I am done. +MM: got it. Okay. I support this. I am done. -BT: thank you, mark. YSV is up next +BT: thank you, mark. YSV is up next -YSV: yeah. So from our perspective, we don’t see an issue. And this is a nice bug fix. But we agree that there is a potential for web compatibility issues. This has been shipping for quite a while. That is only impacting this one case, where you have manually constructed an async generator and then yield star. Does yield star potentially have impacts where this extra await is observable for other code? Is there any way for us to investigate that? +YSV: yeah. So from our perspective, we don’t see an issue. And this is a nice bug fix. But we agree that there is a potential for web compatibility issues. This has been shipping for quite a while. That is only impacting this one case, where you have manually constructed an async generator and then yield star. Does yield star potentially have impacts where this extra await is observable for other code? Is there any way for us to investigate that? -KG: so this will affect code that is not doing this weird manually async iterator in the sense that an additional await implies an additional tick. So this will change the ordering of code that is doing `yield*` in an async generator. As far as I can tell, the only effects are the number of ticks and this case where you have a manually implemented iterator that puts a promise in the value slot and then you do a yield star in an async generator of that manual async iterator, that will observe different values. As far as I can tell, those are the only two cases. So timing and this weird manual async iterator case. Not aware of any other differences. +KG: so this will affect code that is not doing this weird manually async iterator in the sense that an additional await implies an additional tick. So this will change the ordering of code that is doing `yield*` in an async generator. As far as I can tell, the only effects are the number of ticks and this case where you have a manually implemented iterator that puts a promise in the value slot and then you do a yield star in an async generator of that manual async iterator, that will observe different values. As far as I can tell, those are the only two cases. 
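To spell out the point from the exchange with MM above (an illustrative check, not slide code): a `for-of` loop that exits because of a thrown error closes the iterator with `.return`; it never calls `.throw`.

```js
const iter = {
  [Symbol.iterator]() { return this; },
  next() { return { done: false, value: 1 }; },
  return() { console.log('.return called'); return { done: true, value: undefined }; },
  throw() { console.log('.throw called'); return { done: true, value: undefined }; },
};

try {
  for (const x of iter) {
    throw new Error('error in the loop body'); // abrupt exit from the loop
  }
} catch {}
// Logs ".return called" - the iterator is closed, but .throw is never invoked.
```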
So timing and this weird manual async iterator case. Not aware of any other differences. -YSV: okay. So the proposal here would be we take the risk, have an implementor test this, see if we get bugs? +YSV: okay. So the proposal here would be we take the risk, have an implementor test this, see if we get bugs? -KG: yeah. +KG: yeah. -YSV: okay. I have some web compat data. I will post it to the issue. +YSV: okay. I have some web compat data. I will post it to the issue. -KG: okay. +KG: okay. -BT: all right. KKL is next. +BT: all right. KKL is next. -KKL: KKL. I support this change. It looks like a bug fix to me. It should be possible to do ordered delivery of promises for which the promises that are delivered have an orthogonal order which is consistent with the way promise queue would work. It’s possible to deliver a promise that resolves much later. +KKL: KKL. I support this change. It looks like a bug fix to me. It should be possible to do ordered delivery of promises for which the promises that are delivered have an orthogonal order which is consistent with the way promise queue would work. It’s possible to deliver a promise that resolves much later. -BT: all right. Thank you, KKL. DE is up next. +BT: all right. Thank you, KKL. DE is up next. -DE: as someone who was involved in this particular discussion, I agree that yield* was kind of an after-thought. And the attempt was to plumb through in a consistent way and this fix makes a lot of sense. +DE: as someone who was involved in this particular discussion, I agree that yield* was kind of an after-thought. And the attempt was to plumb through in a consistent way and this fix makes a lot of sense. -BT: all right. Thank you, Dan. Next up is WH. +BT: all right. Thank you, Dan. Next up is WH. -WH: You have two issues here. What is the throw issue? Can you give an example of how it’s triggered? +WH: You have two issues here. What is the throw issue? Can you give an example of how it’s triggered? -KG: yes. I will pull up the code because it is very hard to talk about, without having code. `<>` Okay. Here is a horrible program. The thing that is going on here is that you have a manual implementation of an async iterator. The manual implementation returns a promise wrapper that contains done: false and `value` is a rejected promise. And then you are doing yield star on this manually implemented async iterator. The thing that happens, and I am almost certain this is a bug . . . is that when the – when you do `yield*` of the inner iterator, the promise rejection is awaited during the yield star, and because of how the spec is written, let’s see . . . so in the evaluation semantics for yield *, when we got the normal success the call to next, we call async generator yield. And inside of async generator yield, we are performing this await, and exceptions are propagated. We are awaiting a rejected promise and then propagating exceptions of the promise was rejected. So a throw completion. With this question mark. We propagate the throw completion. And then in the yield* semantics, we set received to the results of the async generator yield. And then the next time through, we say that `received.type` is ‘throw’. The name of this variable is probably indicative of what it is supposed to be. The idea is that ‘received’ is what the consumer of the generator is trying to provide to the generator. With a regular generator, the only way to get this is if someone called dot throw when the generator was paused. 
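A reconstruction of the kind of program being discussed (not the code from the GitHub issue) is below; the comments describe the behaviour as KG characterizes it, with the rejected promise parked behind a `.catch` only to keep the sketch from producing an unhandled-rejection warning.

```js
// A rejected promise that we smuggle into the iterator's value slot.
const boom = Promise.reject(new Error('boom'));
boom.catch(() => {}); // silence the unhandled-rejection warning in this sketch

const inner = {
  [Symbol.asyncIterator]() { return this; },
  next() { return Promise.resolve({ done: false, value: boom }); },
  throw(err) {
    console.log('inner.throw was called with:', err.message);
    return Promise.resolve({ done: true, value: undefined });
  },
};

async function* outer() {
  yield* inner;
}

outer().next();
// Under the current spec text, yield* awaits the rejected value, the rejection
// comes back as a "received" throw completion, and yield* forwards it to
// inner.throw even though no consumer ever called .throw. With the proposed
// change, the promise is forwarded un-awaited and inner.throw is not called.
```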
Similarly, the way you get a return completion is that someone called dot return while the generator was paused. But because the exception happens inside of async generator yield, rather than happening outside of async generator yield, where propagated, like called to generator is thrown, the spec language thinks basically that the throw completion came from the consumer of the generator, rather than coming from the rejected promise that the generator returned. <<15.5.5 AO>> As far as I can tell, this is the only place in the entire specification that dot throw will be called without a user having first called dot throw and that getting propagated somehow. And I am almost certain it’s a bug due to async generator yield having await inside of it. Does that make sense? +KG: yes. I will pull up the code because it is very hard to talk about, without having code. `<>` Okay. Here is a horrible program. The thing that is going on here is that you have a manual implementation of an async iterator. The manual implementation returns a promise wrapper that contains done: false and `value` is a rejected promise. And then you are doing yield star on this manually implemented async iterator. The thing that happens, and I am almost certain this is a bug . . . is that when the – when you do `yield*` of the inner iterator, the promise rejection is awaited during the yield star, and because of how the spec is written, let’s see . . . so in the evaluation semantics for yield *, when we got the normal success the call to next, we call async generator yield. And inside of async generator yield, we are performing this await, and exceptions are propagated. We are awaiting a rejected promise and then propagating exceptions of the promise was rejected. So a throw completion. With this question mark. We propagate the throw completion. And then in the yield* semantics, we set received to the results of the async generator yield. And then the next time through, we say that `received.type` is ‘throw’. The name of this variable is probably indicative of what it is supposed to be. The idea is that ‘received’ is what the consumer of the generator is trying to provide to the generator. With a regular generator, the only way to get this is if someone called dot throw when the generator was paused. Similarly, the way you get a return completion is that someone called dot return while the generator was paused. But because the exception happens inside of async generator yield, rather than happening outside of async generator yield, where propagated, like called to generator is thrown, the spec language thinks basically that the throw completion came from the consumer of the generator, rather than coming from the rejected promise that the generator returned. <<15.5.5 AO>> As far as I can tell, this is the only place in the entire specification that dot throw will be called without a user having first called dot throw and that getting propagated somehow. And I am almost certain it’s a bug due to async generator yield having await inside of it. Does that make sense? -WH: Kind of, yeah. +WH: Kind of, yeah. -KG: okay. And, of course, the way that getting rid of the await fixes it, is that getting rid of the await in yield * means this line will not be here. <<27.6.8 AO>> It will go back to being the case, the only way you get a throw completion out of async generator yield is if someone called throw explicitly instead of having another way of getting a throw completion. +KG: okay. 
And, of course, the way that getting rid of the await fixes it, is that getting rid of the await in yield * means this line will not be here. <<27.6.8 AO>> It will go back to being the case, the only way you get a throw completion out of async generator yield is if someone called throw explicitly instead of having another way of getting a throw completion.

-WH: Okay. Thank you.
+WH: Okay. Thank you.

-KG: I don’t see anything else on the queue. I would like to ask for consensus for this normative change. I will write test262 tests and of course we would need an engine to volunteer to ship at some point and hopefully confirm it is web compatible. We can’t do that without getting consensus. So I would like to ask for consensus for this change.
+KG: I don’t see anything else on the queue. I would like to ask for consensus for this normative change. I will write test262 tests and of course we would need an engine to volunteer to ship at some point and hopefully confirm it is web compatible. We can’t do that without getting consensus. So I would like to ask for consensus for this change.

BT: I am not seeing any – Rick go ahead

-RW: My only question was, you mentioned writing test262 tests . . . awesome. Have you identified existing test262 tests that will use – that will be changed by this. If you haven’t, will you do that work and we can rely on you to make sure that this gets sorted.
+RW: My only question was, you mentioned writing test262 tests . . . awesome. Have you identified existing test262 tests that will use – that will be changed by this. If you haven’t, will you do that work and we can rely on you to make sure that this gets sorted.

-KG: My current plan is to use engine262 and implement this change and run test 262 and see what fails. If it doesn’t work, I would have a harder time, but I'll do my best to identify any tests that will be invalidated.
+KG: My current plan is to use engine262 and implement this change and run test 262 and see what fails. If it doesn’t work, I would have a harder time, but I'll do my best to identify any tests that will be invalidated.

-RW: great. You literally jumped to my next point, which was, to recommend using GCL’s engine262 to find them. Perfect. Awesome. It sounds like you have a great plan. I love it. I am all in.
+RW: great. You literally jumped to my next point, which was, to recommend using GCL’s engine262 to find them. Perfect. Awesome. It sounds like you have a great plan. I love it. I am all in.

-BT: all right. It sounds like consensus to me.
+BT: all right. It sounds like consensus to me.

-KG: Thanks very much.
+KG: Thanks very much.

### Conclusion/Decision

@@ -318,41 +317,41 @@ Presenter: Richard Gibson (RGN)

- [pr](https://github.com/tc39/ecma262/pull/2791)

-RGN: okay. I am Richard Gibson, abbreviation RGN. And I am here to talk today about fixing an irregularity with regular expressions that was discovered or probably rediscovered as part of introducing the V-flag. So there are a number of regular expression methods that need to check the flags of their receiver or argument. And for the most part, the way they do it is by using the flags getter. Look at String.prototype.matchAll and String.prototype.replaceAll, both Get "flags" off the regular expression. And it's the same on RegExp.prototype for Symbol.matchAll and Symbol.split. But there are two methods that behave differently.
And they are being changed by the V flag, prompting a comment from JHD in the pull request to introduce it, why are we doing a conditional Get of "unicode" after getting "unicodeSets"? We should instead be reading it unconditionally. More predictable behaviour. And the reason for that was that the existing method already did conditional reading. If you look in Symbol.match, we see an example of it. Right here. <<22.2.5.8>> Where it reads the "global" property off of the regular expression, if that’s false, does down one path, otherwise reads the "unicode" property only in the other one. The Symbol.replace operation is similar. Reads the "global" flag, and then only if it’s `true` reads "unicode". This is surprising. It surprised JHD and me, and the champions of the V flag proposal, and it’s a little awkward. The built-in "flags" getter, in fact, does do observable Gets of the individual properties, always all of them and always in the same deterministic order, and concatenates them to a string, which is what we see the other methods such as split interacting with (after forcing ToString just in case an override has done something weird), checking it for specific characters representing the flags. So the needs-consensus pull request is proposing that we update the two divergent methods to instead behave like the others. Rather than reading individual properties directly and conditionally, a pattern which gets worse after we have the V flag and after we introduce even more relevant regular expression flags, we just basically cut it off now if we can do so with web compatibility. It makes things a little bit simpler and certainly more in alignment with the other four methods. And that’s where this stands. It is a normative change. I suspect web compatibility, but I don’t have any firm evidence of that yet and the best way to find out is to attempt this if we get consensus to try. That’s the end of the presentation, and I open it up to the queue. +RGN: okay. I am Richard Gibson, abbreviation RGN. And I am here to talk today about fixing an irregularity with regular expressions that was discovered or probably rediscovered as part of introducing the V-flag. So there are a number of regular expression methods that need to check the flags of their receiver or argument. And for the most part, the way they do it is by using the flags getter. Look at String.prototype.matchAll and String.prototype.replaceAll, both Get "flags" off the regular expression. And it's the same on RegExp.prototype for Symbol.matchAll and Symbol.split. But there are two methods that behave differently. And they are being changed by the V flag, prompting a comment from JHD in the pull request to introduce it, why are we doing a conditional Get of "unicode" after getting "unicodeSets"? We should instead be reading it unconditionally. More predictable behaviour. And the reason for that was that the existing method already did conditional reading. If you look in Symbol.match, we see an example of it. Right here. <<22.2.5.8>> Where it reads the "global" property off of the regular expression, if that’s false, does down one path, otherwise reads the "unicode" property only in the other one. The Symbol.replace operation is similar. Reads the "global" flag, and then only if it’s `true` reads "unicode". This is surprising. It surprised JHD and me, and the champions of the V flag proposal, and it’s a little awkward. 
The built-in "flags" getter, in fact, does do observable Gets of the individual properties, always all of them and always in the same deterministic order, and concatenates them to a string, which is what we see the other methods such as split interacting with (after forcing ToString just in case an override has done something weird), checking it for specific characters representing the flags. So the needs-consensus pull request is proposing that we update the two divergent methods to instead behave like the others. Rather than reading individual properties directly and conditionally, a pattern which gets worse after we have the V flag and after we introduce even more relevant regular expression flags, we just basically cut it off now if we can do so with web compatibility. It makes things a little bit simpler and certainly more in alignment with the other four methods. And that’s where this stands. It is a normative change. I suspect web compatibility, but I don’t have any firm evidence of that yet and the best way to find out is to attempt this if we get consensus to try. That’s the end of the presentation, and I open it up to the queue. -BT: WH is up first. +BT: WH is up first. -WH: If I were changing this, I would rather move in the direction of reading individual flags rather than what this is doing, which is moving from reading individual flags to calling a method which reads every flag, accumulates them into a string, and then decodes the string looking for single characters. It just seems like it’s doing more unnecessary work. So — what is the reason for the change? +WH: If I were changing this, I would rather move in the direction of reading individual flags rather than what this is doing, which is moving from reading individual flags to calling a method which reads every flag, accumulates them into a string, and then decodes the string looking for single characters. It just seems like it’s doing more unnecessary work. So — what is the reason for the change? -RGN: largely to avoid unnecessary litigation with every new regular expression proposal about which individual flag properties should be read in which order and under what circumstances. +RGN: largely to avoid unnecessary litigation with every new regular expression proposal about which individual flag properties should be read in which order and under what circumstances. -JHD: I put a response on the queues. It’s unnecessary work when somebody writes a subclass because otherwise will detect that all of these getters are unmodified and they will pull the value out of the slot. Since essentially zero people on the planet write – the overwrite design of the getters is something that while the committee decided it wasn’t worth the turn to change the feeling of the room when it’s been discussed, it’s consistently been that we wish we hadn’t included all of it. It’s simpler from a perspective of this specific algorithm like `match`, in this case that we are looking at, it’s simpler in terms of the observable calls are deterministic and also avoidable for the common case. But that seems like an improvement +JHD: I put a response on the queues. It’s unnecessary work when somebody writes a subclass because otherwise will detect that all of these getters are unmodified and they will pull the value out of the slot. 
Since essentially zero people on the planet write – the overwrite design of the getters is something that while the committee decided it wasn’t worth the churn to change, the feeling of the room when it’s been discussed, it’s consistently been that we wish we hadn’t included all of it. It’s simpler from a perspective of this specific algorithm like `match`, in this case that we are looking at, it’s simpler in terms of the observable calls are deterministic and also avoidable for the common case. But that seems like an improvement

-WH: If approximately nobody encounters this, what’s the point of changing it?
+WH: If approximately nobody encounters this, what’s the point of changing it?

-RGN: to reiterate, to avoid litigation with every regular expression proposal. Only two methods of the six read individual flag properties directly, and it would be a better use of our time if they didn’t. Reading "flags" ends up in the same place, except for pathological cases that intentionally diverge from the standard built-in behaviour.
+RGN: to reiterate, to avoid litigation with every regular expression proposal. Only two methods of the six read individual flag properties directly, and it would be a better use of our time if they didn’t. Reading "flags" ends up in the same place, except for pathological cases that intentionally diverge from the standard built-in behaviour.

BT: YSV has a reply on this topic

-YSV: yeah, to chime in. This might be something that is optimizable at the engine level. But the benefit here is at the spec level where we have consistency across all of the methods and then for, for example, in an implementation we could comment, keep an existing version if it is a fully one-to-one translation. Except for the branch case, which might be a – subclass built-in, for example
+YSV: yeah, to chime in. This might be something that is optimizable at the engine level. But the benefit here is at the spec level where we have consistency across all of the methods and then for, for example, in an implementation we could comment, keep an existing version if it is a fully one-to-one translation. Except for the branch case, which might be a – subclass built-in, for example

-BT: thank you, YSV. Dan has a comment next.
+BT: thank you, YSV. Dan has a comment next.

-DLM: yes. I just wanted to express positive feedback from the SpiderMonkey team
+DLM: yes. I just wanted to express positive feedback from the SpiderMonkey team

RGN: thanks, I’m happy to hear that early on from an implementation

-BT: with that, the queue is empty. Would you like a call for consensus on this change
+BT: with that, the queue is empty. Would you like a call for consensus on this change

RGN: absolutely.

-BT: Anyone object to making this change?
+BT: Anyone object to making this change?

-BT: all right. I am not seeing anyone on the queue. So it sounds like consensus to me.
+BT: all right. I am not seeing anyone on the queue. So it sounds like consensus to me.

-RGN: okay. Next up is going to be, I think, preparing test 262 changes to capture them. The pull request against Ecma 262 is already in good shape. So thank you
+RGN: okay. Next up is going to be, I think, preparing test 262 changes to capture them. The pull request against Ecma 262 is already in good shape. So thank you

-BT: all right. Excellent. And I believe you have the next agenda item as well.
+BT: all right. Excellent. And I believe you have the next agenda item as well.
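For reference, the observable difference under discussion can be seen by instrumenting flag getters on a single RegExp instance; this is a rough sketch, and the exact log depends on which edition of the spec an engine implements:

```js
// Shadow a few flag getters on one instance so each observable Get is logged.
const re = /a/;           // note: not global, so the conditional read matters
for (const key of ['global', 'unicode', 'flags']) {
  const desc = Object.getOwnPropertyDescriptor(RegExp.prototype, key);
  Object.defineProperty(re, key, {
    get() {
      console.log('Get', key);
      return desc.get.call(this);
    },
  });
}

// RegExp.prototype[Symbol.replace] is one of the two divergent methods.
// Pre-change semantics: it does Get(rx, "global") directly, and reads "unicode"
// only when global is true, so this call logs just `Get global`.
// Proposed semantics: a single Get(rx, "flags") instead (whose built-in getter
// then reads every individual flag in a fixed order), matching what
// Symbol.split, matchAll, and replaceAll already do.
'aaa'.replace(re, 'b');
```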
### Conclusion/Decision @@ -364,95 +363,95 @@ Presenter: Richard Gibson (RGN) - [pr](https://github.com/tc39/ecma262/pull/2695) -RGN: This one is regarding `function.prototype.toString`. The highlight of the issue in question . . . behaviour of XS, not matching the other implementations with respect to the name portion of the output from the built-in `function.prototype.toString`. Other implementations always output either an identifier or nothing at all, or in the special cases of a symbol-name function, a special branch for a computed name. What XS is doing, instead is outputting a computed name for it, which is allowed in a different branch of `function.prototype.toString` but not for a built-in. The concern and the proposal here is around whether or not that should be permitted. Essentially, we are looking at the . . . the behaviour that is required when the function in question is built in. Currently, it is the case that the portion of the returned string that would be managed by a particular production must be the value of the [[InitialName]] slot. The [[InitialName]] slot is something that every built in function has. And it is set at creation to equal the "name" property, which in certain circumstances could be changed later. Most of the time, the initial name slot value is an IdentifierName, but again in the case of the symbol-named functions, it’s actually instead a computed property name that looks like “[Symbol.something]” optionally preceded by “get” or “set”. << Production: NativeFunction >> The question here, the normative aspect of the change is around whether or not we should permit the bracketed string literal to be considered valid for a built in function. It is important to note that in the case of a function which is not built in but also doesn’t have source text, such as one that is a proxy or the output of bind, must also conform to NativeFunction but doesn’t have this name constraint and so the XS output is valid. So the fundamental problem here, I think, is that we probably could nail down `function.prototype.toString` a little bit better. This particular change is normative in the direction of being more flexible, but I would be willing to listen to should it be less flexible . . . but the pull request here is, towards increased flexibility and I think that’s the starting point of the discussion for today. With that, I am ready to hit the queue. +RGN: This one is regarding `function.prototype.toString`. The highlight of the issue in question . . . behaviour of XS, not matching the other implementations with respect to the name portion of the output from the built-in `function.prototype.toString`. Other implementations always output either an identifier or nothing at all, or in the special cases of a symbol-name function, a special branch for a computed name. What XS is doing, instead is outputting a computed name for it, which is allowed in a different branch of `function.prototype.toString` but not for a built-in. The concern and the proposal here is around whether or not that should be permitted. Essentially, we are looking at the . . . the behaviour that is required when the function in question is built in. Currently, it is the case that the portion of the returned string that would be managed by a particular production must be the value of the [[InitialName]] slot. The [[InitialName]] slot is something that every built in function has. And it is set at creation to equal the "name" property, which in certain circumstances could be changed later. 
Most of the time, the initial name slot value is an IdentifierName, but again in the case of the symbol-named functions, it’s actually instead a computed property name that looks like “[Symbol.something]” optionally preceded by “get” or “set”. << Production: NativeFunction >> The question here, the normative aspect of the change is around whether or not we should permit the bracketed string literal to be considered valid for a built in function. It is important to note that in the case of a function which is not built in but also doesn’t have source text, such as one that is a proxy or the output of bind, must also conform to NativeFunction but doesn’t have this name constraint and so the XS output is valid. So the fundamental problem here, I think, is that we probably could nail down `function.prototype.toString` a little bit better. This particular change is normative in the direction of being more flexible, but I would be willing to listen to should it be less flexible . . . but the pull request here is, towards increased flexibility and I think that’s the starting point of the discussion for today. With that, I am ready to hit the queue. -MF: So you gave a single example there, where XS outputs a computed name looking thing, but it’s using empty string, which isn’t compelling because they could omit that name at that point. Match the implementations. Is there a case where you could cause XS to output something that wouldn’t be a valid identifier at that spot? +MF: So you gave a single example there, where XS outputs a computed name looking thing, but it’s using empty string, which isn’t compelling because they could omit that name at that point. Match the implementations. Is there a case where you could cause XS to output something that wouldn’t be a valid identifier at that spot? -RGN: yes. That is the case for some built ins, but also the case for the ones I was talking about, like when you bind it. You get things like this . . . and this . . (highlighted parts of PR). XS has the interesting property that the string inside the brackets for these functions always matches the name of the function as far as I can tell, which is not the case for the other implementations. There’s this, like, split between the "name" property of the function, and the content of the string output by `toString`. +RGN: yes. That is the case for some built ins, but also the case for the ones I was talking about, like when you bind it. You get things like this . . . and this . . (highlighted parts of PR). XS has the interesting property that the string inside the brackets for these functions always matches the name of the function as far as I can tell, which is not the case for the other implementations. There’s this, like, split between the "name" property of the function, and the content of the string output by `toString`. -BT: all right. MM has the next queue item. +BT: all right. MM has the next queue item. -MM: yeah. Um, I strongly object to the idea of making the spec more permissive and less deterministic. You know, the whole purpose of having a standards committee originally was because of the capability problems between browsers and the pathological game theory as we called it. All of the engine-makers have an interest, especially in things like this, where what the decision is matters much less than whether there’s agreement. We all have an interest in having the engines agree so that the programs that are out there, that were – tested against some engines work on other engines. 
We have a normative spec. We don’t have adequate test 262 coverage to flag those in error and that has a simple solution: add the test 262 tests to bring everyone in conformance with the normative spec, and in any case, over time, because of this – the compatibility issue, it is more important than the particular resolution of how strings print, we should be seeking to be more deterministic in the perspective, not less deterministic.
+MM: yeah. Um, I strongly object to the idea of making the spec more permissive and less deterministic. You know, the whole purpose of having a standards committee originally was because of the compatibility problems between browsers and the pathological game theory as we called it. All of the engine-makers have an interest, especially in things like this, where what the decision is matters much less than whether there’s agreement. We all have an interest in having the engines agree so that the programs that are out there, that were – tested against some engines work on other engines. We have a normative spec. We don’t have adequate test 262 coverage to flag those in error and that has a simple solution: add the test 262 tests to bring everyone in conformance with the normative spec, and in any case, over time, because of this – the compatibility issue, it is more important than the particular resolution of how strings print, we should be seeking to be more deterministic in the perspective, not less deterministic.

-RGN: I am extremely sympathetic to this point. And if there is appetite to nail down not just what goes between the function token and the left parenthesis, I would be willing to pursue not just that, but also the whitespace.
+RGN: I am extremely sympathetic to this point. And if there is appetite to nail down not just what goes between the function token and the left parenthesis, I would be willing to pursue not just that, but also the whitespace.

-MM: I would appreciate that. I would certainly support that.
+MM: I would appreciate that. I would certainly support that.

-BT: okay. JHD is on the queue next
+BT: okay. JHD is on the queue next

-JHD: yeah. So I asked in matrix, but I haven’t got an issue yet. In safari, I pulled up this string prototype dot symbol dot Iterator, and the dot name is the – symbol dot iterator and brackets and that’s fine. toString also contains the same thing. Is that currently valid or is your PR – what would be required to make it valid?
+JHD: yeah. So I asked in matrix, but I haven’t got an issue yet. In safari, I pulled up this string prototype dot symbol dot Iterator, and the dot name is the – symbol dot iterator and brackets and that’s fine. toString also contains the same thing. Is that currently valid or is your PR – what would be required to make it valid?

-RGN: what does it look like right now?
+RGN: what does it look like right now?

-JHD: the . . . function space brackets surrounding symbol dot iterator, parenthesis and the rest of it.
+JHD: the . . . function space brackets surrounding symbol dot iterator, parenthesis and the rest of it.

RGN: it looks like this one, but in place here is symbol dot iterator? (“`function [Symbol.iterator]() { [native code] }`”)

JHD: correct

-RGN: that is valid right now. And remains valid
+RGN: that is valid right now. And remains valid

-JHD: would it be then correct to say that your changes only allow a string literal inside the brackets. That’s the addition?
+JHD: would it be then correct to say that your changes only allow a string literal inside the brackets. That’s the addition?
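For concreteness, these are the kinds of strings being compared (illustrative only; exact whitespace and the bound-function output vary by engine, and the XS-style form is reconstructed from the discussion rather than verified):

```js
// A built-in with an ordinary IdentifierName in [[InitialName]]:
Math.max.toString();
// e.g. "function max() { [native code] }"

// A symbol-named built-in: [[InitialName]] is the computed form itself, so this
// shape is what the current spec text requires:
String.prototype[Symbol.iterator].toString();
// "function [Symbol.iterator]() { [native code] }"

// A bound function has no [[InitialName]] constraint, which is where engines
// diverge. Many print an anonymous NativeFunction:
Math.max.bind(null).toString();
// e.g. "function () { [native code] }"
// whereas XS reportedly prints a computed *string literal* name along the lines
// of `function ["bound max"]() { [native code] }`, the form this needs-consensus
// PR would explicitly permit.
```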
-RGN: it has to deal with this, between the `function` token and the left parenthesis `(`. The current spec allows any property name in the grammar, identifier or computed, but has an extra constraint in the algorithm for a built-in function. The one that would be changing, if we adapted this, is . . . this part. <> So the reason why the bracket symbol dot iterator is valid, not just valid but required, is because that’s what the [[InitialName]] slot of the function holds. So it would not be valid right now to have `function ["[Symbol.iterator]"]`. That would not be valid because [[InitialName]] itself starts with bracket S, not bracket quote. If you take it and drop it in, that’s valid now and remains valid. What is not valid is wrapping it in double quotes to make a string and wrapping that in brackets to make it computed. +RGN: it has to deal with this, between the `function` token and the left parenthesis `(`. The current spec allows any property name in the grammar, identifier or computed, but has an extra constraint in the algorithm for a built-in function. The one that would be changing, if we adapted this, is . . . this part. <> So the reason why the bracket symbol dot iterator is valid, not just valid but required, is because that’s what the [[InitialName]] slot of the function holds. So it would not be valid right now to have `function ["[Symbol.iterator]"]`. That would not be valid because [[InitialName]] itself starts with bracket S, not bracket quote. If you take it and drop it in, that’s valid now and remains valid. What is not valid is wrapping it in double quotes to make a string and wrapping that in brackets to make it computed. JHD: with your change, do the contents of the quote still have to match the initial name slot -RGN: yeah. If we go this way. The – it introduces a NativeFunctionName function, which says, for anything that is a computed property name where the value is a string literal, just evaluate the string and make your checks against the result. So . . . it would treat as equivalent this from XS, where the computed property name is a string containing bracketed contents, with this from JavaScriptCore, where the computed property name is a symbol. +RGN: yeah. If we go this way. The – it introduces a NativeFunctionName function, which says, for anything that is a computed property name where the value is a string literal, just evaluate the string and make your checks against the result. So . . . it would treat as equivalent this from XS, where the computed property name is a string containing bracketed contents, with this from JavaScriptCore, where the computed property name is a symbol. -JHD: Thank you. That clarifies it for me. +JHD: Thank you. That clarifies it for me. -BT: we have MF in the queue again. 9 minutes left on this top +BT: we have MF in the queue again. 9 minutes left on this top MF: have you spoken to XS about why they made this implementation -RGN: I have not, and there are issues around it anyway that need to be fixed no matter what. XS was just the discovery point for this particular weirdness. +RGN: I have not, and there are issues around it anyway that need to be fixed no matter what. XS was just the discovery point for this particular weirdness. -MF: it would make me more comfortable hearing if we have heard from XS about whether this change was intentional or easy for them to match the spec as-is. 
I see they see value in choosing the more descriptive name, but I can’t speculate beyond that +MF: it would make me more comfortable hearing if we have heard from XS about whether this change was intentional or easy for them to match the spec as-is. I see they see value in choosing the more descriptive name, but I can’t speculate beyond that RGN: Peter is not with us today, or at least not right now -MF: okay. +MF: okay. -RGN: honestly, though, my preferred outcome would be what MM suggested, which is more determinism that nails everything down even further. So I mean, I am willing to have anything that is better specified, but that can be editorial or normative. And if implementations are willing, I am willing to put in the spec work to nail things down either further, rather than going in this direction of more flexibility. But really, I want to know how the committee as a whole, feels about this area and any changes that might be made. +RGN: honestly, though, my preferred outcome would be what MM suggested, which is more determinism that nails everything down even further. So I mean, I am willing to have anything that is better specified, but that can be editorial or normative. And if implementations are willing, I am willing to put in the spec work to nail things down either further, rather than going in this direction of more flexibility. But really, I want to know how the committee as a whole, feels about this area and any changes that might be made. -BT: all right. MM.. +BT: all right. MM.. -MM: you said normative or non-normative in term of pinning things down. It really needs to be normative. +MM: you said normative or non-normative in term of pinning things down. It really needs to be normative. -RGN: if I said that, I misspoke. There’s an editorial aspect to clean up toString, which I separated out into #2828 and that has not landed yet. But no, I am specifically talking in this discussion, about the normative changes. +RGN: if I said that, I misspoke. There’s an editorial aspect to clean up toString, which I separated out into #2828 and that has not landed yet. But no, I am specifically talking in this discussion, about the normative changes. -MM: okay. So I certainly object to anything that is more permissive than the current spec. And I certainly support a normative spec as deterministic as possible and deterministic, if that’s possible. +MM: okay. So I certainly object to anything that is more permissive than the current spec. And I certainly support a normative spec as deterministic as possible and deterministic, if that’s possible. -RGN: okay. I think we are willing to pursue it regardless. So maybe the question around that now would be, if it’s better to do so as a needs-consensus pull request, or series of pull requests, or instead as a proposal? +RGN: okay. I think we are willing to pursue it regardless. So maybe the question around that now would be, if it’s better to do so as a needs-consensus pull request, or series of pull requests, or instead as a proposal? -MM: I think it needs – my sense; it should be a proposal. It can be – proposals that don’t generate a lot of controversy can go through the process quickly. But I think it should be a proposal. +MM: I think it needs – my sense; it should be a proposal. It can be – proposals that don’t generate a lot of controversy can go through the process quickly. But I think it should be a proposal. -RNG: ok. With that, I think I would like to take it in that direction. 
Given that there’s no – no proposal yet for it, but I am willing to do that, you know, essentially immediately, how does the committee feel about giving it stage 1? +RNG: ok. With that, I think I would like to take it in that direction. Given that there’s no – no proposal yet for it, but I am willing to do that, you know, essentially immediately, how does the committee feel about giving it stage 1? -MM: I am obviously in favour of that. +MM: I am obviously in favour of that. BT: Are we sure what the proposal is. Is it what is in this PR -RGN: The proposal will be “reduce flexibility in `Function.prototype.toString`”. +RGN: The proposal will be “reduce flexibility in `Function.prototype.toString`”. -BT: but there isn’t a proposal written up? +BT: but there isn’t a proposal written up? -RGN: no. Because it’s actually a reverse course from this pull request. Not only not do this, but also, go even harder in the other direction. +RGN: no. Because it’s actually a reverse course from this pull request. Not only not do this, but also, go even harder in the other direction. -BT: okay. If there’s no reason for a rush here, just write a proposal and come back. We can, you know, move fairly quickly through the stages of it. +BT: okay. If there’s no reason for a rush here, just write a proposal and come back. We can, you know, move fairly quickly through the stages of it. -RGN: okay. +RGN: okay. -BT: in fact, if you have something that you can put to paper, by the end of the meeting, we may have some additional time. +BT: in fact, if you have something that you can put to paper, by the end of the meeting, we may have some additional time. -RGN: yeah. I think – I think I can do that. I will definitely give it a shot. As far as this topic before it’s ready, is there any other feedback? +RGN: yeah. I think – I think I can do that. I will definitely give it a shot. As far as this topic before it’s ready, is there any other feedback? -BT: the queue is empty. +BT: the queue is empty. -RGN: okay. Well, thanks, then. Expect to hear more on this in the near future +RGN: okay. Well, thanks, then. Expect to hear more on this in the near future -BT: all right. Thank you. +BT: all right. Thank you. ### Conclusion/Decision @@ -464,95 +463,95 @@ Presenter: Allen Wirfs-Brock (AWB) - [slides](https://docs.google.com/presentation/d/1eNKAGEoa6WGgg9IEhTBHdHj1jA7aJFPO5PIsYminP3E/edit) - All right. Next up, we have Allen. Allen, welcome back. + All right. Next up, we have Allen. Allen, welcome back. -AWB: hey. You guys hear me? +AWB: hey. You guys hear me? -BT: yeah. You sound good +BT: yeah. You sound good -AWB: cool. Let’s see if I can . . . figure out here how to make my slides visible. And there it is. Hello, everybody. Most of you probably don’t know me, but I am Allen, and I was the project editor for ES5 and ES6. I’ve been working with IS, the secretary in a bit to look at PDF issues and stuff. I wanted to fill you in, on what I have done. And present to you some things that you need to consider for TC39 is going to do going forward in this area. So let’s get started. +AWB: cool. Let’s see if I can . . . figure out here how to make my slides visible. And there it is. Hello, everybody. Most of you probably don’t know me, but I am Allen, and I was the project editor for ES5 and ES6. I’ve been working with IS, the secretary in a bit to look at PDF issues and stuff. I wanted to fill you in, on what I have done. And present to you some things that you need to consider for TC39 is going to do going forward in this area. 
So let’s get started. -AWB: So Ecma always had a standard for formatting standards as a book. And traditionally standards, of course, were published essentially as books. They are documents on the Ecma’s website, you can go to that in great detail, describe the formatting requirements of these book-like standards. And they, themselves, that description, I believe, is derived from even more detailed standards for what a standard is supposed to look like. And there is a Microsoft word template, a docx file that lays out the matrix for such a format. Up through 2015, the standards were developed exclusively in Microsoft word using that template. They generated nice printed documents. But, you know, by the time ES6 was done, it was pretty clear that most users of the spec were using it on the web and make a much better experience with a presentation of the standard that was really authored to be used on the web. And so you have got what we have today. And it’s a great format for online presentation. And we are authored using the ECmarkup tools. And so people don’t have to learn word and just as importantly, the standard is really now too big to work with Word for some several reasons we discovered. +AWB: So Ecma always had a standard for formatting standards as a book. And traditionally standards, of course, were published essentially as books. They are documents on the Ecma’s website, you can go to that in great detail, describe the formatting requirements of these book-like standards. And they, themselves, that description, I believe, is derived from even more detailed standards for what a standard is supposed to look like. And there is a Microsoft word template, a docx file that lays out the matrix for such a format. Up through 2015, the standards were developed exclusively in Microsoft word using that template. They generated nice printed documents. But, you know, by the time ES6 was done, it was pretty clear that most users of the spec were using it on the web and make a much better experience with a presentation of the standard that was really authored to be used on the web. And so you have got what we have today. And it’s a great format for online presentation. And we are authored using the ECmarkup tools. And so people don’t have to learn word and just as importantly, the standard is really now too big to work with Word for some several reasons we discovered. -AWB: But there’s been a problem since then, and how do you turn this into a book? Because there are important constituents of Ecma that have an expectation that important standards are published as nice books. And in particular libraries and archives, national archives and other standards of bodies have an expectation for these book-like standards. And so this has been quite a pain for the Ecma and the secretriat since then. So in . . . in 2016, when we were going to to the new process for authoring the spec – let’s see here. Well, okay. Let me give you an example here. So this an example. So basically, the best that could be done in 2016 was to – okay – load the HTML version into a browser forcing the table of contents to be linearized at the front of the doc instead of in the side – interactive side bar, and then telling the browser to print. And you got this long continuous thing with arbitrary break pages. And various other issues like anything that required horizontal scrolling would get truncated and you ended up with . . . you see something at the front of the document like on the left here . . . 
when the expectation of these external customers is something on the right. And by the way, this thing on the right is now how it looks, until the work I recently did. So back then . . . okay. +AWB: But there’s been a problem since then, and how do you turn this into a book? Because there are important constituents of Ecma that have an expectation that important standards are published as nice books. And in particular libraries and archives, national archives and other standards of bodies have an expectation for these book-like standards. And so this has been quite a pain for the Ecma and the secretriat since then. So in . . . in 2016, when we were going to to the new process for authoring the spec – let’s see here. Well, okay. Let me give you an example here. So this an example. So basically, the best that could be done in 2016 was to – okay – load the HTML version into a browser forcing the table of contents to be linearized at the front of the doc instead of in the side – interactive side bar, and then telling the browser to print. And you got this long continuous thing with arbitrary break pages. And various other issues like anything that required horizontal scrolling would get truncated and you ended up with . . . you see something at the front of the document like on the left here . . . when the expectation of these external customers is something on the right. And by the way, this thing on the right is now how it looks, until the work I recently did. So back then . . . okay. -AWB: So back in 2016, Brian, I think, who did a lot of the – this original work in migrating the EC markup and creating it, you know, we talked about this problem of printing. And I remember him saying, he thought that CSS page media support, whatever that was, was a likely solution. But I seem to recall he went off and he talked to some of the browser people at Microsoft, and he came back and said, basically, well, unfortunately, browser don’t support any of this. So while there’s all these great features in theory in CSS, for doing exactly what we want to do, is producing nicely-formatted books, browsers just don’t support them. We just kind of moved ahead and the best we could do, and kind of held off Ecma saying we can’t do any better. +AWB: So back in 2016, Brian, I think, who did a lot of the – this original work in migrating the EC markup and creating it, you know, we talked about this problem of printing. And I remember him saying, he thought that CSS page media support, whatever that was, was a likely solution. But I seem to recall he went off and he talked to some of the browser people at Microsoft, and he came back and said, basically, well, unfortunately, browser don’t support any of this. So while there’s all these great features in theory in CSS, for doing exactly what we want to do, is producing nicely-formatted books, browsers just don’t support them. We just kind of moved ahead and the best we could do, and kind of held off Ecma saying we can’t do any better. -AWB: So actually almost a year ago, I was talking to them he said, can’t we do better? And I looked around a bit more at – at the CSS page media and stuff and I discovered that for me, I discovered that the fact, while browsers can’t handle this, there are a number of nonbrowser renders that do. So . . . you know, you take your HTML and out comes a PDF that is nicely formatted, if you’re HTML has the right imputing and stuff. 
Most of these are commercial products, some of them are quite expensive, and some of them have – have – you know, quite restrictive licencing. You could only run it on a CPU with at most 4 cores and things like that. And but there are solutions out there, and there are people who use these to create very sophisticated book-like things using CSS and HTML. But more recently, I noticed that there is this one open-source project called “Paged.js”. It’s a polyfill of this media support and it works – it was intended to work primarily in chrome-based browsers. It is a polyfill for the CSS constructs that support paged media and it does it in the browser by taking the DOM, interpreting both what the layout – the browser has initially created and then looking at the CSS paged media properties that are in there, and figuring out how big each page will be and creating a bunch of individual containers and flowing it into them. In the browser, you get a paged view of the document, and then you can use the regular print function of – of a browser, and Chrome is smart enough that you get nice, paged output with all the footers and headers that are described using the CSS paged media constructs . . .
+AWB: so that's what I did. I took the – I took a week investigating this process. What it would take to do this with the 2022 versions of the standard. And what I did, the basic idea was, to . . . start with the output of ecmarkup. Basically, I started with the – the HTML output, and the CSS file for ecmarkup. And I started just running it through this paged JS polyfill in a chromium-based browser to see what would happen. And I discovered various things that were in the markup that would cause it to crash, cause paged JS to crash or not render appropriately.
And so, you know, I did an iterative process basically of fixing things, going back, editing the HTML file, running it through the process again. And initially, I did this experimenting with the 2021 edition of Ecma 262. And I got far enough with it to say to the – to them that I think we can do this for the – for the new standards. And so . . . we talked about that and I said, okay. Let’s go ahead and do it. And so as soon as GA approved the 2022 editions, I sat down and applied this process to both 262 and 402. And it took me about a week and a half to do the two of them. And they’re now up on the Ecma website as the PDFs.
+AWB: so that's what I did. I took the – I took a week investigating this process. What it would take to do this with the 2022 versions of the standard. And what I did, the basic idea was, to . . . start with the output of ecmarkup. Basically, I started with the – the HTML output, and the CSS file for ecmarkup. And I started just running it through this paged JS polyfill in a chromium-based browser to see what would happen. And I discovered various things that were in the markup that would cause it to crash, cause paged JS to crash or not render appropriately. And so, you know, I did an iterative process basically of fixing things, going back, editing the HTML file, running it through the process again. And initially, I did this experimenting with the 2021 edition of Ecma 262. And I got far enough with it to say to the – to them that I think we can do this for the – for the new standards. And so . . . we talked about that and I said, okay. Let’s go ahead and do it. And so as soon as GA approved the 2022 editions, I sat down and applied this process to both 262 and 402. And it took me about a week and a half to do the two of them. And they’re now up on the Ecma website as the PDFs.

-AWB: There’s a number of things I had to do. To do this, you have to modify the CSS, so it has, you know, has the page metrics or what – the dimensions, and line and page layouts, where the headers and footers are. And the Ecma standard has things like, you know, where the page number is, flips from side to side. And the front matter has Roman numerals. So that’s all specified there in the CSS file. And the text is justified. And there’s a number of convenient classes, convenience CSS classes that are created. So you can make it easy at the HTML level to specify if I need a page break or I don’t want a page break here and things like that. So anyway, I did that. It was a bunch of manual work involved. But it didn’t actually take all that long.
+AWB: There’s a number of things I had to do. To do this, you have to modify the CSS, so it has, you know, has the page metrics or what – the dimensions, and line and page layouts, where the headers and footers are. And the Ecma standard has things like, you know, where the page number is, flips from side to side. And the front matter has Roman numerals. So that’s all specified there in the CSS file. And the text is justified. And there’s a number of convenient classes, convenience CSS classes that are created. So you can make it easy at the HTML level to specify if I need a page break or I don’t want a page break here and things like that. So anyway, I did that. It was a bunch of manual work involved. But it didn’t actually take all that long.
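As a rough sketch of the kind of pipeline being described here (ecmarkup's HTML plus a print CSS, paginated by the Paged.js polyfill in a Chromium browser and then printed to PDF), something like the following could drive it headlessly. The package, polyfill URL, selector, and paths are illustrative assumptions, not part of any existing TC39 or Ecma tooling:

```js
// Sketch: load ecmarkup's HTML output, let the Paged.js polyfill re-flow it into
// fixed-size pages per the @page / paged-media rules in the print CSS, then ask
// Chromium to print the paginated result to PDF.
// Assumes `npm install puppeteer`; the unpkg URL and the `.pagedjs_pages`
// container class are Paged.js conventions, not TC39 tooling.
const puppeteer = require('puppeteer');

async function printSpec(htmlPath, pdfPath) {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.goto(`file://${htmlPath}`, { waitUntil: 'networkidle0' });

  // The polyfill build starts paginating automatically once it loads.
  await page.addScriptTag({ url: 'https://unpkg.com/pagedjs/dist/paged.polyfill.js' });
  await page.waitForSelector('.pagedjs_pages', { timeout: 120000 });

  // preferCSSPageSize lets the @page size win over Puppeteer's default paper size.
  await page.pdf({ path: pdfPath, preferCSSPageSize: true, printBackground: true });
  await browser.close();
}

printSpec('/tmp/ecma262.html', '/tmp/ecma262.pdf').catch(console.error);
```

As AWB notes, a script like this only gets you to a candidate PDF; the aesthetic pagination decisions (where to break productions, tables, and notes) still need a human pass.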
-AWB: And the real question for you guys, I think, to consider is, I don’t think – I would hope nobody thinks that it would be a good idea to go back to the really bad PDFs being produced before and I am sure the secretriat wouldn’t like that, the standards, the peers would hold their nose again. So I think as a group, or maybe it’s the editors and chairs as a group, but you know, TC39 needs to think about what they want to do going forward. And so I just wanted to layout a couple of options for you guys to think about and talk about and eventually make some decisions on. One of which is, assuming you buy into that CSS page media applied to the – the core HTML documents that are produced by TC39, is the way to proceed? One decision is whether to continue to use paged JS or switch to a commercial project. I suggest stick with paged JS. Some of the commercial products have some more capabilities, arguably they may be more stage in the long run, the commercial projects, paged JS is a pretty young, open-sourced project. And – and they keep work on producing – they definitely have people working on it and they have plans and a road map for their future. But it – you know, whether in the long term, it survives or not, maybe really knowses. For sure – +AWB: And the real question for you guys, I think, to consider is, I don’t think – I would hope nobody thinks that it would be a good idea to go back to the really bad PDFs being produced before and I am sure the secretriat wouldn’t like that, the standards, the peers would hold their nose again. So I think as a group, or maybe it’s the editors and chairs as a group, but you know, TC39 needs to think about what they want to do going forward. And so I just wanted to layout a couple of options for you guys to think about and talk about and eventually make some decisions on. One of which is, assuming you buy into that CSS page media applied to the – the core HTML documents that are produced by TC39, is the way to proceed? One decision is whether to continue to use paged JS or switch to a commercial project. I suggest stick with paged JS. Some of the commercial products have some more capabilities, arguably they may be more stage in the long run, the commercial projects, paged JS is a pretty young, open-sourced project. And – and they keep work on producing – they definitely have people working on it and they have plans and a road map for their future. But it – you know, whether in the long term, it survives or not, maybe really knowses. For sure – -BT: sorry to interpret, there’s quite a few people in the queue and 4 minutes left in the topic. We have to go through it pretty quick. +BT: sorry to interpret, there’s quite a few people in the queue and 4 minutes left in the topic. We have to go through it pretty quick. -AWB: I am almost done. Anyway, so the question is, assuming you’re going to stay with this general approach, how do you want to do it? Do you want to repeat the 2022 process it’s not that hard to do. And I have a document and work to do, to document that process. Something I have noticed that – another one is the simply diff, the HTML level from 2022 to 2023. And so carried forward the 2022 augmented HTML, if you will, that is driving this process, and just applying this. It’s probably less work. And the third one is, trying to automate this process so it’s done easier and detectly from the original EC markup source. 
And one important consideration I wanted to mention here was that good pagination requires aesthetic decisions and so some human has to look at it, to really, you know, decide that the page breaks are right. At the very least, you need a PDF editor important for that. +AWB: I am almost done. Anyway, so the question is, assuming you’re going to stay with this general approach, how do you want to do it? Do you want to repeat the 2022 process it’s not that hard to do. And I have a document and work to do, to document that process. Something I have noticed that – another one is the simply diff, the HTML level from 2022 to 2023. And so carried forward the 2022 augmented HTML, if you will, that is driving this process, and just applying this. It’s probably less work. And the third one is, trying to automate this process so it’s done easier and detectly from the original EC markup source. And one important consideration I wanted to mention here was that good pagination requires aesthetic decisions and so some human has to look at it, to really, you know, decide that the page breaks are right. At the very least, you need a PDF editor important for that. -AWB: Here is how you might automate the work flow. Currently, the source files, go through the EC markup program, CSS. An option that says, well, generator HTML with some variation specifically that support the page markup and use as the alternative CSS file. Run that through the process – run that through the processing. If you need manual edits, make them at the EC markup level. Add classes that are specific to the PDF presentation. And it won’t effect the web presentation and spit it out. And so that’s basically it. So . . . it’ your decision to make and think about. I can answer any questions anyone else in the 30 seconds remaining or whatever +AWB: Here is how you might automate the work flow. Currently, the source files, go through the EC markup program, CSS. An option that says, well, generator HTML with some variation specifically that support the page markup and use as the alternative CSS file. Run that through the process – run that through the processing. If you need manual edits, make them at the EC markup level. Add classes that are specific to the PDF presentation. And it won’t effect the web presentation and spit it out. And so that’s basically it. So . . . it’ your decision to make and think about. I can answer any questions anyone else in the 30 seconds remaining or whatever -BT: we have like a minute and a half. +BT: we have like a minute and a half. AWB: okay -BT: Let’s go quick. SFC wants to know if you resolved text searching in the PDF +BT: Let’s go quick. SFC wants to know if you resolved text searching in the PDF -AWB: yes. That was resolved. Don’t use a MAC preview to concatenate +AWB: yes. That was resolved. Don’t use a MAC preview to concatenate -BT: DE is on the queue next. +BT: DE is on the queue next. -DE: yes. As we have been discussing in TC39 delegates, I think we previously agreed as a committee to ask the secretariat for a contractor to help with this task. If you’re saying you don’t want to do this task forever, which makes perfect sense, maybe we work on boarding a contractor for this work. I am not sure we have volunteers in TC39 to do this manual work that you talked about +DE: yes. As we have been discussing in TC39 delegates, I think we previously agreed as a committee to ask the secretariat for a contractor to help with this task. 
If you’re saying you don’t want to do this task forever, which makes perfect sense, maybe we work on boarding a contractor for this work. I am not sure we have volunteers in TC39 to do this manual work that you talked about -AWB: so I guess two points I want to make: to do a good job, these are aesthetic decisions where understanding the materials are important. For example, where to make sense to break an HTML production or not. So that’s one consideration. And so just a front-end HTML-CSS contractor. I don’t know how well they will do on that. Cumulatively, this is a couple weeks of work for a year. It looks like . . . and so I would think it’s – it would make sense for TC39 to try to find somebody who can commit to that level of work. It’s not – it’s not a lot. It’s not a big job. +AWB: so I guess two points I want to make: to do a good job, these are aesthetic decisions where understanding the materials are important. For example, where to make sense to break an HTML production or not. So that’s one consideration. And so just a front-end HTML-CSS contractor. I don’t know how well they will do on that. Cumulatively, this is a couple weeks of work for a year. It looks like . . . and so I would think it’s – it would make sense for TC39 to try to find somebody who can commit to that level of work. It’s not – it’s not a lot. It’s not a big job. -DE: yeah. It seems reasonable to look for someone who can do a couple of weeks of work. If we can’t find them, search for a contractor. +DE: yeah. It seems reasonable to look for someone who can do a couple of weeks of work. If we can’t find them, search for a contractor. -BT: we are over our timebox. We do have some – +BT: we are over our timebox. We do have some – -IS: it’s very, very quick. +IS: it’s very, very quick. -BT: we have some spare time, is there any observation to extending this item up to 15 minutes? All right. Let’s go to 11:45 at the latest. Go ahead. +BT: we have some spare time, is there any observation to extending this item up to 15 minutes? All right. Let’s go to 11:45 at the latest. Go ahead. -IS: okay. I am from Ecma. So basically, when we started this project we tried to find a “contractor''. And actually, we had close contact with one of the professional companies (PDFreactor) who had a leading professional renderer program. And the company which I have in contact with, which had the best marks also in Allen’s presentation, what it could do; and we worked with them quite a lot because they had quite a large number of professional user companies “on the hook”. PDFreactor, and especially one gentleman in that company was very helpful to us. But unfortunately, we (or better he) was not able to find anybody among their good contacts, and so that was the point, you know, when we – when we went back in the discussion to AWB what to do and how to progress with the project. I mean, AWB was involved in the entire process, of course. And then I told AWB that unfortunately we couldn’t get any “contractor” we have to carry on with the project ourselves. So we tried to get them, with the help of this company, but it was not successful. It doesn’t mean, of course, you know, if somebody else tries to look for it and then they have more luck. And then you find somebody. But we also tried that other avenue and in the end, we came up with the current solution, that we make a “quick and dirty” solution for ES2022 internally, and then we have one more year time to work out a more professional solution for ES2023 and beyond. 
So this the full story behind it. Thank you. +IS: okay. I am from Ecma. So basically, when we started this project we tried to find a “contractor''. And actually, we had close contact with one of the professional companies (PDFreactor) who had a leading professional renderer program. And the company which I have in contact with, which had the best marks also in Allen’s presentation, what it could do; and we worked with them quite a lot because they had quite a large number of professional user companies “on the hook”. PDFreactor, and especially one gentleman in that company was very helpful to us. But unfortunately, we (or better he) was not able to find anybody among their good contacts, and so that was the point, you know, when we – when we went back in the discussion to AWB what to do and how to progress with the project. I mean, AWB was involved in the entire process, of course. And then I told AWB that unfortunately we couldn’t get any “contractor” we have to carry on with the project ourselves. So we tried to get them, with the help of this company, but it was not successful. It doesn’t mean, of course, you know, if somebody else tries to look for it and then they have more luck. And then you find somebody. But we also tried that other avenue and in the end, we came up with the current solution, that we make a “quick and dirty” solution for ES2022 internally, and then we have one more year time to work out a more professional solution for ES2023 and beyond. So this the full story behind it. Thank you. USA:: next, we have WH -WH: Looking at past ECMAScript PDFs, the older ones such as Edition 5 have a nice sidebar with the entire table of contents as an outline you can click on and go to the section. The newest one doesn’t support that. Do any of these tools support that? What’s required to make it work? +WH: Looking at past ECMAScript PDFs, the older ones such as Edition 5 have a nice sidebar with the entire table of contents as an outline you can click on and go to the section. The newest one doesn’t support that. Do any of these tools support that? What’s required to make it work? -AWB: sorry. Yeah. You’re talking about PDF bookmarks, basically. That is one feature that paged JS doesn’t currently support, which is the generation of PDF bookmarks, which is what is used, the PDF side bar and such. Some of the other commercial packages do. The table of contents, in the PDF, does work. You click on a page and it takes you to that page or whatever. There isn’t a side bar. At least my rationalization for now is that the primary purpose of this document, this PDF isn’t for onscreen viewing because the HTML is vastly superior for that, but it’s for printing books and printed books you don’t have that side bar. So . . . but that could go into the decision of what to use for the . . . for the renderer. +AWB: sorry. Yeah. You’re talking about PDF bookmarks, basically. That is one feature that paged JS doesn’t currently support, which is the generation of PDF bookmarks, which is what is used, the PDF side bar and such. Some of the other commercial packages do. The table of contents, in the PDF, does work. You click on a page and it takes you to that page or whatever. There isn’t a side bar. At least my rationalization for now is that the primary purpose of this document, this PDF isn’t for onscreen viewing because the HTML is vastly superior for that, but it’s for printing books and printed books you don’t have that side bar. So . . . 
but that could go into the decision of what to use for the . . . for the renderer. -WH: One of the critical uses is archiving. PDFs are self-contained. With HTML, you never know when you got the whole bundle or have some missing resources. +WH: One of the critical uses is archiving. PDFs are self-contained. With HTML, you never know when you got the whole bundle or have some missing resources. -AWB: right. +AWB: right. -USA: okay. Next up, we have Michael +USA: okay. Next up, we have Michael -MF: hi. Can you give us an idea of what portion of the – what changes you have done or changes to the kind of like infrastructural parts and what portion of the changes were changes to like content parts. Because the infrastructural starts, I would like to ingrate into Ecma, so you don’t have to repeat year after year +MF: hi. Can you give us an idea of what portion of the – what changes you have done or changes to the kind of like infrastructural parts and what portion of the changes were changes to like content parts. Because the infrastructural starts, I would like to ingrate into Ecma, so you don’t have to repeat year after year -AWB: it’s almost all structural changes. Some of the customs like ECU grammar would break sometimes. It just – paged JS has a hard time with custom elements. And so the solution to that is to turn them into DIVs. And that’s one of the things, if we had this alternative paths with PDF output option is turn those into DIVs. In terms of the actual content, the only content change I had to make was, there was some wide tables relating to modules in the Ecma 262. There’s no way they would fit on the page. If I pivot them, switch the columns and the row, they would fit nicely on page. And I would recommend that in the master document because that presentation is just as nice as one that is currently there. The only other content-like change I had to do was, I manually – I semi-manually created the table of contents. And again, I think that’s something that could be done within EC markup. +AWB: it’s almost all structural changes. Some of the customs like ECU grammar would break sometimes. It just – paged JS has a hard time with custom elements. And so the solution to that is to turn them into DIVs. And that’s one of the things, if we had this alternative paths with PDF output option is turn those into DIVs. In terms of the actual content, the only content change I had to make was, there was some wide tables relating to modules in the Ecma 262. There’s no way they would fit on the page. If I pivot them, switch the columns and the row, they would fit nicely on page. And I would recommend that in the master document because that presentation is just as nice as one that is currently there. The only other content-like change I had to do was, I manually – I semi-manually created the table of contents. And again, I think that’s something that could be done within EC markup. -MF: okay. I am looking forward to seeing your writeup, then. +MF: okay. I am looking forward to seeing your writeup, then. -AWB: yeah. Thank you +AWB: yeah. Thank you -USA: next we have Shane. +USA: next we have Shane. -SFC: yeah. Thank you for your work on this. I think that we would all agree that, you know, in the long term, it would be good to have be an automated process. And in terms of the ways to get to that, I think that it would be helpful for us to establish a list of requirements that we want out of the generated PDFs to evaluate one of the other solutions for that. 
And then hopefully, we can get to a point where we run this as a GitHub action, the PDF is generated. Then you download it off the GitHub actions. We already do that for documentation and for the HTML version of the perspective. Why not for the PDF version of the perspective? I think that, you know, these are the kinds of problems that we should be able to solve. You know, and . . . yeah. I just wanted to start that conversation. That’s all. Thank you. +SFC: yeah. Thank you for your work on this. I think that we would all agree that, you know, in the long term, it would be good to have be an automated process. And in terms of the ways to get to that, I think that it would be helpful for us to establish a list of requirements that we want out of the generated PDFs to evaluate one of the other solutions for that. And then hopefully, we can get to a point where we run this as a GitHub action, the PDF is generated. Then you download it off the GitHub actions. We already do that for documentation and for the HTML version of the perspective. Why not for the PDF version of the perspective? I think that, you know, these are the kinds of problems that we should be able to solve. You know, and . . . yeah. I just wanted to start that conversation. That’s all. Thank you. -AWB: yeah. I understand. Yeah. I agree. I think that’s the ideal forum, is that edits are done in terms of the EC markup level and runs through the process. The one caveat I throw it, you don’t have to do this on a weekly or daily or . . . but before final publication, a human is going to have to take some time and go through the documents and say, hey. Now that’s a bad place to do a page break. Let’s tweak the class annotations here and get a better page break. +AWB: yeah. I understand. Yeah. I agree. I think that’s the ideal forum, is that edits are done in terms of the EC markup level and runs through the process. The one caveat I throw it, you don’t have to do this on a weekly or daily or . . . but before final publication, a human is going to have to take some time and go through the documents and say, hey. Now that’s a bad place to do a page break. Let’s tweak the class annotations here and get a better page break. -SFC: rather than having like a contractor to do this every year, manually, it would be better to have a contractor automate the process once and we don’t have to have a contractor anymore because, then, you know, we just – you know, because the PDFs are always generated in the exact same way every time and if there’s a page break in a weird place, that’s one, small change to make upstream in the EC markup and that problem is fixed. But we don’t need to have, like, a high-skill contractor for that +SFC: rather than having like a contractor to do this every year, manually, it would be better to have a contractor automate the process once and we don’t have to have a contractor anymore because, then, you know, we just – you know, because the PDFs are always generated in the exact same way every time and if there’s a page break in a weird place, that’s one, small change to make upstream in the EC markup and that problem is fixed. But we don’t need to have, like, a high-skill contractor for that -AWB: yeah. I just – I would say I suspect most of the work in automated this is actually in the EC markup HTML generator. I don’t know if that is something you want to stick – a contractor or not. I don’t know which of you guys currently maintain, you know, that – that tool chain. +AWB: yeah. 
I just – I would say I suspect most of the work in automated this is actually in the EC markup HTML generator. I don’t know if that is something you want to stick – a contractor or not. I don’t know which of you guys currently maintain, you know, that – that tool chain. -DE: so to maybe draw a conclusion to this discussion for next steps. In the chat, there’s an interest in the editors in looking into this process and seeing if it can be made work well enough without too much work going forward. And if there’s more support needed, you know, MF did an extensive search for type setters, finding 4 different for quotes. Not for a software licence, but for a human. If we find that after a decent amount of looking into improving automation this we need significant human work, then we will probably be coming back to the secretriat asking for this professional support, unless somebody on committee can do this. This has been presented before and no volunteers, but maybe that will change. Does that capture a conclusion for this topic? +DE: so to maybe draw a conclusion to this discussion for next steps. In the chat, there’s an interest in the editors in looking into this process and seeing if it can be made work well enough without too much work going forward. And if there’s more support needed, you know, MF did an extensive search for type setters, finding 4 different for quotes. Not for a software licence, but for a human. If we find that after a decent amount of looking into improving automation this we need significant human work, then we will probably be coming back to the secretriat asking for this professional support, unless somebody on committee can do this. This has been presented before and no volunteers, but maybe that will change. Does that capture a conclusion for this topic? -BT?: Sounds right to me. +BT?: Sounds right to me. -AWB: okay. Let me end by saying, as you guys think about and you want to drill in it more, I am available to talk to people, and . . . and, you know, tell you a bit more about my experience as you get into it. So . . . so thanks. +AWB: okay. Let me end by saying, as you guys think about and you want to drill in it more, I am available to talk to people, and . . . and, you know, tell you a bit more about my experience as you get into it. So . . . so thanks. -BT: thank you. AWB, for this excellent work. +BT: thank you. AWB, for this excellent work. ### Conclusion/Decision @@ -566,17 +565,17 @@ Presenter: Jordan Harband (JHD) - [proposal](https://github.com/tc39/proposal-hashbang) -JHD: all righty. Hi, everyone. We have had this proposal for hash bang comments for a while. It was championd by BFS who is no longer in the committee. Stage 3 shipping and chrome, firefox and chakra core and node JS and safari and XS. I will update this before this is archived. My hope is that something that stage 3 in the shipping virtually everywhere and has an open perspective PR approved by an editor is acceptable for stage 4. So any observations to stage 4? Or consensus for stage 4? +JHD: all righty. Hi, everyone. We have had this proposal for hash bang comments for a while. It was championd by BFS who is no longer in the committee. Stage 3 shipping and chrome, firefox and chakra core and node JS and safari and XS. I will update this before this is archived. My hope is that something that stage 3 in the shipping virtually everywhere and has an open perspective PR approved by an editor is acceptable for stage 4. So any observations to stage 4? Or consensus for stage 4? 
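As an editorial aside for readers skimming the notes: a minimal sketch of what the hashbang grammar permits (illustrative only). The `#!` line is recognized only at the very start of a Script or Module, and the engine otherwise skips it the way it skips a single-line comment.

```js
#!/usr/bin/env node
// The hashbang line above must be the very first characters of the source
// text. The JavaScript parser ignores it, while a host such as Node.js can
// use it to make the file directly executable from a shell.
console.log("hashbang comments are part of the language");
```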
BT: let’s give folks a couple of seconds JHD: Of course -BT: In the room, we have thumb’s up. +BT: In the room, we have thumb’s up. -BT: The queue remains empty. I think we have stage 4 consensus. +BT: The queue remains empty. I think we have stage 4 consensus. -BT: awesome. Congratulations, BFS. +BT: awesome. Congratulations, BFS. ### Conclusion/Decision @@ -586,55 +585,55 @@ BT: awesome. Congratulations, BFS. Presenter: Jordan Harband (JHD) -JHD: all right. Is that on? All righty. Okay. So there is a symbol inside the Ecma 402 spec called the fall back symbol. This is the – the way you get it, this is my code from the package that gets it. Call into format like this. And then you get the symbols off the object, and you look for the one that has the appropriate description, depending on which browser you’re in. This one is the [inaudible] so the symbol in the specification is a same realm symbol. A standard unique symbol. Symbol parenthesis and to fall back string? As the description. However, V8, I believe, implements it’s a well known symbol. In other words, the same value across different realms. The choices – so there’s a deviation between some implications and the spec. Other implications, all implemented as same-realm symbol. So technically, we don’t have to do anything. We could make sure the test 262 tests are up to date and V8 will decide if or when to make their implication match the specification. But it seemed a good idea to get affirmative, to reaffirm consensus of that’s what we want. I don’t have any strong opinion whether it’s same or cross-realm. There are kind of – I believe there are arguments in favour of either course or against either course. There are tradeoffs no matter which way you go. One of the important reasons to answer the question is that there are – is another proposal later in the agenda to answer if something is a well-known symbol or not, so this is an open question that I would want resolved. There’s the issue that – if it’s a well known symbol, there is cross-realm instrickensic, which complicates things for the lockdown environments. So, yeah. I am sure there are folks on the queue to add context to those sorts of concerns. But my hope is that we end this agenda topic with either yes, it is same realm and we will make sure test matches that or no, change to be cross-realm and then we can discuss it. +JHD: all right. Is that on? All righty. Okay. So there is a symbol inside the Ecma 402 spec called the fall back symbol. This is the – the way you get it, this is my code from the package that gets it. Call into format like this. And then you get the symbols off the object, and you look for the one that has the appropriate description, depending on which browser you’re in. This one is the [inaudible] so the symbol in the specification is a same realm symbol. A standard unique symbol. Symbol parenthesis and to fall back string? As the description. However, V8, I believe, implements it’s a well known symbol. In other words, the same value across different realms. The choices – so there’s a deviation between some implications and the spec. Other implications, all implemented as same-realm symbol. So technically, we don’t have to do anything. We could make sure the test 262 tests are up to date and V8 will decide if or when to make their implication match the specification. But it seemed a good idea to get affirmative, to reaffirm consensus of that’s what we want. I don’t have any strong opinion whether it’s same or cross-realm. 
There are kind of – I believe there are arguments in favour of either course or against either course. There are tradeoffs no matter which way you go. One of the important reasons to answer the question is that there are – is another proposal later in the agenda to answer if something is a well-known symbol or not, so this is an open question that I would want resolved. There’s the issue that – if it’s a well known symbol, there is cross-realm instrickensic, which complicates things for the lockdown environments. So, yeah. I am sure there are folks on the queue to add context to those sorts of concerns. But my hope is that we end this agenda topic with either yes, it is same realm and we will make sure test matches that or no, change to be cross-realm and then we can discuss it. -BT: Shane. +BT: Shane. -SFC: yes. So I – thanks for bringing this to my attention yesterday during lunch. I am reviewing the specification for this symbol. My understanding is the symbol is used as a way to – for users to detect when fallback has occurred or not, by comparing the results of an operation to this symbol. So it seems to make sense that the intent is basically – it could be a comparison to a string literal. But it uses symbols, so you compare to a symbol, I think it’s odd – I think it would be the expected behaviour to essentially have it be a well known symbol because that’s the expectation of a string. Because if you compare a string to a string, it’s fine. If it cross-realm, they have the same comparison. Having the symbol also be cross-realm makes the most sense intuitively for the use case. I see there’s some more agenda items in the queue. Why don’t we go through those first +SFC: yes. So I – thanks for bringing this to my attention yesterday during lunch. I am reviewing the specification for this symbol. My understanding is the symbol is used as a way to – for users to detect when fallback has occurred or not, by comparing the results of an operation to this symbol. So it seems to make sense that the intent is basically – it could be a comparison to a string literal. But it uses symbols, so you compare to a symbol, I think it’s odd – I think it would be the expected behaviour to essentially have it be a well known symbol because that’s the expectation of a string. Because if you compare a string to a string, it’s fine. If it cross-realm, they have the same comparison. Having the symbol also be cross-realm makes the most sense intuitively for the use case. I see there’s some more agenda items in the queue. Why don’t we go through those first -BT: Dan on this topic? +BT: Dan on this topic? -DE: yeah. To explain the rationale for Intl.FallbackSymfinebol. It’s completely a hack. Ecma402 was designed – Allen, if you’re on the call, can confirm because he was there and I wasn’t . . . or some other people here. When it came out, before ES6 classes were fully defined and it was attempted to be consistent with what we thought classes were going to be. Including what we thought calling a constructor would do. That turned out to not be the case. There were some late changes, making sure the internal slots were available rather than adding internal slots later. Which were good changes. But it meant that some of Ecma 402 semantics made it to initiallyize an existing object as an Intl object. When ES6 came out, there was an attempt to say, we don’t need this anymore. We can use them as, you know, make them be what classes ended up finally. And Rick is raising his hand. +DE: yeah. 
To explain the rationale for Intl.FallbackSymbol: it’s completely a hack. Ecma 402 was designed – Allen, if you’re on the call, can confirm because he was there and I wasn’t . . . or some other people here. It came out before ES6 classes were fully defined, and it was attempted to be consistent with what we thought classes were going to be, including what we thought calling a constructor would do. That turned out to not be the case. There were some late changes, making sure the internal slots were available rather than adding internal slots later. Which were good changes. But it meant that some of the Ecma 402 semantics made it possible to initialize an existing object as an Intl object. When ES6 came out, there was an attempt to say, we don’t need this anymore; we can, you know, make them be what classes ended up being in the end. And Rick is raising his hand. -RW: yeah. I want to confirm your story because I was actually the one that did all this work you’re describing and you’re 100% correct. +RW: yeah. I want to confirm your story because I was actually the one that did all this work you’re describing and you’re 100% correct. -DE: Okay. I did the worse part, trying to implement and ship this or working with Caitlin potter, I think she was the one that implemented it in V8. It wasn’t web compatible because this were libraries using this older pattern for caching. It wasn’t introduced to detect anything in particular. It’s the mechanism. We have symbol, but didn’t add internal slots. We didn’t want to go back to that. +DE: Okay. I did the worst part, trying to implement and ship this, working with Caitlin Potter – I think she was the one that implemented it in V8. It wasn’t web compatible because there were libraries using this older pattern for caching. It wasn’t introduced to detect anything in particular. It’s the mechanism: we have the symbol, but didn’t add internal slots. We didn’t want to go back to that. -RW: I’m sorry. You had to do that. +RW: I’m sorry you had to do that. -DE: it’s okay. I liked the – the effort. +DE: it’s okay. I liked the – the effort. -BT: we have MM on the queue next. +BT: we have MM on the queue next. -MM: there a distinction that seems to be missing from the discussion, which is, well known symbol, and often seems to be taken into the discussion to be synonymous with cross-realm symbol. Registered symbols are cross-realm. It was a mistake that we made that well known symbols were not registered. To have – there’s basically 3 categories of symbol. And there should have been 2. There’s registered symbols, which are cross-realm. Well known symbols, not registered, still cross-realm. And non-registered sim bottoms, which are not is well known and per realm. If we want something to be – if we want to introduce a new cross-realm symbol for some reason, I think any – any new one that we add beyond what is already in the mistaken category of well known symbol should be added as a registered symbol. And the conflict any work that assumes that can discover all of the well known symbols by looking at the – the string named property – string named symbol valued of the constructor, I have written codes several times because I have needed it, I suspect many other people have. So even if I reform all the code that I have written, I think many things out there in the ecosystem that will break that may not be detected for a long time because of the assumption because if it’s – if it’s not registered and cross-realm, then it’s on the symbol constructor. So I just want everyone to keep in mind that there is this third category, with regard to the fall back symbol. I don’t care whether it stays per realm or whether it becomes a registered symbol or whether it becomes simply a string. But I certainly do not want it to become a new well known symbol. +MM: there’s a distinction that seems to be missing from the discussion, which is that “well-known symbol” often seems to be taken in this discussion to be synonymous with “cross-realm symbol”. Registered symbols are cross-realm. It was a mistake that we made that well-known symbols were not registered. There’s basically 3 categories of symbol, and there should have been 2. There’s registered symbols, which are cross-realm. Well-known symbols, not registered, still cross-realm. And non-registered symbols, which are not well known and are per-realm. If we want to introduce a new cross-realm symbol for some reason, I think any new one that we add beyond what is already in the mistaken category of well-known symbols should be added as a registered symbol. And it would conflict with any work that assumes it can discover all of the well-known symbols by looking at the string-named, symbol-valued properties of the Symbol constructor – I have written code like that several times because I have needed it, and I suspect many other people have. So even if I reform all the code that I have written, I think many things out there in the ecosystem will break, and that may not be detected for a long time, because of the assumption that if it’s not registered and cross-realm, then it’s on the Symbol constructor. So I just want everyone to keep in mind that there is this third category. With regard to the fallback symbol, I don’t care whether it stays per-realm, or whether it becomes a registered symbol, or whether it becomes simply a string. But I certainly do not want it to become a new well-known symbol. -BT: thank you, MM. A lot of discussion on the queue and we have 7 minutes left. Dan is next. +BT: thank you, MM. A lot of discussion on the queue and we have 7 minutes left. Dan is next. -DE: so I think being per realm just makes sense for simplicity, I don’t want to get into the issues that mark raised. I think when it was implemented initially, it was in V8 per realm, it might have been part of the conversion, C + + where it happened to be cross-realm. And yeah. I think it would – if V8 finds it high priority to fix, then it seems okay. But I couldn’t make that case for them either. +DE: so I think being per-realm just makes sense for simplicity; I don’t want to get into the issues that Mark raised. I think when it was implemented initially in V8 it was per-realm, and it might have been during the conversion to C++ that it happened to become cross-realm. And yeah, if V8 finds it high priority to fix, then it seems okay. But I couldn’t make that case for them either. -BT: frank has a discussion. +BT: Frank has a discussion. -FYT: yeah. I have the current container for that part of the V8. Whatever we decide, I will try to implement, in V8, as long as we have good enough unit tasks . . . don’t worry about that part. This is some part we didn’t somehow pay attention to in the past. Apologies for that +FYT: yeah. I’m currently maintaining that part of V8. Whatever we decide, I will try to implement in V8, as long as we have good enough unit tests . . . don’t worry about that part.
This is some part we didn’t somehow pay attention to in the past. Apologies for that -BT: all right. Thank you, frank. SFC is next. +BT: all right. Thank you, frank. SFC is next. -SFC: yeah. I have two agenda items. The first is, thank you, DE for your explanation. I noticed in the specification that the fall back symbol is only set in sections that are labeled normative optional. So it’s fairly clear that this symbol is, like – if not deprecated, then legacy. So I think the option that is – I don’t exactly know the implications side, but whichever is least intrusive is the definitely the one we should do. Let’s see. No responses to that, I will go to the next agenda item. I know, for example, RGN has been working on the pull request to clarify what types of things Ecma 402 is able to specify. If registering a new symbol is in the area of things that Ecma 402 was allowed to specify. If 262 has a list of registered symbols, is Ecma allowed to add to that list? Is that legal to do? Or is that not compatible for 262 implementation is 402 adds more symbol +SFC: yeah. I have two agenda items. The first is, thank you, DE for your explanation. I noticed in the specification that the fall back symbol is only set in sections that are labeled normative optional. So it’s fairly clear that this symbol is, like – if not deprecated, then legacy. So I think the option that is – I don’t exactly know the implications side, but whichever is least intrusive is the definitely the one we should do. Let’s see. No responses to that, I will go to the next agenda item. I know, for example, RGN has been working on the pull request to clarify what types of things Ecma 402 is able to specify. If registering a new symbol is in the area of things that Ecma 402 was allowed to specify. If 262 has a list of registered symbols, is Ecma allowed to add to that list? Is that legal to do? Or is that not compatible for 262 implementation is 402 adds more symbol -JHD: from a spec perspective point of view, we don’t have anything that – like, it would be correct for 402, to patch the wellknown symbols table in order to make a well known symbol editorially. There isn’t yet that cares about in the table, but there might be based on current proposals, whether they are allowed to do it or not, I think with consensus, they would be. Whether we would want to give that consensus is obviously a different discussion. And then, yeah. Sorry. I will let MM answer. +JHD: from a spec perspective point of view, we don’t have anything that – like, it would be correct for 402, to patch the wellknown symbols table in order to make a well known symbol editorially. There isn’t yet that cares about in the table, but there might be based on current proposals, whether they are allowed to do it or not, I think with consensus, they would be. Whether we would want to give that consensus is obviously a different discussion. And then, yeah. Sorry. I will let MM answer. -MM: so the – I think the question steps right into this ambiguity, which is the question that you stated was, allowed to register new symbols, which makes me think about registering symbols, but the question is, as I stated verbally, was about well known symbols. I would certainly say that – I would certainly say that creating any well known symbols is something that I would like to resist and that should not be done lightly. It should be brough to the attention of the committee as something to examine whether there’s a good case for a well known symbol. 
With regard to creating new registered symbols, I would – obviously, it’s still a thing that needs consensus and should be examined, but I would be very relaxed about registering new symbols. I wouldn’t have a problem with that. +MM: so the – I think the question steps right into this ambiguity, which is the question that you stated was, allowed to register new symbols, which makes me think about registering symbols, but the question is, as I stated verbally, was about well known symbols. I would certainly say that – I would certainly say that creating any well known symbols is something that I would like to resist and that should not be done lightly. It should be brough to the attention of the committee as something to examine whether there’s a good case for a well known symbol. With regard to creating new registered symbols, I would – obviously, it’s still a thing that needs consensus and should be examined, but I would be very relaxed about registering new symbols. I wouldn’t have a problem with that. -BT: all right. Thank you with that. One more item in the queue, which is Ashley. +BT: all right. Thank you with that. One more item in the queue, which is Ashley. -ACE: so . . . my understanding – when I first saw symbols, my understanding is the great thing about them is they are guaranteed impossibility to clash with anything on the web. You get web capability for for free. My concern and maybe it’s not a real concern and in actually a pragmatic would remind, if we started using register symbols for things in general, fall back symbol is an exception, is that we, then, go back to having web compatibility issues. You take a random example like the double ended iterators. People go, and start using this, you know, symbol dot fall double ended iterator. But then it doesn’t conform to what the proposal ends up being in stage 4. And yes, . It feels a shame we could lose that guaranteed web compatibility things of well known symbols not being registered. +ACE: so . . . my understanding – when I first saw symbols, my understanding is the great thing about them is they are guaranteed impossibility to clash with anything on the web. You get web capability for for free. My concern and maybe it’s not a real concern and in actually a pragmatic would remind, if we started using register symbols for things in general, fall back symbol is an exception, is that we, then, go back to having web compatibility issues. You take a random example like the double ended iterators. People go, and start using this, you know, symbol dot fall double ended iterator. But then it doesn’t conform to what the proposal ends up being in stage 4. And yes, . It feels a shame we could lose that guaranteed web compatibility things of well known symbols not being registered. -MM: I think that’s a good point. I think it’s – I think that it certainly a true, in theory, as you stated, I am not sure it is realistically. The flip side of it is that introducing new well known symbols, where there’s a shared cross-realm identity that is not discoverible is something that has some very severe problems. So these two – so I think we now have the pros and cons for both sides of this dilemma. And, therefore, each case should be talked about. It’s always the case as mentioned with the fall back symbol, a registered symbol and just using a string, logically, are not much different from each other. So any time it might be a registered symbol, we should also put on the table, as a possibility, should it just be a string? 
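As an editorial aside on MM’s point, a small sketch (not from the meeting itself) of the three categories being distinguished – registered symbols, well-known symbols, and ordinary unique symbols – including the discovery pattern MM describes of reading the symbol-valued properties off the Symbol constructor:

```js
// 1. Registered symbols: created with Symbol.for, shared across realms via
//    the global symbol registry, and discoverable with Symbol.keyFor.
const registered = Symbol.for("example.fallback");
console.log(Symbol.keyFor(registered)); // "example.fallback"

// 2. Well-known symbols: not in the registry (Symbol.keyFor gives undefined),
//    yet still shared across realms; they can be enumerated as the
//    symbol-valued properties of the Symbol constructor itself.
const wellKnown = Object.getOwnPropertyNames(Symbol)
  .map((name) => Symbol[name])
  .filter((value) => typeof value === "symbol");
console.log(wellKnown.includes(Symbol.iterator)); // true
console.log(Symbol.keyFor(Symbol.iterator)); // undefined

// 3. Ordinary unique symbols: neither registered nor well known, and
//    therefore per-realm – which is how the spec currently describes the
//    Intl fallback symbol.
const unique = Symbol("IntlFallback");
console.log(Symbol.keyFor(unique)); // undefined
```

A new cross-realm symbol added through the registry would be discoverable via `Symbol.keyFor`, whereas a new well-known symbol that is not exposed on the Symbol constructor would show up in neither check – which is the breakage MM is concerned about.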
+MM: I think that’s a good point. I think it’s – I think that it certainly a true, in theory, as you stated, I am not sure it is realistically. The flip side of it is that introducing new well known symbols, where there’s a shared cross-realm identity that is not discoverible is something that has some very severe problems. So these two – so I think we now have the pros and cons for both sides of this dilemma. And, therefore, each case should be talked about. It’s always the case as mentioned with the fall back symbol, a registered symbol and just using a string, logically, are not much different from each other. So any time it might be a registered symbol, we should also put on the table, as a possibility, should it just be a string? -BT: all right. SFC . . . we are slightly over, SFC. If you can talk in less than 30 seconds or so +BT: all right. SFC . . . we are slightly over, SFC. If you can talk in less than 30 seconds or so -SFC: yeah. My comment is, I don’t know if it was clear what the web reality is here, but I think we should just do what the web reality is. It doesn’t seem like something spending a lot of time on. +SFC: yeah. My comment is, I don’t know if it was clear what the web reality is here, but I think we should just do what the web reality is. It doesn’t seem like something spending a lot of time on. -JHD: right. I mean, web reality in terms of number of implementations would be it’s the same realm, unique symbol or or pro realm unique symbol. Which is what the spec currently says. We can go for that, unless someone has a desire for change. And I am not hearing that. Cool. Thank you +JHD: right. I mean, web reality in terms of number of implementations would be it’s the same realm, unique symbol or or pro realm unique symbol. Which is what the spec currently says. We can go for that, unless someone has a desire for change. And I am not hearing that. Cool. Thank you SFC: Does that mean we are keeping as single realm @@ -652,7 +651,7 @@ Presenter: Frank Yung-Fong Tang (FYT) - [PR100](https://github.com/tc39/proposal-intl-numberformat-v3/pull/100) - [slides](https://docs.google.com/presentation/d/1UUvbf3FFu9PGtrPAKPdMad9DZuVFLIvkAsAxyJZyvxM/edit) -FYT: hi everyone. I'm so sorry. I'm too lazy to drive to San Francisco. I’m FYTI work on the V8's internationalization team, with SYG and SFC. And today, I want to talk about issued and mainly I put here because the number format is in stage 3. But you also do address this, you should have another half. Have the touch actually is an ecma402 date-time format. So we found that that to thing probably would need to talk together, so why put it in together the thing? Because it's stage 3 already and SFC will to talk to you about other issue, about the V3 proposal. and number format. But the thing I think we should, we believe we should put together in talk one thing together. +FYT: hi everyone. I'm so sorry. I'm too lazy to drive to San Francisco. I’m FYTI work on the V8's internationalization team, with SYG and SFC. And today, I want to talk about issued and mainly I put here because the number format is in stage 3. But you also do address this, you should have another half. Have the touch actually is an ecma402 date-time format. So we found that that to thing probably would need to talk together, so why put it in together the thing? Because it's stage 3 already and SFC will to talk to you about other issue, about the V3 proposal. and number format. 
But the thing I think we should, we believe we should put together in talk one thing together. FYT: So what is the issue? The issue is they're currently in the Intl data format and the newly for an Intl number format V3, which is modeled after the date format, have the form that range method or formatRange two parts for different kind of output, which - that method takes two argument and supposedly to format a range of either type. I ordered a range of number in particular for the number format, be three. The number could be also represented by stringFormat and what happened right now is whenever we did originally did the date-time Format thing. We have a range check to say well if the first argument is larger than second one, they throw that range error. And at that time, we think probably. That's right thing to do and whenever we try to apply for number formatter and we figured out there are certain thing. It didn't really make much sense and we in particular I think with their soyuz case with thing will be blocked by this kind of constraint. So the status quo is that how if the x is greater than y, we will throw rangeerror. and we are proposing for both in the number Three. And also, I can afford to problem with the 402 pr normative, of course, to change it to no rain through and just return a string with the formatted value. And so, so basically there's a request to change that. @@ -725,7 +724,7 @@ PFC: (Slide 10) Now, we get into a real edge case. We want to put a limitation o PFC: (Slide 11) Another one, PR [#2267](https://github.com/tc39/proposal-temporal/pull/2267), we want to avoid repeated observable Get operations on the same property. This doesn't really change how Temporal works unless you are observing the property Get operations on your calendar object. So this is technically a normative change, but it doesn't change much, just a small optimization. -PFC: (Slide 12) In the same vein, PR [#2269](https://github.com/tc39/proposal-temporal/pull/2269), we want to avoid an unnecessary call to a `toString()` method where if you have a Temporal.ZonedDateTime object, you want to convert it to the string. But you use an option which means that you don't want to include the calendar name annotation, then there is no need to call `toString()` on the calendar object. So, we can avoid doing that. +PFC: (Slide 12) In the same vein, PR [#2269](https://github.com/tc39/proposal-temporal/pull/2269), we want to avoid an unnecessary call to a `toString()` method where if you have a Temporal.ZonedDateTime object, you want to convert it to the string. But you use an option which means that you don't want to include the calendar name annotation, then there is no need to call `toString()` on the calendar object. So, we can avoid doing that. PFC: (Slide 13) Next up, there are three pull requests ([#2284](https://github.com/tc39/proposal-temporal/pull/2284), [#2287](https://github.com/tc39/proposal-temporal/pull/2287), [#2345](https://github.com/tc39/proposal-temporal/pull/2345)) to clarify or fix edge cases in the grammar of ISO 8601 strings. A couple of places where we need to disambiguate things. In, I think it was in June or April, I can't remember which one, I had a another PR that did a bunch of disambiguation on PlainYearMonth versus PlainTime strings and these are a couple of stragglers left over from that. 
We also have one to accept a calendar annotation, in a string, that's being parsed as a Temporal.Instant, which previously was not accepted, but for consistency with the rest of the proposal should be accepted and ignored. Basically all the other ones are stragglers or mistakes in the grammar, disambiguation time strings from year-month strings. @@ -763,7 +762,7 @@ FYT: So first, I thank all the champions to address a lot that you should I eras PFC: Okay. The week of year, that's on my radar. I'm expecting that it doesn't need to be a normative change because we can just provide a reference to somewhere else where it is specified. And the precision issue, I think we need to find out if that all that needs is a clarification in the spec, in which case, it wouldn't be a normative change, that we need to present or if we need to change something. Thanks for mentioning those. Those are both on the road map. -FYT: I think my key point is just thinking I want to bring the reality to the rest of TC39. The champions did great work. I think the implementations are getting closer and closer but there are still a lot of things that kind of helped to shape out all the detail. I think André probably shared some of the same issues. I mean, I basically wrote every single line of the implementation in V8 myself. And there are issues, which is still very confusing but those are probably will be in, doesn't mean you to be clarified by the spec It just the organization wise and there's a lot of changing from the ground up, but the points that I think all the information really try very hard to meet that, but it's still a lot of work, at least in implementation. So you probably will see some requests for us to address. Of the car at cases. Yeah, but great work. We really, really appreciate your work. +FYT: I think my key point is just thinking I want to bring the reality to the rest of TC39. The champions did great work. I think the implementations are getting closer and closer but there are still a lot of things that kind of helped to shape out all the detail. I think André probably shared some of the same issues. I mean, I basically wrote every single line of the implementation in V8 myself. And there are issues, which is still very confusing but those are probably will be in, doesn't mean you to be clarified by the spec It just the organization wise and there's a lot of changing from the ground up, but the points that I think all the information really try very hard to meet that, but it's still a lot of work, at least in implementation. So you probably will see some requests for us to address. Of the car at cases. Yeah, but great work. We really, really appreciate your work. RKG: Yeah. I don't think that's really a negative point, the back and forth between implementations and spec has been very good and fruitful and active. @@ -821,7 +820,7 @@ Consensus on the following normative changes: ## NumberFormat V3 update -Presenter: Shane F.Carr (SFC) +Presenter: Shane F.Carr (SFC) - [proposal](https://github.com/tc39/proposal-intl-numberformat-v3) - [slides](https://docs.google.com/presentation/d/1C2FiBTcDBKOGORONHI6lV_rMWge3RAokb1kSlfw8igE/edit#slide=id.p) @@ -844,13 +843,13 @@ SFC: Rounding priority details. (#8) There's no proposed changes right now, alth SFC: Interpret strings as decimals, (#334) this is the part of the proposal where currently, if you have a string that has a lot of digits in its right now, now, the number get the string gets interpreted as a number which means that you lose Precision. 
This is quite important for cases such as currency formatting where currencies are often stored and very small units. But then if you have to display a large amount of them like, for example, if you are storing your numbers in 10^-6 units of currency but then you have to display a 10^9 amount of currency, you're going to run into this issue quite easily. the way that we solve this problem is by introducing a new type of the specification called the intl mathematical value. And basically allowing the string to carry its full precision throughout the number formatting stack. This turns out to have been a more complicated part on the implementations than I had anticipated. And part of this is resolved by removing the range check as Frank already Illustrated in his presentation, This afternoon, I do want to highlight that there is a pull request open for review. I believe it's a waiting on KG as well as others to finalize that review, I would really appreciate if we could get that PR approved and merged because I think it's been, you know, it's one of the last big changes and it's been in the way of some of the implementations finalizing and landing and shipping. -SFC: Rounding modes, (#419) nothing's changed here. I think people are happy with what we propose. We spent quite a bit of time before stage 3, debating these names and what they are what the behaviors are. So no changes here. +SFC: Rounding modes, (#419) nothing's changed here. I think people are happy with what we propose. We spent quite a bit of time before stage 3, debating these names and what they are what the behaviors are. So no changes here. SFC: Sign display (#17) - no changes here. This one is a fairly small uncontroversial part of the proposal, so, no changes here. SFC: And there are a few remaining open issues (#63, #96, #98) on the proposal repository. There's several editorial and documentation ones, which I'm not going to present here, but there are a few normative ones, or ones are potentially normative. They want to highlight here along with the proposed path on these. The first is to add source to the format to parts output. I think I mentioned this a little bit earlier. There's a bug in the specification that was found both by anba and then again by FYT separately, when they were implementing this proposal where the source field on the format, two parts is not being set up correctly, so that should be fixed. -SFC: #96, improve algorithm for resolving. Minimum Maximum Traction, digits, and rounding priority. This is a suggestion made by PFC which makes a lot of sense to we debated we discussed it in the TG2 meeting last month. And there's General agreement. During the group that PFC change proposed changes a positive, change the exhibit. the exact specification for that needs to be written and tested and that's an action item on myself that's coming up to tweak that part of the algorithm to be clear. That's only affects cases where minimum and maximum fraction to the star difference when rounding priority is applied. So it's a bit of an edge case, but it will make things more understandable and you're interested in the details, you can click Issue, #96 or I can also show it on the screen. Many people have questions about it, you can enter the queue. If you have any questions about any of these things, then we can dive into more details. +SFC: #96, improve algorithm for resolving. Minimum Maximum Traction, digits, and rounding priority. 
This is a suggestion made by PFC which makes a lot of sense to we debated we discussed it in the TG2 meeting last month. And there's General agreement. During the group that PFC change proposed changes a positive, change the exhibit. the exact specification for that needs to be written and tested and that's an action item on myself that's coming up to tweak that part of the algorithm to be clear. That's only affects cases where minimum and maximum fraction to the star difference when rounding priority is applied. So it's a bit of an edge case, but it will make things more understandable and you're interested in the details, you can click Issue, #96 or I can also show it on the screen. Many people have questions about it, you can enter the queue. If you have any questions about any of these things, then we can dive into more details. SFC: The third open issue is limit the the exponent and range implementing medical values. This is another concern that was brought up by implementers, so part of the idea of using intl mathematical values so that we can format things that are beyond the range of a double. However, we probably don't need to support formatting numbers that have 1 million digits in them, and by not having a limit, it does introduce some complications in implementations. So the proposal here is to set same limits, I believe we currently have discussed ten thousands, digits and 10,000 possible maximum minimum exponent, which is a limit where we were inspired by Temporal and new Date. The these limits already exists in elsewhere in 262. So we're not inventing them Completely new. Temporal for example has limits on the size of dates and that are related to 10,000 years or something like that. So, We're proposing to use those same limits, the exact details of that you can see in that issue 98 and how we arrived at those limits. And the pull request will be coming soon. @@ -876,7 +875,7 @@ KG: What I'm saying is that I am fine with having “min2” to have different b JHD: I mean, It seems like if you make any tooth E string except for the known values, the auto that means you can never again a node value web compatibly because someone will probably depend on meaning Auto, so, the only way you can ever add a new value is by explicitly, rejecting all unknown values. So that you can remove that exception later, one of the time, -KG: like I said, you said this came up but you had originally rejected unknown things, but it turned out people were passing the string "true". I think having the string "true" be an alias for auto, no problem with that, just not any truthy string. +KG: like I said, you said this came up but you had originally rejected unknown things, but it turned out people were passing the string "true". I think having the string "true" be an alias for auto, no problem with that, just not any truthy string. SFC: Moment, take that feedback. They can see again. Just make sure we're clear here. The status quo, is that any strings are interpreted in this way. So we already are doing the change where we are taking a string that currently means Auto and in effect mapping it to mean something else. So did this proposal already does that? So I do not necessarily know if I agree with JHD statement that well we can't do that again because we're doing it right now. That's exactly what the proposal is doing. I mean, it's already in stage 3, I'm saying that if two years from now you wanted to add min3, I suspect it would work in a similar way and be acceptable. 
This has been the state since 2012. @@ -904,7 +903,7 @@ SFC: I'm going to go ahead and just react to that one more time, which is I'm pr RPR: to remind that this 3 minutes, 3 minutes left on the time box. you don't have any queue items. -SFC: Let's just keep chatting about this for the next three minutes and hopefully get a conclusion. +SFC: Let's just keep chatting about this for the next three minutes and hopefully get a conclusion. KG: I acknowledge that the thing I am proposing is more of a risk of breaking code than just mapping everything to auto but I don't think just mapping everything to auto is reasonable behavior. I am not attached to this solution I proposed, I'm fine exploring other solutions, but just picking out specific strings and saying these are given specific behavior and every other string is auto, that is just not a reasonable API to work with. @@ -916,7 +915,7 @@ SFC: Okay, That's valid. I propose that we take this discussion offline and cont RPR: FYT is on the queue with resolved options. -FYT: It's not true. It will that passing min3 can't be distinguished from `min100`, because the result of `resolvedOptions` will have `auto`. +FYT: It's not true. It will that passing min3 can't be distinguished from `min100`, because the result of `resolvedOptions` will have `auto`. KG: It's not that you can't distinguish it. It's that you have to go out of your way to distinguish it. But if you just try to use it, it will look like it worked. @@ -928,7 +927,6 @@ SFC: Thank you very much, everyone. Thank you, ### Conclusion/Decision - ## Set Methods: how to access properties of the argument Presenter: Kevin Gibbons (KG) @@ -936,9 +934,9 @@ Presenter: Kevin Gibbons (KG) - [proposal](https://github.com/tc39/proposal-set-methods) - [slides](https://docs.google.com/presentation/d/19nCrwU5RkbIafW9zRDVDbGbPsiq7ct1IovsJLTU7p8Q/edit) -KG: Continuing my quest to work out all of those little details adding `Set.prototype.union` and friends, I have another question for the committee today. So just to recap, previously, we decided that methods like `Set.prototype.intersection` should use the internal slots on `this` but not on the arguments, so they should use the public API on the argument. So that, for example, you can pass a proxy for a Set as an argument to set.intersection and have that work. Or you can pass a user-defined set-like thing. But we didn't specify exactly what that public API should be. And in particular, should it be string based? Should it be accessing the regular `.has` method that a user would call, or should it be symbol based? And just to set the stage here, I think the most interesting example of one of these APIs is set dot intersection. Here I have written out an algorithm for intersection. The fact which is interesting about intersection is that it needs to, depending on which of the receiver or the argument is larger, you are either going to iterate the receiver and check membership in the argument, or you are going to iterate the argument and check membership in the receiver. If you just do one, you get worse big-O behavior. So you do really want to switch on which is larger. But in any case, you can end up accessing both the iterator and has or some equivalent of has; you need some way of checking membership in the arguments. So the obvious thing to do is to just use strings, to just call .has, but there's a problem: map has .has. `Map.prototype.has` checks for membership in the map. But that's not what Symbol.iterator does on maps. 
Symbol.iterator iterates the map, giving you entries. So in my example, where the algorithm used switches on the size of the argument, map would work as an argument to intersection only sometimes. It only works when map.size is larger than receiver.size. Because in that case you would be using the has method on map which works as if the map is a set as opposed to using the symbol dot iterator method on maps, which does not work as if the map is a set. I think that's bad. I would not like to have an API where if you pass a map, it works sometimes. +KG: Continuing my quest to work out all of those little details adding `Set.prototype.union` and friends, I have another question for the committee today. So just to recap, previously, we decided that methods like `Set.prototype.intersection` should use the internal slots on `this` but not on the arguments, so they should use the public API on the argument. So that, for example, you can pass a proxy for a Set as an argument to set.intersection and have that work. Or you can pass a user-defined set-like thing. But we didn't specify exactly what that public API should be. And in particular, should it be string based? Should it be accessing the regular `.has` method that a user would call, or should it be symbol based? And just to set the stage here, I think the most interesting example of one of these APIs is set dot intersection. Here I have written out an algorithm for intersection. The fact which is interesting about intersection is that it needs to, depending on which of the receiver or the argument is larger, you are either going to iterate the receiver and check membership in the argument, or you are going to iterate the argument and check membership in the receiver. If you just do one, you get worse big-O behavior. So you do really want to switch on which is larger. But in any case, you can end up accessing both the iterator and has or some equivalent of has; you need some way of checking membership in the arguments. So the obvious thing to do is to just use strings, to just call .has, but there's a problem: map has .has. `Map.prototype.has` checks for membership in the map. But that's not what Symbol.iterator does on maps. Symbol.iterator iterates the map, giving you entries. So in my example, where the algorithm used switches on the size of the argument, map would work as an argument to intersection only sometimes. It only works when map.size is larger than receiver.size. Because in that case you would be using the has method on map which works as if the map is a set as opposed to using the symbol dot iterator method on maps, which does not work as if the map is a set. I think that's bad. I would not like to have an API where if you pass a map, it works sometimes. -KG: So more generally. Do we want to accept things which happen to implement has, or do we want the arguments to set prototype intersection to be treated as sets only if they specifically opt in to being treated like sets, for example, by having a symbol which indicates that this is how you query set membership? And if you implement this symbol, then you are promising that the implementation for symbol.iterator is consistent with the implementation for Symbol.SetHas and perhaps the string name size or perhaps you have a new SetSize symbol, something like that. And you would only attempt to call this method; you would never call the string named `has`, method and the map of course would not implement this symbol. 
And in Set this symbol would just be an alias for the string named method `has`. If you have a set-like it could be whatever you want. I like this behavior personally but it would be a new thing, a new I suppose well-known symbol. Yeah. That's the question I would like to put before you today. And then if we do decide on symbols there's follow-up questions, but just this bit to start. +KG: So more generally. Do we want to accept things which happen to implement has, or do we want the arguments to set prototype intersection to be treated as sets only if they specifically opt in to being treated like sets, for example, by having a symbol which indicates that this is how you query set membership? And if you implement this symbol, then you are promising that the implementation for symbol.iterator is consistent with the implementation for Symbol.SetHas and perhaps the string name size or perhaps you have a new SetSize symbol, something like that. And you would only attempt to call this method; you would never call the string named `has`, method and the map of course would not implement this symbol. And in Set this symbol would just be an alias for the string named method `has`. If you have a set-like it could be whatever you want. I like this behavior personally but it would be a new thing, a new I suppose well-known symbol. Yeah. That's the question I would like to put before you today. And then if we do decide on symbols there's follow-up questions, but just this bit to start. MM: First of all understand the example, what point was being made the if you if instead of asking the Opera House, Iterative, if you instead it just skipped got puppies, that are rare. Then the the map will be set like in both of its has behavior and it's got keys Behavior. @@ -1010,7 +1008,7 @@ KG: So, it is true that a multiset is not set-like for the purposes of symmetric WH: You would get the incorrect result if you just run the algorithm — the difference won’t be symmetric. If the multiset is on the right side and contains an element twice, it will toggle each time it sees it while iterating through it. -KG: Depends on how you write that algorithm. But yes, I agree that that is potentially possible. I think that's kind of out of scope for the question I am asking here because that would be a problem equally with a symbol-based protocol or a string-based protocol. +KG: Depends on how you write that algorithm. But yes, I agree that that is potentially possible. I think that's kind of out of scope for the question I am asking here because that would be a problem equally with a symbol-based protocol or a string-based protocol. WH: It's very relevant to the notion of what it means to be set-like. @@ -1020,7 +1018,7 @@ WH: I recall that it was asserted earlier in today’s discussion that the resol KG: I don't think we require a complete resolution to the question of what is set-like in order to make progress on the specific question that I'm asking here. -ACE: So this won't be news to KG. As we've already talked about this. I thought I'd bring it up. up. I guess it's a slight move on from over is purely string or symbol. But then what what That symbol exactly means. guess, one more how many symbols were talking about So, you could have. this one method is a symbol, so then you've got set has symbol.iterator symbol, and then size string, or they could all three be symbols or strings. 
Or we have one symbol or one string which returns to you an object that then conforms the protocol akin to when you call symbol.iterator, the protocol isn't just so that the iterable protocol isn't just having - to returns something with a next method. So you could imagine Set having a symbol that is set like symbol dot set like bikeshed, name pending and then the expectation of that. Is it returns you something that has a has iterator and a size and then, for set, it could optimize not returning an object by just returning this, which conforms to accept its own protocol, that the benefit then being You can Implement multiple protocols that all want to use, Symbol.iterator, but for different meanings, maybe one iterators returning, pairs one, iterator goes backwards. Like you can have of protocols that all are all trying to grab it through the iterator symbol. But you'd only be able to implement one of the protocols. So in this case, you know, you could say, I'm going to be set. So I'm I'm going to add Set has and that means my symbol do iterator returns values. But in our means, you can't conform to any other. Protocols are also wants to use iterator, +ACE: So this won't be news to KG. As we've already talked about this. I thought I'd bring it up. up. I guess it's a slight move on from over is purely string or symbol. But then what what That symbol exactly means. guess, one more how many symbols were talking about So, you could have. this one method is a symbol, so then you've got set has symbol.iterator symbol, and then size string, or they could all three be symbols or strings. Or we have one symbol or one string which returns to you an object that then conforms the protocol akin to when you call symbol.iterator, the protocol isn't just so that the iterable protocol isn't just having - to returns something with a next method. So you could imagine Set having a symbol that is set like symbol dot set like bikeshed, name pending and then the expectation of that. Is it returns you something that has a has iterator and a size and then, for set, it could optimize not returning an object by just returning this, which conforms to accept its own protocol, that the benefit then being You can Implement multiple protocols that all want to use, Symbol.iterator, but for different meanings, maybe one iterators returning, pairs one, iterator goes backwards. Like you can have of protocols that all are all trying to grab it through the iterator symbol. But you'd only be able to implement one of the protocols. So in this case, you know, you could say, I'm going to be set. So I'm I'm going to add Set has and that means my symbol do iterator returns values. But in our means, you can't conform to any other. Protocols are also wants to use iterator, KG: So I think if we think that this is something which is likely to come up, that people might want to use one thing to be iterable in different ways depending on which protocol they are trying to implement, I think the easiest way to resolve that would be to have a symbol.setIterator or something like that, rather than having an additional object that gets returned in the middle. I think the additional object doesn't add much benefit, because unlike for iterators - iterators need to carry state, but this would not need to carry state. So I think I would prefer just having an additional symbol on this rather than getting an object and then looking at methods on that. Symbols are not expensive. That namespace is not contested. But that's a personal preference. 
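A minimal sketch of the opt-in protocol KG describes above, assuming hypothetical well-known symbols `Symbol.setHas`, `Symbol.setSize`, and `Symbol.setIterator` (placeholder names from the discussion, not settled API). Ordinary symbols stand in for the well-known ones so the sketch runs today:

```js
// Placeholder symbols standing in for the proposed well-known symbols.
const setHas = Symbol('Symbol.setHas');
const setSize = Symbol('Symbol.setSize');
const setIterator = Symbol('Symbol.setIterator');

// A user-defined set-like opts in explicitly by implementing the protocol.
const evensUnderTen = {
  [setHas](v) { return Number.isInteger(v) && v >= 0 && v < 10 && v % 2 === 0; },
  get [setSize]() { return 5; },
  *[setIterator]() { yield* [0, 2, 4, 6, 8]; },
};

// Rough shape of the size-switching intersection from the slides.
function intersection(receiver, other) {
  const result = new Set();
  if (receiver.size <= other[setSize]) {
    // Iterate the smaller receiver and probe the argument's membership method.
    for (const x of receiver) {
      if (other[setHas](x)) result.add(x);
    }
  } else {
    // Iterate the argument via its set iterator and probe the receiver.
    for (const x of other[setIterator]()) {
      if (receiver.has(x)) result.add(x);
    }
  }
  return result;
}

intersection(new Set([1, 2, 3, 4]), evensUnderTen); // Set { 2, 4 }
```

Under this shape a `Map`, which would implement none of the set symbols, is rejected up front rather than working only when it happens to be the larger operand.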
@@ -1034,7 +1032,7 @@ RPR: DRO, you are still too quiet to hear. Maybe you could put your comments in GCL: Hello. so I'm sort of curious like from a more, I guess sort of like, flipping the question when something is a very obviously, not a set like it's an empty object. What would we consider like the point at, which we decide that this object is not useful? I don't like there's, I don't think we have an answer for that right now because we're sort of discussing whether a protocol should exist, but I'm just sort of curious like for example, in this, in this example, code on the slide here, right? It's doing, you know, bound function create, our got has, and if that's undefined that's like a spec assertion problem, because bound function create so, like, within the context of, like, whatever this algorithm is some point, There's like, I guess I'm trying to get to the question of like something that incidentally looks like a set for example, map versus something that is very obviously not a set. There's the there's the path we take where we say something has to very gbesi by like some sort of simple branding or something, but I'm just sort of curious of curious if this is a useful way to look at it. Just to sort of understand maybe what kind of check, we're actually trying to perform here. -KG: So I think we get whatever things we need to get and we check that they have the types that we expect them to have and throw if we were not able to get the things that we needed, and then we just proceed to blithely use them. And not like - like we're not going to check that the result of our `has` is a Boolean, that sort of thing. So yeah. And I would expect it to be like that. You get everything up front, if anything wasn't a function or whatever you throw and then you just proceed. +KG: So I think we get whatever things we need to get and we check that they have the types that we expect them to have and throw if we were not able to get the things that we needed, and then we just proceed to blithely use them. And not like - like we're not going to check that the result of our `has` is a Boolean, that sort of thing. So yeah. And I would expect it to be like that. You get everything up front, if anything wasn't a function or whatever you throw and then you just proceed. SFC: Yeah, so I just wanted to raise the the idea that maybe we should be thinking about these function names as or these protocols as essentially being part of global namespace. Like, when we give the way we need, when we name functions, we should be thinking about how functions with the same names this behave in other, you know, in built-in objects. and if we think about the string names of these functions as having, you know, know, consistent Behavior doesn't need to necessarily be something that's formal, I think it could help resolve some of these issues like the one that's that KG is showing here where like dot has method is a different Behavior because it has different semantics into places of we have to have a function with that name concretely. I think one thing we may want to consider which I'd I didn't directly see in the presentation is that we actually use a new string method called set has. 
I think the presentation suggest we added a string method with that name will not add a string method with that name that maps to a certain Behavior made duplicate the behavior of other functions, but then also becomes that, you know, Global namespace, and then we use strings because we I do want to echo, PFC’s point from earlier that we discussed this quite at length, with Temporal and decided to use string functions. So I'm kind of hesitant with the idea of basically saying that we should use simple functions here. Like I think that's a much bigger discussion, and I don't know if we have consensus on that, but if we're already using string, function names basically everywhere. Why not just continue to use them here as well? @@ -1044,7 +1042,7 @@ SFC: yeah, I guess my response to that is I don't necessarily think that's neces MM: Okay, I'll make it very quick. I appreciate the question. I can see both sides of it. You asked for a preference. I'm going to express a preference but I'm still open to arguments on the other side. My preference is very much along the lines of what SFC was talking about because of the nature of JavaScript as language, JavaScript like small talk like python is loose polymorphism. and we've been treating it that way in design and the example that you're examining to to provoke our intuition, Perfect example. why does set have keys and values and entries? It's exactly because we had in mind some abstract polymorphism between set and map, so they could be used in similar consumption contexts. -RBN: I brought this up in the delegates chat. That is there a reason that we couldn't depend on a more generic implementation of intersection that uses set has and something like either entries or Keys as opposed to trying to grab the iterator directly off of that because you mentioned intersecting with a map with sometimes work. Actually, I don't think would ever work because the things you iterate on the map would be the entries, because that's the default iterator for a map and a map you've never have those. The entry as a key. So you can theoretically supported a few actually used the string Methods, that that matched. +RBN: I brought this up in the delegates chat. That is there a reason that we couldn't depend on a more generic implementation of intersection that uses set has and something like either entries or Keys as opposed to trying to grab the iterator directly off of that because you mentioned intersecting with a map with sometimes work. Actually, I don't think would ever work because the things you iterate on the map would be the entries, because that's the default iterator for a map and a map you've never have those. The entry as a key. So you can theoretically supported a few actually used the string Methods, that that matched. KG: So the reason that it works sometimes with a Map is that this algorithm only iterates the Map sometimes. Sometimes it calls `has` on the map. And yes it is true that in the case where you actually iterate the map, that you would always get the empty set as the intersection. But the algorithm doesn't always iterate the map. @@ -1066,7 +1064,7 @@ Presenter: Kevin Gibbons (KG) - [proposal](https://github.com/tc39/proposal-duplicate-named-capturing-groups) -KG: Okay, this will hopefully have significantly less discussion necessary than the previous item. So I presented this at the last meeting for stage 2. 
Since then I got some useful feedback - or I suppose during the meeting - I got some useful feedback that as specified the iteration order of properties was inconsistent. I'll get to that later. But there's only been that one small change. So I'm here today asking for stage 3. +KG: Okay, this will hopefully have significantly less discussion necessary than the previous item. So I presented this at the last meeting for stage 2. Since then I got some useful feedback - or I suppose during the meeting - I got some useful feedback that as specified the iteration order of properties was inconsistent. I'll get to that later. But there's only been that one small change. So I'm here today asking for stage 3. KG: So the proposal is, in regular expressions, you can have these named capture groups, which are very helpful. You don't have to remember exactly the order of all of your capture groups and use \1 and so on, you can just give them useful names. The problem is, or at least a problem is, that sometimes you have multiple different ways of writing things, where you don't actually care which way was written. So for example, you might have year dash month or month dash year, as in this example. And you just want to extract the year from this. But right now we have a restriction that says you can only use a given capturing group name once in a regular expression, so the names must be globally unique. So this currently is a syntax error. The proposal is that we relax this requirement, so that if the capturing group is reused across alternatives - so, where you have a pipe between the two parts that have the named capture group - the capture group names can be reused. It continues to be an error to reuse a name within an alternative, but if they are in separate alternatives, then it is impossible for them to both participate in the match. Modulo some details about repetition. @@ -1107,7 +1105,7 @@ Presenter: Robin Riccard (RRD) Jordan Harband (JHD) RRD: Okay. So this is about symbol predicates. We're going to show what we are introducing just after, and we're going to present it for stage one and maybe stage two. So the main motivation dates back from our last presentation with record and tuple champion group for symbols as weakmap keys thaat we presented the last meeting. So when we went for stage three with symbols as weakmap keys. We discussed the possibility of a new proposal. That was symbol predicates. And the goal of this was mostly to able to extract a way to find out what would be or not be a valid WeakMap Keys. So the main thing that we needed at this is being able to Define if symbol would be registered or being well-known. -JHD: Right. Yeah, so it's relatively straightforward, it's two predicates, it uses the thisSymbolValue abstract operation to figure out if a symbol or not and returns true or false. That's all it does. One of the nice things that this will provide is the - so for isRegistered. You can do this in user land with you know typeof symbol.keyFor `=== 'string' and wrapping that in a try-catch for when you pass a symbol in there, but this is much more straightforward and ergonomic. Then for the is WellKnown, the only way to do that is to enumerate all of the existing well known symbols and cache them during a time when you know that the realm is is not messed with, and then, at any future time, you can check that list. So this is also much more ergonomic to do. +JHD: Right. 
Yeah, so it's relatively straightforward, it's two predicates, it uses the thisSymbolValue abstract operation to figure out if a symbol or not and returns true or false. That's all it does. One of the nice things that this will provide is the - so for isRegistered. You can do this in user land with you know typeof symbol.keyFor `=== 'string' and wrapping that in a try-catch for when you pass a symbol in there, but this is much more straightforward and ergonomic. Then for the is WellKnown, the only way to do that is to enumerate all of the existing well known symbols and cache them during a time when you know that the realm is is not messed with, and then, at any future time, you can check that list. So this is also much more ergonomic to do. RRD: That's it so that we already answered earlier. So, Yes, into phobic symbol is not going to be a volunteer program. So I'm looking at the queue seems empty, we're then going to ask for a consensus for stage 1 @@ -1129,7 +1127,7 @@ SFC: We can keep going through the queue. I added another item later. MM: Okay, so I'm perfectly fine with this going to stage one where we're investigating the question. I have quibble with this going to stage two, aside from the issue of naming and categorization, just in terms of the logic of the API. A question that has repeatedly come up is whether to relax the existing spec invariant that the well-known known symbols are exactly the symbol valued string static properties of the symbol Constructor. The reason why that invariant is currently useful and used if they're what if we introduce the API such that I can get the same utility from a more principled API that would be interesting. The utility that I need is to know for a given well-known symbol, let's say you're writing a remote object protocol between to address spaces and a well-known symbol hits the boundary. You'd like to know not just is it a well-known symbol but which well-known symbol it is, so that you could then serialize the identification of which one it is to the other side. So the other side can then use this API to look up the same, well-known symbol, and this probably why stresses that the committee made a huge mistake in not having the well known symbols beer. Registered because of all the well-known symbols were simply registered. Then the inter inter the remote object treatment of registered symbols were just apply without needing any more cases. -JHD: To answer your question, Mark. I think that the - during the presentations about the get intrinsics proposal, we've discussed extending it before I come with a enumeration. in other words with some mechanisms so that you could get all the intrinsics. At which point you could filter with these well-known predicate and get the list, you want regardless of where they live and it is absolutely something I was thinking about that, perhaps one the iteration forms would give you a way to decide, which one it was in addition to the ability that you could just look at the dot description on it, but like, that's something I'd love to love to discuss separately. But I think that I would like to relax the requirement that they be on symbol for other protocol proposals, and I think that that would be a path to it, but I think that is orthogonal to this proposal. +JHD: To answer your question, Mark. I think that the - during the presentations about the get intrinsics proposal, we've discussed extending it before I come with a enumeration. 
in other words with some mechanisms so that you could get all the intrinsics. At which point you could filter with these well-known predicate and get the list, you want regardless of where they live and it is absolutely something I was thinking about that, perhaps one the iteration forms would give you a way to decide, which one it was in addition to the ability that you could just look at the dot description on it, but like, that's something I'd love to love to discuss separately. But I think that I would like to relax the requirement that they be on symbol for other protocol proposals, and I think that that would be a path to it, but I think that is orthogonal to this proposal. MM: Well, it's not orthogonal in the sense that the `symbol.keyFor` and and `symbol.for` is already can be used as a predicate, if you wish to as to whether a simple as So similarly a corresponding thing that named well-known symbols could do likewise, and right now essentially the Symbol Constructor is used for that name, lookup, which is terrible, which I agree, is terrible. The get intrinsics proposal as the lever for solving this I had not considered, and that's interesting. So I think that that's a good discussion to have during stage 1, so on the basis of that surprising suggestion from you. I continue to support this for stage one, and not yet for stage 2. @@ -1191,7 +1189,7 @@ KKL: If I recall, from my last read of this, there was text expressly precluding KG: Yes. -KKL: Right now what it says, which is to say that there should be if that if that Text persists in this. There should be a corresponding relaxation of that with such that one if we were to propose virtualizable behavior in the future that it would be clear that this was not intended to preclude that possibility. +KKL: Right now what it says, which is to say that there should be if that if that Text persists in this. There should be a corresponding relaxation of that with such that one if we were to propose virtualizable behavior in the future that it would be clear that this was not intended to preclude that possibility. KG: I mean, we would just remove this paragraph if we wanted that. @@ -1277,7 +1275,7 @@ KG: I'm going to assume that's consensus having previously gotten support from S RPR: I'm seeing nods from SFC in the room and PHE is nothing as well, okay? So I hope - -JHD: So just the specific use case of browsers, preventing private Fields being added to the window. Proxy is like a no-brainer for me. Like obviously, we want to prevent that, there's no valid use case for it taste for it. I'm on board and there's there's been some, know, rumbling about maybe the location object to and I'm probably fine with that also. But the even the with the web browser limit, this is sort of a broad brush and who knows, you know, maybe they'll the web will start not permitting private fields on all sorts of random stuff and while that's fine, if there's no case for it, like no one will notice. it's also a capability that it's hard to virtualize and can't be mimicked and userland so, It feel Useful to me to If possible constrain this as tightly as possible because if it turns out that there is a demand for it then probably be want to expose it to And the we should probably find out about that before were locked into some design because like a web browser. Host. let's say just chose to go nuts with the host hook +JHD: So just the specific use case of browsers, preventing private Fields being added to the window. Proxy is like a no-brainer for me. 
Like obviously, we want to prevent that, there's no valid use case for it taste for it. I'm on board and there's there's been some, know, rumbling about maybe the location object to and I'm probably fine with that also. But the even the with the web browser limit, this is sort of a broad brush and who knows, you know, maybe they'll the web will start not permitting private fields on all sorts of random stuff and while that's fine, if there's no case for it, like no one will notice. it's also a capability that it's hard to virtualize and can't be mimicked and userland so, It feel Useful to me to If possible constrain this as tightly as possible because if it turns out that there is a demand for it then probably be want to expose it to And the we should probably find out about that before were locked into some design because like a web browser. Host. let's say just chose to go nuts with the host hook KG: so I don't think we can reasonably write down more constraints than this, but I'm quite happy to go back to the HTML thread and say this TC39 approved of this with general language, but specifically for use with window and/or location, if you actually do want to start doing it on more stuff, please talk to us before doing that. Or something to that effect. @@ -1293,7 +1291,7 @@ WH: Okay, sounds good. DE: I want to encourage us to have a kind of positive collaborative relationship with host where we can work based on trust. I worry a little bit about the kind of discourse here about if we let them do this then they'll try to do that. We're all just on the same team there in Open Standards process as well. Also, governed by a code of conduct so let's just keep that in mind. -CP: It's clear that this is going to be a problem for virtualization because you will not be able to create something that looks like a window, it seems problematic. You will always be able to test if it is a real window or not. So I wonder if we should look into alternative solutions where an object can expose some information that can be used to prevent the addition of new private fields. That way we can virtualize those objects, maybe a well-known symbol added into the object or a new API, that prevents the expansion of the object with private fields associated with it. And the same tricks can be needed for window practices. +CP: It's clear that this is going to be a problem for virtualization because you will not be able to create something that looks like a window, it seems problematic. You will always be able to test if it is a real window or not. So I wonder if we should look into alternative solutions where an object can expose some information that can be used to prevent the addition of new private fields. That way we can virtualize those objects, maybe a well-known symbol added into the object or a new API, that prevents the expansion of the object with private fields associated with it. And the same tricks can be needed for window practices. KG: So we're over time. So SFC brought up that idea and previously WH objected to that idea. My response when it was brought up previously is that I think that's a thing that we could usefully explore in the future, but I do not consider it in scope for this PR. We could certainly build such an API on top of this PR once this was landed but I would really like to just do the small change right now and discuss possible virtualization for this change and expansion to user code at a later date rather than 5 minutes over our timebox, or 15 minutes, whatever it is. 
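For context, a sketch of the capability the host hook would restrict: the return-override pattern lets a class install a private field on an arbitrary object, and the hook would let a host refuse that for particular objects such as the WindowProxy. Illustrative only; exactly which objects a host restricts is host-defined.

```js
// Return override: the base constructor returns an arbitrary object, so the
// subclass's private field initializer runs against that object.
class ReturnOverride {
  constructor(target) {
    return target;
  }
}

class Stamper extends ReturnOverride {
  #stamp = true;
  static hasStamp(obj) {
    return #stamp in obj;
  }
}

const plain = {};
new Stamper(plain);      // installs #stamp directly on `plain`
Stamper.hasStamp(plain); // true

// Under the proposed host hook, a browser could make the equivalent of
// `new Stamper(window)` throw instead of stamping the WindowProxy.
```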
@@ -1301,7 +1299,7 @@ JHD: I said earlier that putting private fields is the same as putting a key in KG: Yes, you can. The implementations are very different. So while the answer is yes, you can put a window proxy in a WeakMap, implementations in engines for WeakMaps are very different from the implementations for private fields, at least in some browsers. -JHD: that makes sense. Like I'm not saying that they have to be linked but like from a mental model is it worth after this PR considering making the same host hook prevent something from being a WeakMap key? +JHD: that makes sense. Like I'm not saying that they have to be linked but like from a mental model is it worth after this PR considering making the same host hook prevent something from being a WeakMap key? KG: No. I'm just going to say no. diff --git a/meetings/2022-07/jul-21.md b/meetings/2022-07/jul-21.md index eed62919..1e9dca52 100644 --- a/meetings/2022-07/jul-21.md +++ b/meetings/2022-07/jul-21.md @@ -2,7 +2,7 @@ ----- -**In-person attendees:** +**In-person attendees:** | Name | Abbreviation | Organization | | -------------------- | -------------- | ------------------ | @@ -62,7 +62,7 @@ MM: Okay, frankly, I did not follow all of that. Sorry, I missed a bit. MF: Sorry, I went too fast. -MM: No, that's okay. The main thing was probing for is if there was a good enough rationale for omitting it at the stage, and I'm I'm satisfied that there is. So I'm fine with your decision to omit it for now. It's beyond just the straightforward inclusion of everything analogous with no hard problems. The other - while I've got the floor, if you don't mind, I'll just ask quickly another similar question, which is, the omission of the second argument to the predicates and rather putting in a separate `.indexed`. What's the reason for omitting it? I don't mind the indexed. But what is the reason for emitting? The second argument for the normal, you know, map and filter and all those? +MM: No, that's okay. The main thing was probing for is if there was a good enough rationale for omitting it at the stage, and I'm I'm satisfied that there is. So I'm fine with your decision to omit it for now. It's beyond just the straightforward inclusion of everything analogous with no hard problems. The other - while I've got the floor, if you don't mind, I'll just ask quickly another similar question, which is, the omission of the second argument to the predicates and rather putting in a separate `.indexed`. What's the reason for omitting it? I don't mind the indexed. But what is the reason for emitting? The second argument for the normal, you know, map and filter and all those? MF: So it's the second and third arguments, we omit both of those. We only pass the element and the reason is that it doesn't make as much sense as arrays, because you're not indexing into an iterator. That it's a kind of strange operation to try to emulate. Arrays are indexable collections so that index can be used for, for example, looking around you during the iteration, where that isn't useful for iterators. Additionally, as you advance iterators, that index is now relative, its meaning changes. So again those familiar patterns are not as useful. @@ -94,7 +94,7 @@ MF: No, I explained why the usage of a parameter and the different cases of that DE: So, I think an issue for both flat() and flatMap() is, why was the decision made that this would be about a nested iterator rather than a nested array. Because I guess that's where the generalization falls through. 
I'm obviously we're about, we're talking about an outer iterator, but why are the elements that were flattening since generalized to iterators, rather than arrays? -MF: I wouldn't call it a generalization. They're just different structures. You could think of it as a generalization if they were iterables, which we certainly don't want. When we talked about - when we designed Array.prototype.flat, you can read in the conclusion of the meeting where we decided that that X.prototype.flat should flatten X's. This was when we were considering IsConcatSpreadable as a possibility or Iterables, there were problems with doing that. So the model that we want to go forward with was for X.prototype.flat to flatten X’s. I'm continuing that here. +MF: I wouldn't call it a generalization. They're just different structures. You could think of it as a generalization if they were iterables, which we certainly don't want. When we talked about - when we designed Array.prototype.flat, you can read in the conclusion of the meeting where we decided that that X.prototype.flat should flatten X's. This was when we were considering IsConcatSpreadable as a possibility or Iterables, there were problems with doing that. So the model that we want to go forward with was for X.prototype.flat to flatten X’s. I'm continuing that here. WH: So, what is the issue with having a parameter of infinity? @@ -108,7 +108,7 @@ WH: Okay, I wasn't aware of that difference. It makes it very different from wha DE: Do you have any use cases in mind? The use case in the explainer, you know, is an array that then has values called on it. Do you have any use-cases where you would really want this to flatMap to be over iterators rather than arrays? -MF: Whenever your iterator produces iterators. +MF: Whenever your iterator produces iterators. DE: For example? like you gave an example which could have been handled. It were that @@ -122,7 +122,7 @@ RBN: Yeah, I just well, I understand that the purpose of indexed is a way to kin MF: Okay, I think I've clarified my opinion. -JSC: Continuing the discussion of indexed() and index() parameters to mapping functions. One thing I wanted to raise was, I am very sympathetic with the idea that because we are dealing with potentially lazy iterators, we don't have random access indexing, it doesn't make sense to try to randomly access specific indices and so like to see index numbers mean, something different with these potentially lazy iterators or randomly accessible array. Having said that. I'm not sure if I find the example of, like, being like, dropping the first five value from an iterator and then trying to map with an indexed ballot with in this index integers. I'm not sure if that would be particularly confusing to people given that we already have this with slice. We can, you know, we can drop the first five values from an array and then we map. If I remember correctly, I'm pretty sure that the Index integers given to the mapping function will be starting from the first thing in the slice. So with that said, I don't feel particularly strongly whether we should have `indexed` as a separate method versus providing index arguments to the mapping function, but I will say that, I think it does make sense to provide index integer arguments to mapping functions too, as well as the first closure actually, instead of providing a generic index function, it has a mapIndexed function. 
where it explicitly - like you can you can instead of index providing a function that creates index into arbitrary lazy iterators. It forces you to use it to provide a mapping function for that which is basically the same as like .map. And if you are function provides an gets an integer arguments because of the concern about creating a lot of garbage with intermediate array entries with Indices. Now I'm not necessarily that, that's what we gotta do and I'm fine with having a separate index method. But I do want to say that it's at least from my perspective, that actually it does make sense to provide index, integer arguments, to flat map. And in fact, maybe more, maybe more efficient in that you don't have to create a lot of intermediate array entries, etc. +JSC: Continuing the discussion of indexed() and index() parameters to mapping functions. One thing I wanted to raise was, I am very sympathetic with the idea that because we are dealing with potentially lazy iterators, we don't have random access indexing, it doesn't make sense to try to randomly access specific indices and so like to see index numbers mean, something different with these potentially lazy iterators or randomly accessible array. Having said that. I'm not sure if I find the example of, like, being like, dropping the first five value from an iterator and then trying to map with an indexed ballot with in this index integers. I'm not sure if that would be particularly confusing to people given that we already have this with slice. We can, you know, we can drop the first five values from an array and then we map. If I remember correctly, I'm pretty sure that the Index integers given to the mapping function will be starting from the first thing in the slice. So with that said, I don't feel particularly strongly whether we should have `indexed` as a separate method versus providing index arguments to the mapping function, but I will say that, I think it does make sense to provide index integer arguments to mapping functions too, as well as the first closure actually, instead of providing a generic index function, it has a mapIndexed function. where it explicitly - like you can you can instead of index providing a function that creates index into arbitrary lazy iterators. It forces you to use it to provide a mapping function for that which is basically the same as like .map. And if you are function provides an gets an integer arguments because of the concern about creating a lot of garbage with intermediate array entries with Indices. Now I'm not necessarily that, that's what we gotta do and I'm fine with having a separate index method. But I do want to say that it's at least from my perspective, that actually it does make sense to provide index, integer arguments, to flat map. And in fact, maybe more, maybe more efficient in that you don't have to create a lot of intermediate array entries, etc. MF: I would like to refute each of your points. So it is different to drop and then map() versus slice in that slice and then map. You are correct that the indexes do start relative to the slice, but map is also passed that slice, meaning that those indexes can be used for that lookaround indexing that I was talking about. @@ -146,7 +146,7 @@ MF: Okay, I will open a thread to follow up on this topic and we can have a conv MF: Okay, moving on. The next open question is should we have a toAsync() method on Iterator.prototype? This is simply a convenience but it's probably a pretty common convenience. 
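For reference, the two shapes being contrasted on the slide, assuming the helper names from the iterator and async iterator helpers proposals (`AsyncIterator.from`, `map`, `toArray`) plus the `toAsync` convenience under discussion; none of these are standard yet:

```js
async function example() {
  // Without toAsync(), the chain is interrupted by the wrapping call.
  const viaFrom = await AsyncIterator.from(['a.json', 'b.json'].values())
    .map((u) => fetch(u))
    .toArray();

  // With the proposed toAsync() convenience, the chain reads straight through.
  const viaToAsync = await ['a.json', 'b.json'].values()
    .toAsync()
    .map((u) => fetch(u))
    .toArray();

  return [viaFrom, viaToAsync];
}
```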
So you can call AsyncIterator.from like we see on the right hand side here, then you have to kind of break your chaining usage and then you end up formatting it all weird. Should we have a toAsync() method on the iterator prototype to do that? -KG: I proposed this but I'm in favor. +KG: I proposed this but I'm in favor. JHX: (from queue) +1 @@ -160,7 +160,7 @@ MM: Okay, WH, can you expand on that? WH: Just looking at the slide on the screen: I would not want to turn this into a pipe. -MM: I don't mind toAsync(). Certainly the pipe is still awkward compared to the left and `toAsync()` seems pretty natural. Well, I do want to keep it down to a dull roar, to not use this argument to keep adding in methods for things that should just be that something operating from the outside. There's a modularity issue. It's for iterators to know about async iterators. It's not weird for async iterators to know that iterators. +MM: I don't mind toAsync(). Certainly the pipe is still awkward compared to the left and `toAsync()` seems pretty natural. Well, I do want to keep it down to a dull roar, to not use this argument to keep adding in methods for things that should just be that something operating from the outside. There's a modularity issue. It's for iterators to know about async iterators. It's not weird for async iterators to know that iterators. JHX: The pipeline operator has very different operator precedence so if you use pipeline operator here, it will need many parentheses. If we use the extension proposal, then this can be solved. But if we talk about pipeline, I think we have the precedence issue. So I'm +1 for toAsync. @@ -186,7 +186,7 @@ MF: I'm curious to see how you would describe that as lazy, but we can take it u MF: We are aware of a web compatibility issue (Issue 115) . This one is unfortunate. The toStringTag on Iterator.prototype is just like the toStringTag on every other prototype. It is non-writable but there is a library, a popular library out there, regenerator-runtime, that writes to Generator.prototype. Generator.prototype inherits from Iterator.prototype. And because of the override mistake, when we add Iterator.prototype toStringTag, it will try to make an assignment to that property. So in strict mode code, this will cause it to throw because it is doing that write to a non-writable property. I see two possible solutions here: either we make this `toStringTag` writable, or don't add toStringTag to Iterator.prototype. Maybe there are other options, but we'll have to do something to solve this problem. Any opinions? -MM: Why is it assigning? +MM: Why is it assigning? MF: It is assigning to do a defineProperty and assigning to properties that don't exist does that. This is, this is just what people do. @@ -253,9 +253,9 @@ JHX: There are some more consideration of the double-ended iterators, firstly, y JHX: There are also some other discussion about like the performance say and how engines can optimize or may use the method B internally for optimizing. For example, if we have range proposal in the future, a very large range actually, engines and my kids double-ended and and gets the last items but the user and it supposedly impossible to have to have such optimization. and note here, the mechanism Be with Mexican be developers can always convert to a known (?) to arrays. So method B has the his escape hatch. But if we choose method A it is impossible to get method B behavior. -JHX: We have many discussion on the record, they are some summary of the other discussions. 
example, it's always the developed due to (?) for example, in the reverses iterator, you always need to - it's the developer's duty to make sure to make sure that the normal iterator and the reverse iterative, they returned the same same sequence. and in the double-ended very interested, it's also the duty to make sure the next and the nextLast method eventually give the same sequence. and, and we also discuss to some like the it's better to treat building and using and its various Ecosystem or whether we should consider. If you need to accelerators ecosystem of, not everyone agreed that stuff. On the call, of course we have many arguments, but Eventually, it seems the people on the call generally like mechanism B. +JHX: We have many discussion on the record, they are some summary of the other discussions. example, it's always the developed due to (?) for example, in the reverses iterator, you always need to - it's the developer's duty to make sure to make sure that the normal iterator and the reverse iterative, they returned the same same sequence. and in the double-ended very interested, it's also the duty to make sure the next and the nextLast method eventually give the same sequence. and, and we also discuss to some like the it's better to treat building and using and its various Ecosystem or whether we should consider. If you need to accelerators ecosystem of, not everyone agreed that stuff. On the call, of course we have many arguments, but Eventually, it seems the people on the call generally like mechanism B. -JHX: be So here's a very simple summary of why I prefer method B. In simple case these two methods are just the same and in the worst case, method be give the power to the (? and the double-ended iterators by themselves can be used for can replace the reverse iterator. And can are some benefits to which they iterate help us, and the maximum has to escape hatch. So I was a champion. I'd like to go on with method B but we are going to stay you want. So so we always can revisit the design. Note that actually, it's possible support. both, that means if the iterator is double ended use method B. But if it's not to fall back to method B, but actually we can add a method A as fallback for method B anny time we want. So for now, I would like to focus on the method B and maybe we can revisit n the future. +JHX: be So here's a very simple summary of why I prefer method B. In simple case these two methods are just the same and in the worst case, method be give the power to the (? and the double-ended iterators by themselves can be used for can replace the reverse iterator. And can are some benefits to which they iterate help us, and the maximum has to escape hatch. So I was a champion. I'd like to go on with method B but we are going to stay you want. So so we always can revisit the design. Note that actually, it's possible support. both, that means if the iterator is double ended use method B. But if it's not to fall back to method B, but actually we can add a method A as fallback for method B anny time we want. So for now, I would like to focus on the method B and maybe we can revisit n the future. JHX: So this is first important with this now and then there are others topics. The first thing is we change the next back to next last. On the problem of the original version, using the next back. Look. 
The first problem is that confusion, the people always ask, that's how I and iterates could move back to previous status, but actually the double ended iterator does not go back to the previous step. But think the confusion may become from the word "back", the word back is I use that word because of Rust double ended iterators use nextBack, but the term back may come from the C++ method. It actually means just makes the lost. So I think we should use the nextLast. More consistent words in JavaScript and they are the prominent people feel the, the next mass of should always forward only. And, and of the important problem is that, since the new version rely on the passing, the protocol semantics of iterative helpers, but that is removed, we have to use separate method now. So why change it to the next class? I hope it will be. Clear. So we will not cause great confusion with bidirectional iterators, and the word last is consistent with current API and the symmetry of the next. And nextLast could makes it clear that the next is consumed of the first item and nextLast consumes the last item of the remaining items And and now we do not abuse, the custom iterates protocol and magic strings. @@ -267,7 +267,7 @@ JHX: So we have the normal iterators, the next only, and double-ended. The quest JHX: And how do it iterator helpers work on double ended iterators? For forEach and Map we just invoke the underlying iterator methods. [missed discussion of other helpers] -JHX: Yeah. The final thing is how to write double-ended, iterating, generator on in the original version, I use the function sounds, but because we now move to next class, so we cannot write the go and literally directly while generator spot. If you if you want, you can still use a generator with the help of a wrapper, or even a decorator. It could look like [slide]. +JHX: Yeah. The final thing is how to write double-ended, iterating, generator on in the original version, I use the function sounds, but because we now move to next class, so we cannot write the go and literally directly while generator spot. If you if you want, you can still use a generator with the help of a wrapper, or even a decorator. It could look like [slide]. JHX: this is to all the come has ended and the future plan is to align with the it sort of helps and sync. It's maybe a good time to have experimental implementation so we can have some feedback and check whether work fine. @@ -283,7 +283,7 @@ JHX: Normally, if the underlying Structure is already a deque, for example, arra WH: This proposal is quite pretty in an abstract way, but I'm a bit concerned about the complexity implications for things like iterator helpers or anything which produces iterator APIs that will want or need to support the nextLast API. For some cases, it's a bit unclear how that would work. And for generators and async things, it’s hard to do async from both ends. -JHX: I think there's many many cases we do not double-ended iterators, for example. streams are always one direction.. So the double-ended is mainly for the normal cases like Set or Map because in JavaScript will they can Double ended. And in some cases, for example, what are listening in the like, the literature repeats it can be used with the, it will help us because it's (?) It might be have the pattern that you have infinite sequence and you do some operation on that and and In, and even some cases you may want to do the operating like take last or something like that. 
So the double-ended iterator can provide a reasonable semantic and eventually if there are some iterative that it's not double ended if people want to use double and if they can convert converted, to array. +JHX: I think there's many many cases we do not double-ended iterators, for example. streams are always one direction.. So the double-ended is mainly for the normal cases like Set or Map because in JavaScript will they can Double ended. And in some cases, for example, what are listening in the like, the literature repeats it can be used with the, it will help us because it's (?) It might be have the pattern that you have infinite sequence and you do some operation on that and and In, and even some cases you may want to do the operating like take last or something like that. So the double-ended iterator can provide a reasonable semantic and eventually if there are some iterative that it's not double ended if people want to use double and if they can convert converted, to array. WH: In the examples you gave some of them are quite tricky. For example, you picked an infinite sequence which converges to a limit of 0. So there is a unique value and there are reasonable semantics for having a last element and iterating from the back of that particular mathematical sequence. But there are also mathematical sequences that do not converge, and it's not apparent to the API whether some mathematical sequence will converge or not. @@ -305,9 +305,9 @@ JHX: I think we all worry about complexity. My feeling is the reverse iterator a DE: Sorry. I misspoke. I meant double ended, not reversed. I just wanted to focus on. Can we can we handle this destructuring through totally forward iterators as your mechanism A and move forward with this proposal that way. -JHX: um, yeah, personally I prefer the mechanism B because I always think we should we should consider - as I said, if we choose mechanism A now we can never have double ended. +JHX: um, yeah, personally I prefer the mechanism B because I always think we should we should consider - as I said, if we choose mechanism A now we can never have double ended. -DE: look forward to more concrete use cases for this in a follow-up presentation. +DE: look forward to more concrete use cases for this in a follow-up presentation. SYG: If I can jump a little bit. I agree with DE. I think the like, concretely for me. The problem statement that is uncontroversial is that people may want rest in the middle of their array destructuring and it seems like your preferred - that it seems like there's not consensus or there's not enough agreement on double-ended iteration being its own problem statement and you are conflating the two and it seems like there is a path forward with mechanism A to solve the narrower problem of having rest in the middle and if you would like to expand the problem statement to be directly about double-ended iteration that's a pretty different problem. And then we're asking you to justify that better. Does that makes sense? @@ -368,7 +368,7 @@ JSC: So I have like I have possible solutions that basically are like, like, per SYG: what is the eviction trigger in your current proposal? Like, Like, when do you decide evict something from the LRU. -JSC: in the current proposal, current proposal, like it one? Actual solution could be, could be just a simple. A simple number of students. 
Having said that, it would be within the scope of the proposal to extend it to like make this something that's not in commendable and userland and to have it be something like be triggered by refinement, it by memory usage itself. I think that's excited but I think part of the state 1, +JSC: in the current proposal, current proposal, like it one? Actual solution could be, could be just a simple. A simple number of students. Having said that, it would be within the scope of the proposal to extend it to like make this something that's not in commendable and userland and to have it be something like be triggered by refinement, it by memory usage itself. I think that's excited but I think part of the state 1, SYG: I see, I don't think we're really asking for different things in that your it's it's okay. So Okay, let me rephrase. What I said earlier about is undesirable to have more g, c hooks. if your current Proposal with doing this only usual and we're doing things that are, that are already possible user land, sufficiently solves the memory management issues of large apps, cool. If it doesn't, I would like to think about Having like memory pressure. Be a trigger during stage 1 and Stage 2 to solve the problem of memory management for large applications. I agree with that and possibly other implementation, driven eviction triggers like that are not just memory, lay some other kind of resource exhaustion or resource management. @@ -412,7 +412,7 @@ JSC: “William Martin's, no. Need to talk actually, it says support to stage on DLM: I was originally going to ask why this can’t be done in a userland library, but if I came out of the discussion but I guess what I'd like to hear is your motivation for investigating this problem space inside of stage one rather than experimenting with this in userland libraries first done. I think everyone agrees, this is an important problem, but given complexity and perhaps a difficulty of coming up with a generic solution, I'm wondering if it wouldn't be better to explore this in userland libraries first. -JSC: well, there are a couple of problems to this from my perspective. One is that the original impetus of this from our perspective was because we wanted a standard memo-ization function API memorization is superficially fumble but course it high complexity Arts to out it manages its cache. And so the idea was why don't we make that much? But like, Modular much like a make, it a pluggable interface and like the comic things for memorization seem to be like to argue. So why don't we keep explore that separately? Because it made it probably will be separately and useful for large applications in addition to that there are their Arts do exist using its implementations vary. Something officials, confessionalism policies. Head. Instead let me be happy to try to do all of them, but it's, there isn't a lot that of something that is not not possible in user language SYG has, which has the trigger whether the memory signals from the engine itself. But also for this looks incredible, which course Does that satisfy you? +JSC: well, there are a couple of problems to this from my perspective. One is that the original impetus of this from our perspective was because we wanted a standard memo-ization function API memorization is superficially fumble but course it high complexity Arts to out it manages its cache. And so the idea was why don't we make that much? 
But like, Modular much like a make, it a pluggable interface and like the comic things for memorization seem to be like to argue. So why don't we keep explore that separately? Because it made it probably will be separately and useful for large applications in addition to that there are their Arts do exist using its implementations vary. Something officials, confessionalism policies. Head. Instead let me be happy to try to do all of them, but it's, there isn't a lot that of something that is not not possible in user language SYG has, which has the trigger whether the memory signals from the engine itself. But also for this looks incredible, which course Does that satisfy you? JSC: Okay. one memoization separate proposal is motivating this. Two there are existing there already exists userland libraries. And we would be happy to exploit those in stage 1. We think that this is an important enough and general enough problem, that it is worth exploring standardization, especially in really is really, it was agent, but it's true that like, even if this takes a long time to progress, there can be Solutions in userland. But I think that this is important and Broad enough and cross-cutting enough that it's worth exploring standardization for if not. And lastly we also would be exploring engine driven eviction. Alert/trigger, whatever the best word is, I don't remember. Does that satisfy your question? @@ -486,15 +486,15 @@ WH: What MM said is one possible use case. It seems to be a specialized use case MM: I'm not suggesting that if you use only for that, I'm suggesting that in exploring this, there might be some synergies there be explored so that if you want an unobservable memo, the platform might be able to help you out. -JSC: All right. Thank you very much +JSC: All right. Thank you very much - WH: I’m not sure having a platform silently memoize pure functions is in scope for this. +WH: I’m not sure having a platform silently memoize pure functions is in scope for this. -JSC: All right, all right with that said. Thank you very much, both of you. Next up is DLM, +JSC: All right, all right with that said. Thank you very much, both of you. Next up is DLM, -DLM: I think we've all known whether this isn't like there's enough motivation right now. That really explains why his can't stay his own Library. So I appreciate seeing a little bit more of that. +DLM: I think we've all known whether this isn't like there's enough motivation right now. That really explains why his can't stay his own Library. So I appreciate seeing a little bit more of that. -JSC: Thank you. All right, thank you very much. +JSC: Thank you. All right, thank you very much. SFC: Hello, my question is that often when we discuss new features in ecmascript as well as an Intl and other places, we look for prior art, and other programming languages are common place to look that. We used prior art a lot, for example, in the regular expression set notation proposal. That's not to say that prior art is a requirement because, you know, EcmaScript could be the language that sort of sets memoization is something that could be part of the language. So it's not necessarily needed to have prior art. But you know, if there's no prior art, it means that it's more incumbent on us on and on the proposal champions to make a very strong case. So I was wondering if there were other examples of major programming languages that have this feature and maybe what were some of the reasons that they decided that it was worth adding to the language? 
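To make the pluggable-policy idea JSC describes above concrete, here is a rough sketch, assuming a hypothetical `memoize(fn, cache)` shape (no such API has been agreed): any object implementing `has`/`get`/`set` supplies the eviction policy.

```js
// Hypothetical shape only: memoize() accepts any cache-like object with
// has/get/set, so the eviction policy is pluggable and supplied by the caller.
function memoize(fn, cache = new Map()) {
  return function memoized(arg) {
    if (cache.has(arg)) return cache.get(arg);
    const result = fn.call(this, arg);
    cache.set(arg, result);
    return result;
  };
}

const slowSquare = (n) => n * n; // stand-in for an expensive computation
const fastSquare = memoize(slowSquare);                 // unbounded Map cache
// const bounded = memoize(slowSquare, new LRUCache(100)); // bounded policy, e.g. the LRU sketch above
fastSquare(4); // computed once
fastSquare(4); // served from the cache
```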
@@ -563,13 +563,13 @@ RPR: Strong ending. Presenter: Hemanth HM (HHM) - [proposal](https://github.com/tc39-transfer/proposal-object-pick-or-omit) -- [slides]() +- slides -HHM: All right. ergonomic Dynamic. object restructuring, maybe object occur, Ahmad Yeah, this time around the presentation starts at some of the examples that surface will be searching through GitHub and other sources to see how it is in the real world. Here's an example of picking up dependencies from config object. In this example we have a config object but we want to pick up dependencies, Dev dependencies and create dependencies only It's another example where we are picking only some of the other options from the user options. In this case, it's like shell and we do uid and stuff like that. And here from the request body, we are picking up things. We're interested, the name company name and password in this in this classic case of profile data. And this, this is a very interesting example where we looking, whether a component should reload or not. So here, it's trying to pick this data props with compare these and then again, it's picking from previous projects and compare dings. It's important to notice that compare case in previous crops all are dynamic in nature so it's not like we just have some set of values there. Here is an example where you're omitting sensitive data from user info location C, like license number and tax ID, and we have a new model here that's getting created by omitting action updates and action. Deleted here is an array of game the dynamic values. Of course, meeting schema is underscore ID and IDs and this is a real world example on one of the on neural network, Repose where we are China, project is committed with things that they don't really like to have in that particular model +HHM: All right. ergonomic Dynamic. object restructuring, maybe object occur, Ahmad Yeah, this time around the presentation starts at some of the examples that surface will be searching through GitHub and other sources to see how it is in the real world. Here's an example of picking up dependencies from config object. In this example we have a config object but we want to pick up dependencies, Dev dependencies and create dependencies only It's another example where we are picking only some of the other options from the user options. In this case, it's like shell and we do uid and stuff like that. And here from the request body, we are picking up things. We're interested, the name company name and password in this in this classic case of profile data. And this, this is a very interesting example where we looking, whether a component should reload or not. So here, it's trying to pick this data props with compare these and then again, it's picking from previous projects and compare dings. It's important to notice that compare case in previous crops all are dynamic in nature so it's not like we just have some set of values there. Here is an example where you're omitting sensitive data from user info location C, like license number and tax ID, and we have a new model here that's getting created by omitting action updates and action. Deleted here is an array of game the dynamic values. 
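The call sites HHM is walking through would look roughly like the following, assuming hypothetical `Object.pick` / `Object.omit` methods (the names and exact semantics are precisely what stage 1 would explore, not a settled design):

```js
// Hypothetical API shape, for illustration only.
const config = {
  name: 'app',
  dependencies: { lodash: '^4.17.0' },
  devDependencies: { jest: '^29.0.0' },
  scripts: { test: 'jest' },
};

// Pick a known set of keys from a config object:
const deps = Object.pick(config, ['dependencies', 'devDependencies']);

// Omit sensitive fields, including keys that are only known at runtime,
// which plain destructuring cannot express:
const userInfo = { name: 'Ada', licenseNumber: 'X-123', taxId: '99-9' };
const keysToHide = ['licenseNumber', 'taxId'];
const publicProfile = Object.omit(userInfo, keysToHide);
```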
Of course, meeting schema is underscore ID and IDs and this is a real world example on one of the on neural network, Repose where we are China, project is committed with things that they don't really like to have in that particular model HHM: so we all get the point right life is all about taking and omitting things we want to know making things that we want big Philosophy for there. one would argue that it's very easy to kind of have a utility method. Maybe we can just use `fromEntries`, and, or `map`, or those keys to check through, has on properties. And something like that. What we see on the screen or maybe we can use our destructuring assignment. But we'll probably see. What are the challenges as he goes for the next slides and may be omitted could also look like this where we have object from entries. We do a key map over and try to filter and omit the keys that we don't want from a no. -HHM: the major challenge is had been asking notice in the previous two slides, half ergonomic, it's not really ergonomic and we can like destructuring cannot stick anomalous properties which are Dynamic by dynamically mean like request dissection basically requires a hard-coded number of properties and, and picking up properties from the Prototype isn't even really possible with the and omitting some properties. You can only let clone and delete and not really like what we saw in the previous example, +HHM: the major challenge is had been asking notice in the previous two slides, half ergonomic, it's not really ergonomic and we can like destructuring cannot stick anomalous properties which are Dynamic by dynamically mean like request dissection basically requires a hard-coded number of properties and, and picking up properties from the Prototype isn't even really possible with the and omitting some properties. You can only let clone and delete and not really like what we saw in the previous example, HHM: So how would this problem statement for our stage 1 before we even talk whether it should be object, pick, maybe something like object filter? So what do we all feel? Is this something worth exploring here? I would probably cause a bit to ask about statement and then we could talk about how we could probably solve it if it is really a problem to be solved. @@ -634,13 +634,13 @@ RPR: I'm so any questions or comments on this first? DE: So I don't really understand. I mean, yes, this is common in the community, but it seems largely related to pipeline. I'm not really sure why we want both. I mean, This, this is kind of a stage one question whether we want to be doing these in parallel, when I introduced pipeline, made a specific comparison versus sort of the functional version and achieving stage 1 was based on I think an idea that yeah we do kind of want to explore the syntax version. This is common enough. So I don't know, maybe this should be part of that effort; I know you're involved in the other one. I just don't really understand why it's introduced as a separate stage one proposal. -JSC: Well, it's a fair question. I would say that one, the pipe operator proposal. So first of all, why is so first of all, there's the question of why would this be useful for developers in addition to the pipe operator syntax Sometimes. And also why should this be a separate proposal to the pipe? Operator proposal? I'll tackle the second one first. It's the pipe, operator proposal is already very important and is already large and dealing with the dealing with syntax operator. 
It's the specs already called the specs are ready. Large-ish there's and there's like most of the conversation on it is, there's there's still ongoing conversation on it especially regarding the topic token. I think that if we are, would it be if we would consider a complimentary functional API? In addition to that, that would be worth keeping that in a separate proposal. I think, I think would be clean. the as for the other question, would this be useful for developers in addition to the pipe off to as as you're well aware DE, There's a sizable amount of the community that uses unary callbacks and want a serial callbacks a lot and they, I've heard in pendants from a lot of, a lot of them that they feel that the pipe operator with a topic reference does not adequately meet their conciseness needs or their Dynamic needs. I've heard I've heard from a couple of developers saying that in fact they would prefer would have preferred a function to an operator in the first place as separate them with commas and also they can, they can apply an array of call backs, so they can dynamically exclude, a callback that they might that they filter out or something. So, and yeah, that I think. And I believe I have a bunch of code examples, but it is a comment, it is a common pattern. Now it's hard to say like how much of this pattern would per-se, by the pipe Operator, it's hard to say. I've tried to find examples that I think I put in an explainer of like dynamically excluding of callback to from this square or whatever. And I could do a more in-depth analysis of conciseness versus the pipe operator. Having said that think still that at the very least, it's worth exploring adding this lightweight functional API, has that that At least a lot of developers from the feedback I got feel would least be complementary to an operator. +JSC: Well, it's a fair question. I would say that one, the pipe operator proposal. So first of all, why is so first of all, there's the question of why would this be useful for developers in addition to the pipe operator syntax Sometimes. And also why should this be a separate proposal to the pipe? Operator proposal? I'll tackle the second one first. It's the pipe, operator proposal is already very important and is already large and dealing with the dealing with syntax operator. It's the specs already called the specs are ready. Large-ish there's and there's like most of the conversation on it is, there's there's still ongoing conversation on it especially regarding the topic token. I think that if we are, would it be if we would consider a complimentary functional API? In addition to that, that would be worth keeping that in a separate proposal. I think, I think would be clean. the as for the other question, would this be useful for developers in addition to the pipe off to as as you're well aware DE, There's a sizable amount of the community that uses unary callbacks and want a serial callbacks a lot and they, I've heard in pendants from a lot of, a lot of them that they feel that the pipe operator with a topic reference does not adequately meet their conciseness needs or their Dynamic needs. I've heard I've heard from a couple of developers saying that in fact they would prefer would have preferred a function to an operator in the first place as separate them with commas and also they can, they can apply an array of call backs, so they can dynamically exclude, a callback that they might that they filter out or something. So, and yeah, that I think. 
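The usage being described, comma-separated unary callbacks that can also be assembled dynamically, would look roughly like this, sketched here with userland `pipe`/`pipeAsync` helpers (the eventual names and semantics are exactly what the proposal would explore):

```js
// Userland sketch of the kind of functional composition API under discussion.
const pipe = (...fns) => (input) => fns.reduce((acc, fn) => fn(acc), input);
const pipeAsync = (...fns) => async (input) => {
  let acc = input;
  for (const fn of fns) acc = await fn(acc);
  return acc;
};

const trim = (s) => s.trim();
const toUpper = (s) => s.toUpperCase();
const exclaim = (s) => `${s}!`;

// Comma-separated unary callbacks, no topic token needed:
const shout = pipe(trim, toUpper, exclaim);
shout('  hello  '); // "HELLO!"

// Callbacks can also be included or excluded dynamically:
const shouty = Math.random() > 0.5; // some runtime condition
const steps = [trim, shouty ? toUpper : null, exclaim].filter(Boolean);
pipe(...steps)('  hi  '); // "hi!" or "HI!"
```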
And I believe I have a bunch of code examples, but it is a comment, it is a common pattern. Now it's hard to say like how much of this pattern would per-se, by the pipe Operator, it's hard to say. I've tried to find examples that I think I put in an explainer of like dynamically excluding of callback to from this square or whatever. And I could do a more in-depth analysis of conciseness versus the pipe operator. Having said that think still that at the very least, it's worth exploring adding this lightweight functional API, has that that At least a lot of developers from the feedback I got feel would least be complementary to an operator. JSC: Does that address your question a little bit? JHX: I just have this similar feeling on the like JSC said and the problem of the pipeline operators, the topic for Functional programming That's you will have many topics in and the this proposal actually allow you to avoid many topic. So I plus 1. -JSC: Thank you. Thanks. I guess RPY is up next. +JSC: Thank you. Thanks. I guess RPY is up next. RPY: Yeah, this is feeling very reminiscent to me of the discussions we had before when we decided to go with the placeholder approach to pipeline. Like, I think we had this same discussion, really, my memory was that the committee felt that the placeholder approach better fits with what the language provides like. think this is kind of encouraging currying and things which is tended to be done by user land libraries, but it's not something the language necessarily promotes as a style and not really saying that we should go one way or the other, but it does feel like this is a reaction to the F# pipeline, not being chosen at that time. @@ -656,13 +656,13 @@ JSC: Hmm. All right. Thank you, JHD. So there are a couple of things here. I wil JSC: The other thing is with regards to the performance concerns which I remember also regarding proliferation of callbacks is that I would like to divorce the idea of these functions that imposing character. from the from the from the, the, you know, from the very, from the Haskell inspired functional programming subset the communicate with other libraries. What I would like to focus on and which I think I tried to do it, the explainer is use cases that draw simple callback based programming which like I mentioned earlier, we've all done before, so like callbacks already exists, this isn't about out encouraging new like urging a lot of new callbacks without having to Curry, or partially apply every single time you want an unary function call. you know, we're what this is about is just is just posing callbacks that we are already dealing with in callback oriented programming, that's the idea. Now, whether that clashes too much against Like whether that doesn't sufficiently here, it's that this will encourage proliferate a lot of partially applied proliferation of a lot of partially applied or Costco functions, functions, of course, none of this can for sure, I don't think that should block stage one per say. I do appreciate the constat. Perhaps we should decide now that link once and for all or at least for the foreseeable future. whether we want to standardize at the very least manipulation of callbacks, -JSC: but I would say that try not to think about that. Try not to think about super abstract Haskell inspired functional programming, that's occurring and partial application so much as this is just about combining call the Callback already have Does that does that? Answer your question is have is for what equipment would work. 
+JSC: but I would say: try not to think about super-abstract, Haskell-inspired functional programming with currying and partial application, so much as this just being about combining the callbacks we already have. Does that answer your question?

JHD: So my sort of response to that is, without hard data in front of me of course, my feeling is that the way that I typically see or do callback composition is not consistently generalizable - it's not something that would work with this syntax. The way that I usually do it is I wrap it in an arrow function or something and combine the things together. Functions sometimes return an object of things and I only need one property, or return an array of things and I need to handle a tuple of things, and I need to make multiple different function calls on the results, and so on. So that kind of composition is definitely a universal problem. What I'm skeptical about is that this particular form of callback composition is actually common in a broader sense. And I would say that if there were more use cases in the explainer - ideally more evidence of usage of these sorts of patterns, especially outside of the subsets of the ecosystem we've been talking about - that would be compelling, I think, for me. And similarly, if that is actually compelling, then it would be worth looking at further for the pipeline proposal too. But if we didn't think it was a compelling enough pattern for the pipeline proposal, it seems unlikely to me that we will decide that it is compelling enough now.

JSC: All right. Yeah, I acknowledge your points, Jordan.

-SFC: Yeah, I was just going to observe that, at least function, that pipe async. Looks a lot like functions that are in the popular `async` package on npm that.
I've personally used quite a bit and I guess in general, I'm somewhat in favor of exploring this problem space because I think that that you know, having a way with using regular functions to express this type of operation, is definitely something that developers have a lot. They sing package on npm is as you know, a lot of downloads even with promises which largely make you know like you know which in some sense move away from the Callback style but even with even in, even with that the popularity of that of packages like that, continue to increase. Because, you know, this is a very good way to lay out your code. I actually would, you know, my personal opinion would be that. This is the type of Direction I Would. I'm a like, would prefer to see us Explore like that as an alternative to the pipeline proposal, but even that pipeline is already at stage 2, you know. also, you know, can see a lot of the other arguments here that well, this is pretty much duplicating the work of pipeline. But, know, at least, for my opinion, you know, I think it would be really interesting to see like, you know, if this were to move forward, what would, what would it look like as alternative? But again, that that's that's a personal opinion of So that's all I have to say. JSC: Thank you. SFC. I am pushing this as being complimentary for, for a. less General Uses of the pipe operator, which I'm also involved in, but I do think that it is, at least exploring the problem space, So to, to a certain extent, like we discussed earlier in with another Proposal, with memo stage. One has a connotation that the committee Might like, has positive at least some positive feelings towards it. it. Technically, it's just that the committee thinks it's worth. Devoting time to explore the problem space. In this case, the problem space is call back any kind like whatever callback composition or application of callbacks there might exist if I reach stage one, I would explore. I would try to explore it. A backbone into code. Black JHD mentioned. Different sorts composite callbacks healthy people, those callbacks and callback oriented code, and whether it may be worth standardizing functional API, in addition to the pipe operator and comparing them with versions, that use the pipe operator. That's what I would do at stage one. I appreciate the idea that maybe the committee wants to put its foot down and say we bless. We want to bless pipe, operator, and look at functional type option for a long, long time or effort. That's also the. That also, this muscle might be where the makes it stand either way. I would like to position this as a complement to operate whether or not this succeeds in reaching stage, 1 for exploration, and I also appreciate the mentioning of the async package. I think that one of the most compelling succinct, this thing's examples of increased since readability. When it comes to the proposed functions is with pipe async functions. Where you're composing a bunch or cereal, applying a bunch of async callbacks. think that that if I think that the benefits are clears with, You think one of the async functions? All right, thank you. Next up. @@ -674,9 +674,9 @@ SYG: so practically. I want to ask you as the champion, this the concerns expres JSC: What I would do concretely, what JHD suggested and which would be to explore as much callback oriented programming as I could. and see where callbacks those sorts of callbacks are composed and see if there's anything generalizable from there. That's what I would do. 
Concretely for stage 1 -SYG: Would that help address the motivational questions raised today of having pipe line versus the wouldn't some what? +SYG: Would that help address the motivational questions raised today of having pipe line versus the wouldn't some what? -JSC: What I would also do is translate those examples. I find them in the Corpus from callback oriented programming into versions, that use both a functional API approach and also syntactic pipe operator approach and compare them and then perhaps we will come to the conclusion step. This is just too redundant. the, let's pursue the popcorn like, like, let's give up on this. I think that, you know, there is a, there is sizable part of the community. That's clamoring for clamoring for this shift and I'm trying, I'm trying to make it so that this should be useful, even whether or not we have a part of the community, right? Like for callback oriented code and all the performance concerns that we've talked about before. If we can't find enough examples, when we compare them to pipe operator, that's like this is basically more readable than the pipe operator syntactically, then I would definitely give up on stage 2, that's if I, that's it, it's the stage 1. Now, that would be concretely, what I would do and I won't and it would help inform the motivation, vis-à-vis, the pipe operator is this enough of the benefit or pipe operator. when composing callbacks and callback oriented code? Does that answer your question? +JSC: What I would also do is translate those examples. I find them in the Corpus from callback oriented programming into versions, that use both a functional API approach and also syntactic pipe operator approach and compare them and then perhaps we will come to the conclusion step. This is just too redundant. the, let's pursue the popcorn like, like, let's give up on this. I think that, you know, there is a, there is sizable part of the community. That's clamoring for clamoring for this shift and I'm trying, I'm trying to make it so that this should be useful, even whether or not we have a part of the community, right? Like for callback oriented code and all the performance concerns that we've talked about before. If we can't find enough examples, when we compare them to pipe operator, that's like this is basically more readable than the pipe operator syntactically, then I would definitely give up on stage 2, that's if I, that's it, it's the stage 1. Now, that would be concretely, what I would do and I won't and it would help inform the motivation, vis-à-vis, the pipe operator is this enough of the benefit or pipe operator. when composing callbacks and callback oriented code? Does that answer your question? SYG: so okay, so what I have heard is that you have some idea of the concrete thing you would between here and stage and between here and asking for stage 2, that may involve you dropping stage 2.. Like, I'm not like it sounds like you're signing up to do that work. Anyway, like what is the downside of doing this before stage one, @@ -710,7 +710,7 @@ DE: That's a continuation of Dan's item from yesterday morning. Okay, the return DE: Hey everybody. So we had a professional stenographer yesterday morning. How did it go for all of you? Let's open up to questions and then we can come to a conclusion about whether to try to arrange for captioner. going forward. 
-RPR: I'll also say, I'm most eager to hear from the people who suffer the burden of writing the notes +RPR: I'll also say, I'm most eager to hear from the people who suffer the burden of writing the notes RRD: Yeah, I've been working the notes here and there I'm not. the biggest Note Taker, as someone that takes occasionally notes, this was a clear Improvement. I mean, the bot was already an excellent Improvement back in the day. I think we made a jump from being from missing some points. taking notes to being able to take almost everything but the but placed a huge amount of load on us by sometimes repeating things or having weird delays that we just make us lose the flow of what it was being said. We have a stenographer is almost real time which is a huge difference and even if some things need changing From time to time. It's a rare enough that now we are as no takers able to understand better what is being said in the room and so bring in more context from our knowledge. as TC39 delegates, delegates, for linking to parts of the spec saying, okay, we're talking about this section at this moment. Describing, what is being shown on the slides. Which are things that who are not able do when we are with but so I think that it's become increasingly more useful with this. @@ -722,7 +722,7 @@ WH: I found that this was useful at the margin. I always go through and fix up n ?: I just know that the things that you're looking over later and correcting is after the note takers have done their work here. So if the quality is improving overall after that work but also we have a simultaneous reduction in burden by committee. A, that seems like a pretty strong wind. I do this in real time as well. well. Okay, great. -[Note-taking paused so that note-takers can participate in dicussion. Long discussion not recorded] +[Note-taking paused so that note-takers can participate in dicussion. Long discussion not recorded] ### Poll diff --git a/meetings/2022-09/sep-13.md b/meetings/2022-09/sep-13.md index 0f8cf3ec..58e57d42 100644 --- a/meetings/2022-09/sep-13.md +++ b/meetings/2022-09/sep-13.md @@ -78,9 +78,9 @@ KG: And then this last thing is a PSA, which is that there is a tool called Sear Presenter: Ujjwal Sharma (USA) -USA: And we'll move on to Ecma 402. No, do you see my slides? Yep. All right. Hello &, welcome, everyone. I will try to keep this rather short. First of all, I would like to welcome you to the final meeting of the Hebrew year 5780 to and Ethiopian year 2014. It's been fairly calm these last days ever since the last meeting there has been fairly small engagement activity on 402. Except this one normative PR 708 introduces, microsecond and nanosecond to the IsSanctionedSingleUnitIdentifier, that's quite a mouthful table. This PR was sent in by Frank proposed by Andre, but the meat of it is that it adds to the list of units that are supported by number format for unit formatting and the list currently supports to two milliseconds, which is all right? Given that it was written in a Date-aware world and not so much, a Temporal-aware world but now it adds microsecond and nanoseconds. Why is this necessary? Merely useful is because it helps us DRY DurationFormat. So I would explain that a bit further in the DurationFormat topic that I have. But essentially, we're adding support for two units for unit formatting, which would complete the feature set. So, at the very least that would help. This PR has TG2 consensus and it was approved in the recent meeting. 
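Concretely, the change would make unit formatting cover the two new units (assuming the PR lands as described; outputs here are illustrative and locale-dependent):

```js
// Assuming PR #708 lands, "microsecond" and "nanosecond" join the sanctioned
// single-unit identifiers alongside the existing "millisecond".
const us = new Intl.NumberFormat('en', { style: 'unit', unit: 'microsecond' });
us.format(250); // e.g. "250 μs"

const ns = new Intl.NumberFormat('en', {
  style: 'unit',
  unit: 'nanosecond',
  unitDisplay: 'long',
});
ns.format(3); // e.g. "3 nanoseconds"
```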
So, I'd like to ask for consensus for this PR (#708). +USA: And we'll move on to Ecma 402. No, do you see my slides? Yep. All right. Hello &, welcome, everyone. I will try to keep this rather short. First of all, I would like to welcome you to the final meeting of the Hebrew year 5780 to and Ethiopian year 2014. It's been fairly calm these last days ever since the last meeting there has been fairly small engagement activity on 402. Except this one normative PR 708 introduces, microsecond and nanosecond to the IsSanctionedSingleUnitIdentifier, that's quite a mouthful table. This PR was sent in by Frank proposed by Andre, but the meat of it is that it adds to the list of units that are supported by number format for unit formatting and the list currently supports to two milliseconds, which is all right? Given that it was written in a Date-aware world and not so much, a Temporal-aware world but now it adds microsecond and nanoseconds. Why is this necessary? Merely useful is because it helps us DRY DurationFormat. So I would explain that a bit further in the DurationFormat topic that I have. But essentially, we're adding support for two units for unit formatting, which would complete the feature set. So, at the very least that would help. This PR has TG2 consensus and it was approved in the recent meeting. So, I'd like to ask for consensus for this PR (#708). -BT: Anyone have concerns about this PR? Or trouble accessing the pr? +BT: Anyone have concerns about this PR? Or trouble accessing the pr? SYG:I have a quick question and I have no concerns about the PR whatsoever, but this is something that CM has repeatedly raised in the past. Like I don't really have the expertise or much care about these PRs. Can we do something about fast-tracking? Like I guess this is already the fast track but still - @@ -205,7 +205,7 @@ PFC: Yeah, that's a good point. JHD: Is there a strong reason to just not to add an accessor and store a bit and an internal slot even if nothing cares about it? -BT: This is straying from clarifying question territory. +BT: This is straying from clarifying question territory. JHD: I can put that on the queue. That's fair. @@ -215,11 +215,11 @@ PFC: I'll go back to the ISO 8601 grammar. There is a removal of some ambiguity PFC: (Slide 9) And then fixing a bug where a `Z` UTC designator was accidentally allowed in a PlainDate string when it was passed as part of a relativeTo option, which it shouldn't shouldn't have been. -PFC: There was another pull request specifying whether annotations could be added after shortened month day and year month syntax, but we found out that we're actually not done discussing that. So that might appear in a future plenary for consensus. +PFC: There was another pull request specifying whether annotations could be added after shortened month day and year month syntax, but we found out that we're actually not done discussing that. So that might appear in a future plenary for consensus. PFC: (Slide 10) The next change is some tweaks to the order of observable operations that you could observe if you're using proxy traps, we've made three methods consistent where two of them did things in one order and one of them did things in another order. This is probably not going to affect anybody's code unless they really want it to. -PFC: (Slide 11) Another PR from Andre from SpiderMonkey. This is some more validations of user code functions that are called in calendar calculations. 
This just adds some more checks for results that are inconsistent across calls if we're adding two dates under a certain circumstance, which I won't get into right now, but you can read the PR if you're really interested. +PFC: (Slide 11) Another PR from Andre from SpiderMonkey. This is some more validations of user code functions that are called in calendar calculations. This just adds some more checks for results that are inconsistent across calls if we're adding two dates under a certain circumstance, which I won't get into right now, but you can read the PR if you're really interested. PFC: (Slide 12) This one is another thing that affects the observable operations, but it's probably not going to affect anybody's code. We're skipping an unnecessary observable HasProperty operation when that is possible. @@ -237,11 +237,11 @@ PFC: Yeah, that does mean the parser has to change. You'll find the details of w BT: Okay, next is Dan. -DE: It's great to hear that you've resolved - that you have this common plan with IETF and I guess this, the result shows with this better, I mean, more extensible model, it shows that it was worth it to have this conversation with them and build consensus, and that you have that consensus now. As far as getting this proposal to a point where it's shippable do we anticipate any further changes? Is it really a question of just waiting for this formality? Do we have any other bugs that are open? +DE: It's great to hear that you've resolved - that you have this common plan with IETF and I guess this, the result shows with this better, I mean, more extensible model, it shows that it was worth it to have this conversation with them and build consensus, and that you have that consensus now. As far as getting this proposal to a point where it's shippable do we anticipate any further changes? Is it really a question of just waiting for this formality? Do we have any other bugs that are open? -PFC: Yeah, there are some. There are several small ones which are sort of on the queue, we need to get around to fixing them. There are are four large ones that I know of that are slowly moving along. So one is an issue from implementers about having - the fact that most methods need to check whether the calendar that's carried by a Temporal object is built in and unmodified or not. In a discussion with Frank from V8 and Yusuke from JavaScriptCore, we had discussed trying to make built-in calendars into frozen intrinsic objects, so that's still under discussion. It's not clear whether we are going to do that or whether we need to do that, but it's slowly moving forward. There's an issue from V8 that's asking us to remove the calendar slot from PlaneTime, we need to investigate if we could do that in a way that would still make it possible to introduce times with calendars in the future while still remaining web compatible. There's an issue where we need to get better integration of Temporal.TimeZone and Temporal.Calendar objects with Ecma-402. That was asked for by TG2. So that's that's open and that's a sizable task. And then there's the concerns about the mathematical values in Temporal duration that I mentioned the beginning of the presentation. So those are the four open issues that I consider substantial. +PFC: Yeah, there are some. There are several small ones which are sort of on the queue, we need to get around to fixing them. There are are four large ones that I know of that are slowly moving along. 
So one is an issue from implementers about having - the fact that most methods need to check whether the calendar that's carried by a Temporal object is built in and unmodified or not. In a discussion with Frank from V8 and Yusuke from JavaScriptCore, we had discussed trying to make built-in calendars into frozen intrinsic objects, so that's still under discussion. It's not clear whether we are going to do that or whether we need to do that, but it's slowly moving forward. There's an issue from V8 that's asking us to remove the calendar slot from PlaneTime, we need to investigate if we could do that in a way that would still make it possible to introduce times with calendars in the future while still remaining web compatible. There's an issue where we need to get better integration of Temporal.TimeZone and Temporal.Calendar objects with Ecma-402. That was asked for by TG2. So that's that's open and that's a sizable task. And then there's the concerns about the mathematical values in Temporal duration that I mentioned the beginning of the presentation. So those are the four open issues that I consider substantial. -BT: Just a time check. There's three minutes left. +BT: Just a time check. There's three minutes left. JHD: It certainly is not the only criterion, but I think it would be strange if we decided it was ready to be shipped in a meeting that it still contained normative changes to the proposal. So I'm hopeful that there will be a meeting where we have none and then we can discuss shipping it. @@ -345,7 +345,7 @@ JRL: The way I've currently implemented this is I'm just using the actual gramma WH: I'm trying to think of whether dedenting might create surprising behavior during cooking. I can't think of any cases at the moment. Things to look for might be something which is not an escape sequence becoming an escape sequence because of dedenting. I can't think of any such cases at the moment. -JRL: I don't think that can happen. I can prove it out afterwards so we're not taking everyone's everyone's time. +JRL: I don't think that can happen. I can prove it out afterwards so we're not taking everyone's everyone's time. WH: Okay, sounds good. @@ -377,7 +377,7 @@ CP: Quick update on ShadowRealms. For those that are not familiar with ShadowRea CP: In terms of SalesForce we have the use cases for all [??]. Some most of the champion, of course, of course, at the time. the use case for us is the integrity protection that we offer by providing a global object for different vendors that are running code in the same app and with such mechanisms, they can do whatever they want with the global objects that are assigned to them. They can have their own polyfills loaded into it, they can have modifications and global variables, and loading libraries and doing what kind of things people are doing on the web these days, they can do it in their own little thing. And that provides some flexibility for them to not collide with the rest of the app and the rest of the code that is running in. That's the use case that we have. But there are many things that you can do with with the shadowRealms, you can have a plug-in system, or you have some sort of library that doesn't go well with the app itself, you can just offload it. Same kind of things that you can with iFrames today just a lot more heavier, more complicated sometimes, so I believe that this feature is a gap that we can fill with it and that kind of a recap on it. -CP: The API is very straightforward. 
It's just a brand new global constructor called ShadowRealm. Doesn't have any options when constructing it. Once you create it, you get two methods import volume and evaluate input value. Value allow you to Evaluate or get an evaluate initialize a module inside the realm and getting access to one of the exported value from that module by specifying the binding name that you want. And `evaluate` is equivalent to eval. It expects a source text and it will evaluate that code inside the realm. And the novelty on this is basically what we call the callable boundary. One of the primary features of the Shadow Realm is that it does not mix the object graph of the realm itself with the incubator realm. What that means is that you cannot get an object reference from inside the ShadowRealm and vice versa. You would not be able to get a reference from the incubator into the ShadowRealm. And the reason for that has been that we considered this after talking to implementers and such we consider that this is problematic or a footgun, it opens the door for identity discontinuity between the different type of options that you can have and it creates a lot more problem that is solved. So, for that reason, the callable boundary is in place to prevent any object reference to be leaked into another realm.

+CP: The API is very straightforward. It's just a brand new global constructor called ShadowRealm, which doesn't take any options when constructing it. Once you create it, you get two methods, `importValue` and `evaluate`. `importValue` allows you to initialize a module inside the realm and get access to one of the exported values from that module by specifying the binding name that you want. And `evaluate` is equivalent to eval: it expects source text and it will evaluate that code inside the realm. The novelty here is what we call the callable boundary. One of the primary features of the ShadowRealm is that it does not mix the object graph of the realm itself with the incubator realm. What that means is that you cannot get an object reference from inside the ShadowRealm, and vice versa: you would not be able to get a reference from the incubator into the ShadowRealm. The reason for that is that, after talking to implementers, we consider that mixing to be problematic, a footgun; it opens the door to identity discontinuity between the different kinds of objects you can have, and it creates more problems than it solves. So, for that reason, the callable boundary is in place to prevent any object reference from being leaked into another realm.

CP: So, at the moment we have multiple companies and people involved in the effort of getting ShadowRealm into browsers and engines in general. Igalia has been working with SalesForce to implement some of this and to complete the spec work that is needed, especially the integration with HTML. In terms of implementation status: it's implemented in Safari 16 (that's what we learned today), in Chrome it is under a flag, and in Firefox it is also under a flag.

@@ -387,9 +387,9 @@ LEO: Just as a detail for the HTML integration, work there is in progress and it

CP: Excellent. Thank you. So the current PR open is the one that is normative, related to the propagation of errors.
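A short usage sketch of the API as just described; the module specifier and export name below are made up for illustration, and only primitives and callables ever cross the boundary:

```js
// Inside a module (or other async context).
const realm = new ShadowRealm();

// evaluate() runs source text inside the realm; the completion value comes
// back as a primitive, or as a wrapped function if it was callable.
const add = realm.evaluate("(a, b) => a + b");
add(2, 3); // 5

// importValue() initializes a module inside the realm and resolves with the
// named export, again wrapped so no object reference leaks across the boundary.
const render = await realm.importValue('./plugin.js', 'render');

// Returning a non-callable object across the boundary throws, by design:
realm.evaluate("({ some: 'object' })"); // TypeError
```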
It came to our attention from Google, from SYG, where they encounter certain situations where it was challenging or difficult to understand what was going on because of the callable boundary restrictions, if the error occur with think the shadow realm instance, the error reported was simply a type error without details and that makes it difficult for the developer to find out that whether that's an error during evaluation of modules or linkage of modules, that no details are all. That makes it difficult to solve. They encountered that during the writing test for the, the new feature. And so we have been looking into how to solve this problem at the moment, or at least during the last discussion in plenary we decide that we could solve these by allowing certain information to cross a callable boundary and to be a specific to be copied, and some details of the of the error can be copied. At the moment we're only copying the message of the error. So that's what we are right now with the pull request, we're copying the error. We are asking for feedback. This is challenging because we have different types of errors, depending of the type of errors you might or might not have a message, the message might be a string value or could be anything defined by the user. So it becomes a little challenging in terms of what kind of information we want to copy and how that copy will happen. In the case of AggregateError, for example, there's not much in terms of the message. You just, have an aggregation of errors, and in this case, you're going to get a TypeError without a message. So you'll be guessing what's going on there. And similarly, there are other informations on the error, like the name and sometimes the the stack information about the error that are useful in some degrees for developers to find out what's going on. We believe in the champion group, we have concluded that most of the information that you see in error is fine to be shared in both in both directions, whether that's finding (?) from or from the realm itself calling into an (?) bound function through a wrapped function. Those seems to be okay to share some information. And so they the current implementation that we have in the pull request, does not distinguish between the direction the of the call. So it could be in either direction, could be when you call in for value or evaluate. well, so basically, that three places in the spec where we control the error and propagate the error by creating a brand new TypeError with the associated realm that will receive that TypeError. we're copying the message at moment, so that's what we are right now. -LEO: For the error details, we are not asking for consensus here at this meeting, but we will bring it to the next plenary as review is still in progress. This is the way that we tend to to ship it, but the PR has ongoing discussions today and this is just a clarification, we are not asking for consensus. right now is like there are things to be addressed but this is the way we tend to ship it. For the other case is just so the release of Safari 16, the got me out of surprise, it was not expected, I just want is when reaffirm when we moved Shadow Realms to stage three was that like it was pending the HTML integration and it was agreed that we wouldn't have like a final version of Shadow Realms before the HTML integration. 
Just to make clear, I believe the HTML integration is no relationship to the release of Safari 16 today, we are still working and we still have the compromise to get it complete because our goal is to have ShadowRealms correct and complete and learning in all browsers correctly as design. And there is there's one core spec that we are working right now, and there is the HTML integration, which we believe we tackled all the parts and we have identified the small details on the remaining parts. So w're going to continue our work with Igalia as well to continue the implementation, and we're going to have this implementation webkit we intended to have it complete. And that's how we want to report when we come back to request eventually request for stage four. Just wanted to make this clear that we're not going to slack on it because of this current release. +LEO: For the error details, we are not asking for consensus here at this meeting, but we will bring it to the next plenary as review is still in progress. This is the way that we tend to to ship it, but the PR has ongoing discussions today and this is just a clarification, we are not asking for consensus. right now is like there are things to be addressed but this is the way we tend to ship it. For the other case is just so the release of Safari 16, the got me out of surprise, it was not expected, I just want is when reaffirm when we moved Shadow Realms to stage three was that like it was pending the HTML integration and it was agreed that we wouldn't have like a final version of Shadow Realms before the HTML integration. Just to make clear, I believe the HTML integration is no relationship to the release of Safari 16 today, we are still working and we still have the compromise to get it complete because our goal is to have ShadowRealms correct and complete and learning in all browsers correctly as design. And there is there's one core spec that we are working right now, and there is the HTML integration, which we believe we tackled all the parts and we have identified the small details on the remaining parts. So w're going to continue our work with Igalia as well to continue the implementation, and we're going to have this implementation webkit we intended to have it complete. And that's how we want to report when we come back to request eventually request for stage four. Just wanted to make this clear that we're not going to slack on it because of this current release. -CP: Yeah, thanks. But if any of you have any comments, any suggestions, any feedback on the current release please open issues or comment. +CP: Yeah, thanks. But if any of you have any comments, any suggestions, any feedback on the current release please open issues or comment. BT: SYG is on the queue. @@ -445,7 +445,7 @@ MAH: I'd like to clarify the stack problem is not related to an error crossing t LEO: Think again the CPI. This should be natural. I am sorry. This should this should be controlled by the membrane system that is being used on top, but sorry, true. You think that's nice but getting at -DE: That's not what MAH was getting at. +DE: That's not what MAH was getting at. SYG: I think LEO's answer didn't describe what Matthew was getting at because it doesn't, that's not a case where it crosses the boundary. @@ -524,13 +524,13 @@ JHD: Yeah, be clear. I have no prescription about how a brand check is done. how ACE:Okay, that's it. 
Well, I'm imagining is if if this was solved separately, we might end with a more elegant solution, rather than forcing something into this proposal. For example, hypothetically, if the pattern matching proposal advanced and as part of exploring pattern matching we might have a consistent protocol for checking slots. That would kind of that would then solve this in perhaps Perhaps more are quite elegant way than if we just force record for effort into this. So that's why we're saying, maybe it's okay in our opinion, if this proposal went forwards without the check, but we're not saying there should not be a check. It's just we can't see a way of adding a check on this proposal on its own. Like it should be explored. More holistically in a way that fits in with the rest of the language. Because right now, they're kind of isn't, it doesn't seem to be anywhere for it to fit in the current kind of API that we have because the API is so small to keep a proposal. so small. -JHD: I understand that but I do not think that there can be a window where it shifts without this robust detection mechanism because there are lots of code out there that has to support old. You know, not the latest version of an engine, a browser or node. And to just say that till we get around to shipping, that nobody can check these things, I don't think that's viable. +JHD: I understand that but I do not think that there can be a window where it shifts without this robust detection mechanism because there are lots of code out there that has to support old. You know, not the latest version of an engine, a browser or node. And to just say that till we get around to shipping, that nobody can check these things, I don't think that's viable. RRD: Can I ask you a question Jordan? It feels like you're stating invariant here which we're unsure that we agree that the use case you presented corroborates with. JHD: Right, I have, and that's it. That's what Mark was mentioning about writing down and variance. We have hundreds of invariants that are not written down And the committee has not necessarily provide a consensus for them and pending that, that effort, it falls to delegates to maintain those invariants. And this is an invariant I'm going to maintain and I hope to get I hope that there will be consensus for it, but nonetheless, I do not wish to see anything advance that breaks this invariant. I find it a very - as SYG mentioned in chat, maybe it's not an invariant, if it's not intentional, but there are a lot of properties that have not yet been broken and I want I wish to retain those properties from broken or prevent them from being broken. -RRD: I guess to a greater extent is maybe we should do a temperature. Check the chairs don't find about this because this is kind of important. We can get to the queue first while we're on the topic. Yeah, let go to the queue and then if you can prepare a temperature check about how people feel about the this invariance and how strongly the community things that you should or should not (?) +RRD: I guess to a greater extent is maybe we should do a temperature. Check the chairs don't find about this because this is kind of important. We can get to the queue first while we're on the topic. Yeah, let go to the queue and then if you can prepare a temperature check about how people feel about the this invariance and how strongly the community things that you should or should not (?) KG: Yeah, this is sort of a way to avoid this proposal. How important is it to have these wrappers? 
As I understand it, there's like two places this comes up. There is, if you pass a record to the object constructor, you expect to get an object out, and there is, if you invoke i.e. dot call a sloppy mode function with one these things as the receiver, then you would get a wrapper as the `this` value inside of the sloppy function. And for the first case, you could in principle just make a copy of the thing, just like an object or an array. And for the second case, it's only observable that you have this wrapper in a sloppy mode function, and you just shouldn't have a sloppy mode function, so I, least would be okay with just saying that you don't get a wrapper and now, the this value for sloppy mode functions is even more confused because there is an inconsistency about whether primitives are boxed, but it's only observable if you are writing sloppy mode functions which you shouldn't be any way. So I don't care about this particular inconsistency. And if we did those two things, there would no longer be wrappers and we could just not care. And I like that anyway because these wrapper objects are a part of this proposal which is not actually necessary for anything as far as I can tell. So if it is possible to eliminate these, I would be happy. @@ -598,16 +598,16 @@ ACE: yes how we get is that the difference is its unobservable so you can hook o WH: Still not clear to me whether you want to distinguish records from record wrappers? -RRD: Yes, that's that's what we were discussing earlier with Jordan, actually. So we are, So do I use case for debugging presented, which is a partial use case. If we had some concrete code that can help us to understand better, the issue but Jordan has stated, there is an invariant that we need some kind of rubber stand four flexible. Check that something is a record wrapper or not. And so our question and the temperature that we would like to have is two other people in this committee feel strongly about this invariant or not. +RRD: Yes, that's that's what we were discussing earlier with Jordan, actually. So we are, So do I use case for debugging presented, which is a partial use case. If we had some concrete code that can help us to understand better, the issue but Jordan has stated, there is an invariant that we need some kind of rubber stand four flexible. Check that something is a record wrapper or not. And so our question and the temperature that we would like to have is two other people in this committee feel strongly about this invariant or not. -DE: Specifically the invariant that you should be able to definitively unfortunately, check whether a given object isn't given wrapper or has it given internals lat so we use a framing for temperature Champion to framing which says we really strongly agree or are unconvinced. So when I proposed it strongly agree is, you know, there was there is a rationale presented here that we don't have a brand check and so strong. Green would be. Yeah, that's fine. Or and then unconvinced would be I think something needs to change long lines with Jordan is saying. So could we do such a temperature check, it's still ambiguous. +DE: Specifically the invariant that you should be able to definitively unfortunately, check whether a given object isn't given wrapper or has it given internals lat so we use a framing for temperature Champion to framing which says we really strongly agree or are unconvinced. 
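For reference, the two wrapper-producing cases KG describes above look roughly like this under the proposal's semantics as presented (the literal syntax and typeof results are from the proposal; the wrapper behavior is exactly what is being debated, not settled):

```js
// Proposal syntax: record and tuple literals are primitives with their own typeof.
const rec = #{ x: 1 };
const tup = #[1, 2, 3];
typeof rec; // "record"
typeof tup; // "tuple"

// Case 1: passing a record to the Object constructor yields a wrapper object
// rather than the primitive.
const wrapped = Object(rec);
typeof wrapped; // "object"

// Case 2: a sloppy-mode function invoked with a record as its receiver would
// see such a wrapper as `this`. Without a Record.isRecord-style brand check,
// a wrapper is hard to distinguish robustly from an ordinary frozen object,
// which is the invariant JHD is arguing must hold.
```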
So when I proposed it strongly agree is, you know, there was there is a rationale presented here that we don't have a brand check and so strong. Green would be. Yeah, that's fine. Or and then unconvinced would be I think something needs to change long lines with Jordan is saying. So could we do such a temperature check, it's still ambiguous. RRD: Okay, let's let's do the opposite Then let's I don't think the opposite is less ambiguous. okay, then I'll let you know. DE: well, the mark could you looking at all the other object prototype to shrink things. WH: How do you do a branch check for an `Arguments` object? Or a `Date`? -DE: so I don't worry, he you feel like, it sounds like you're unconvinced Jordan's arguments. +DE: so I don't worry, he you feel like, it sounds like you're unconvinced Jordan's arguments. WH: DE, no, that’s not what I am saying. At the moment I’m trying to understand the context. @@ -617,19 +617,18 @@ WH: Okay, now I understand the context of what you're doing. BT: So we are at time on this agenda item. Can I propose that the champion group work to make some very clear temperature, check questions and then we can come back. -RRD: We have a temperature right now and we would like to also request an extension if that's possible because we haven't finished the slides yet though. This is very important to figure out. So we want to finish with the temperature check and then move on with the rest of the presentation if possible. To first, get an extension for Two minutes. +RRD: We have a temperature right now and we would like to also request an extension if that's possible because we haven't finished the slides yet though. This is very important to figure out. So we want to finish with the temperature check and then move on with the rest of the presentation if possible. To first, get an extension for Two minutes. BT: Sure thing, thank you. RRD: So temperature. check that we actually have, Is that strongly agree? Would be good. Would be a go ahead. The Proposal is fine without even without the possibility to have that sorry even without the possibility to have this check or and if indeed, we need to have a solution for that check, which is either providing brand shaking in the proposal or that is actually removing wrappers and some way. And to be clear, we're going to also explore removing wrappers as well. whatever you choose. RRD: -agree: fine to go ahead without brand check -unconvinced: something is needed to address brand check +agree: fine to go ahead without brand check unconvinced: something is needed to address brand check - Here +Here -BT: You have the results since yes. +BT: You have the results since yes. RRD: Yes, let’s try to get a screenshot of the results. @@ -641,9 +640,9 @@ RBU: I have them. - Indifferent: 2 - Unconvinced: 1 -RRD: Thank you. Okay, so we wanted to go through the TypeScript integration as well. So we did a proposition for the TypeScript team, It's still up to the same (?) you too. Yeah, See what they want to do with record and tuple. But the way we have been playing with the idea of integrating into the type system is too. Take the type data structures. Sorry types, using Record and notice the small R and Tuple, and you do a union the type that you want that thing to be so you have to follow a matching, a record can go into record and readonly type. But this object, we don't fit into it, and we would be adding another layer which is additional scripts in tax, that would let us use the ??? 
syntax to define the type so, this is very basic syntax, but it has consequences because we also want to have existing types match to records and tuples. So, specifically here you have a few interface and we want I mean that's something that ruler like to do is to have matching happen on. so if I'm passing a record to that function, that function should be able to accept my record as it can today accept objects. But there is a catch that: function could be changing the object internally. So if instead of an object we have a record who would a type error today TypeScript is not strict on the way it applies read only. So that means that it would typecheck okay. But it would have a runtime error. So eventually we would be interested in seeing if it should be possible to have another compare option which is street cred only. That being said, this is up to the TypeScript team to decide this. and there is another problem that was brought up by Daniel Rossenwaser from. In this example, this function takes a union of type that is opts that has two keys volumen and is also Union of number and string. So opts, in this model would accept to receive records, right? but if it is a record, the type of opts becomes record and it's not an object anymore. So that means that you couldn't go into the first Branch. It could go into the second Branch, that only accepts numbers and strings. This is a problem. That means that we need to take a decision on the control flow analysis. Do we error? Because then the else branch is could also receive records or do we let it through and assume that record is, he's kind of going to be going into the first branch even if it's not going to a ???time. So erroring here entitled to could be a breaking change because that means that as soon as we introduce records into detached replay system, then this type check could start failing, but this, we do it really does the runtime error and otherwise it could choose to not fail here and there is a precedent for that which is if opts is a function today, it would go also in the second Branch but let's TS is not checking for checking for that right now. So I will leave it to Ashley again. +RRD: Thank you. Okay, so we wanted to go through the TypeScript integration as well. So we did a proposition for the TypeScript team, It's still up to the same (?) you too. Yeah, See what they want to do with record and tuple. But the way we have been playing with the idea of integrating into the type system is too. Take the type data structures. Sorry types, using Record and notice the small R and Tuple, and you do a union the type that you want that thing to be so you have to follow a matching, a record can go into record and readonly type. But this object, we don't fit into it, and we would be adding another layer which is additional scripts in tax, that would let us use the ??? syntax to define the type so, this is very basic syntax, but it has consequences because we also want to have existing types match to records and tuples. So, specifically here you have a few interface and we want I mean that's something that ruler like to do is to have matching happen on. so if I'm passing a record to that function, that function should be able to accept my record as it can today accept objects. But there is a catch that: function could be changing the object internally. So if instead of an object we have a record who would a type error today TypeScript is not strict on the way it applies read only. So that means that it would typecheck okay. 
But it would have a runtime error. So eventually we would be interested in seeing if it should be possible to have another compare option which is street cred only. That being said, this is up to the TypeScript team to decide this. and there is another problem that was brought up by Daniel Rossenwaser from. In this example, this function takes a union of type that is opts that has two keys volumen and is also Union of number and string. So opts, in this model would accept to receive records, right? but if it is a record, the type of opts becomes record and it's not an object anymore. So that means that you couldn't go into the first Branch. It could go into the second Branch, that only accepts numbers and strings. This is a problem. That means that we need to take a decision on the control flow analysis. Do we error? Because then the else branch is could also receive records or do we let it through and assume that record is, he's kind of going to be going into the first branch even if it's not going to a ???time. So erroring here entitled to could be a breaking change because that means that as soon as we introduce records into detached replay system, then this type check could start failing, but this, we do it really does the runtime error and otherwise it could choose to not fail here and there is a precedent for that which is if opts is a function today, it would go also in the second Branch but let's TS is not checking for checking for that right now. So I will leave it to Ashley again. -ACE: Yeah, so we were really pleased that Daniel raised this to us because it actually hadn't occurred to us, exactly. That kind of narrowing behavior and realized we could be in a situation where our own proposal - at Bloomberg we use TypeScript a lot of thats that's that's no secret and didn't realize we might be in a situation where our proposal may then actually be ending up leading to change in typescripts. That would cause lots of breaking changes in our own code. And wouldn’t that be funny. So what we did was just try and get a sense of how big an impact this could have right now just on our own code base. So we created a patched version of TypeScript with this particular change implemented and run that across seven hundred of our internal TypeScript projects. So that's about 3 million lines of code to like you know a decent amount of TypeScript code and it only came back with 12 errors in the code, and the code there errors are in is exactly the type of thing that Daniel raised. Where there's an if else branch and the else branch is implicitly. Something typeof, doesn't equal object and they're assuming it's a string or a number. So all of those places, there's only 12 errors in each of those places would be very simple fixes. It just like a small change to the code and no then work. So that, you know, that we found that quite promising considering other TypeScript upgrades we've done recently have had a lot more errors and places to fi. That it doesn't prove anything about what should happen, but it suggests that potentially there's breaking changes aren't as big as it initially first sounded. But yeah, we were hoping other people will also do similar tests on their code bases because maybe it just it just so happens that the Bloomberg code base. We checked doesn't follow this pattern very much so this is still ongoing but the initial results were positive which we were happy about. 
+ACE: Yeah, so we were really pleased that Daniel raised this to us because it actually hadn't occurred to us, exactly. That kind of narrowing behavior and realized we could be in a situation where our own proposal - at Bloomberg we use TypeScript a lot of thats that's that's no secret and didn't realize we might be in a situation where our proposal may then actually be ending up leading to change in typescripts. That would cause lots of breaking changes in our own code. And wouldn’t that be funny. So what we did was just try and get a sense of how big an impact this could have right now just on our own code base. So we created a patched version of TypeScript with this particular change implemented and run that across seven hundred of our internal TypeScript projects. So that's about 3 million lines of code to like you know a decent amount of TypeScript code and it only came back with 12 errors in the code, and the code there errors are in is exactly the type of thing that Daniel raised. Where there's an if else branch and the else branch is implicitly. Something typeof, doesn't equal object and they're assuming it's a string or a number. So all of those places, there's only 12 errors in each of those places would be very simple fixes. It just like a small change to the code and no then work. So that, you know, that we found that quite promising considering other TypeScript upgrades we've done recently have had a lot more errors and places to fi. That it doesn't prove anything about what should happen, but it suggests that potentially there's breaking changes aren't as big as it initially first sounded. But yeah, we were hoping other people will also do similar tests on their code bases because maybe it just it just so happens that the Bloomberg code base. We checked doesn't follow this pattern very much so this is still ongoing but the initial results were positive which we were happy about. RRD: I guess we have a question from Waldemar. @@ -661,7 +660,7 @@ RBU: That is true that there is a problem. but at the same time that's not true WH: Yeah, I understand the desire. I'm not familiar enough with the state of TypeScript constant type declarations to be able to comment much on it. -RRD:I think, I think the main takeaway here, if I can try to the main thing is that for trying to steal be aligned with some trade-offs that TypeScript already made so, so, for example, here, could also pass a frozen object . was to take record it, wouldn't mind? Passing in Frozen objects, that function, and it would still fail at runtime. So we're we are aware that it's going to cause some problems, but also TypeScript early on also chose to keep compatibility and and let the TypeScript runtime fail because it was just better for adoption in existing JavaScript projects. So again, this is an ongoing discussion. none of those decisions are taken, maybe we'll go more strict and maybe we'll fence the structures more in the future, but right now, we're exploring that solution And so, your point is noted. +RRD:I think, I think the main takeaway here, if I can try to the main thing is that for trying to steal be aligned with some trade-offs that TypeScript already made so, so, for example, here, could also pass a frozen object . was to take record it, wouldn't mind? Passing in Frozen objects, that function, and it would still fail at runtime. 
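A minimal sketch of the existing trade-off being described here — readonly-ness is not enforced structurally in TypeScript today, so a frozen object type-checks but fails at runtime (the names are illustrative, not taken from the slides):

```ts
interface Opts {
  volume: number;
}

// Mutates its argument.
function clamp(opts: Opts): void {
  opts.volume = Math.min(opts.volume, 10);
}

const frozen = Object.freeze({ volume: 99 });

// Type-checks today, because Readonly<Opts> is still assignable to Opts,
// but throws a TypeError at runtime in strict-mode code since the
// property is non-writable on the frozen object.
clamp(frozen);
```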
So we are aware that it's going to cause some problems, but TypeScript early on also chose to keep compatibility and let the TypeScript runtime fail because it was just better for adoption in existing JavaScript projects. So again, this is an ongoing discussion. None of those decisions have been made yet; maybe we'll go more strict and maybe we'll fence the structures more in the future, but right now we're exploring that solution. And so, your point is noted.

WH: Yeah, I think it's great to explore these issues. I don't see either of these issues as being a showstopper. Because I can come up with other examples, such as what you mentioned, neither is new to TypeScript.

@@ -669,7 +668,7 @@ RRD: That being said, yeah, we're trying to be coherent here. And so, likewise t

WH: Is this applied recursively? There are APIs which take objects which have other objects as fields, which have other objects as fields. And thus can you flatten those all into records of records of records and submit them to some WebIDL API?

-RRD: I have to check, but I believe so and make sure to check certainly forever 
+RRD: I have to check, but I believe so and make sure to check certainly forever

RBU: I'd I'd yell competent type programmer could do in touch with yes.

diff --git a/meetings/2022-09/sep-14.md b/meetings/2022-09/sep-14.md index 6f236a1c..547fa48c 100644 --- a/meetings/2022-09/sep-14.md +++ b/meetings/2022-09/sep-14.md @@ -27,9 +27,9 @@

Presenter: Axel Rauschmayer (ARR)

- [proposal](https://github.com/rauschma/iterable)
- [slides](https://speakerdeck.com/rauschma/iteration-helper-functions)

-ARR: It's an honor to present at/for TC39 And my presentation is about switching, iterator helpers to functions, maybe. Let's get started by looking at the status quo. The current iteration API is quite minimalistic and relatively elegant. So you have you start with an iterable, that is a factory for iterators, and then each iterator is a factory for values. And that API is almost functional in how it works. So, relatively small. The other status quo is that current iteration is based on mechanisms that all operate on iterables, not iterators. 
So examples for these that are built into the language are, are a Array.from for-all, destructuring [??]. The key point or Insight that I had when I look at these mechanisms is that programmers normally see iterators when they work with iterables. Another status quo that is interesting to look at is how current libraries work. So what our current JavaScript developers are used to and popular libraries that operate on these data structures include lodash, immutableJS and Ramda. Lodash is a collection of functions. It also has wrapper API, that's a bit like jQuery in how it wraps, then there's ImmutableJS, that is OOP with a comparatively deep inheritance hierarchy and it uses iterables often and supports iteration, but that's a completely new API. And then that's Ramda, and they're a lot of functions there. -ARR: When it comes to handling various values, it's interesting to compare two Styles. So, one hand, iterative methods and on the other hand functions on iterables. If you work with iterative methods you and you get a value. You first have to figure out. is the new API supported and if it is, you get an iterator. and here are a few examples of what that looks like. If you have an array, you get an iterator by `.values()`. And after that, you can apply the operation. If you have a map, you often want `Map.entries()` to get an iterator`'map.keys()` and `values()` works too. And once you do that, you can apply the operation `.drop()`. With the string, there is no method or let's say no method that has a string key. There's only a method that has a simple key and you have to invoke that one before you can access the API and invoke `.drop()`, which is `keys`, we're already fine. We can immediately invoke the new API. If on the other hand, the new value does not support the new API, We have to use the iterative from and then we get wrapping API like `Iterator.from`. Then we can drop. And if we check the same thing with functions on iterables, then things become simpler because we simply apply function `drop()` to whatever iterable value we have. And so if we have an array we can apply `drop()` directly, or `map(string.keys())` which is an iterable iterator and, and iterator will spit just plain any value as well. We always apply the function and we're fine. So next, let's look at what happens when we have one operand, but there are also a few cases when we have more than one main operand and then if we look at functions on iterables we have more than one of those.. If you have more than one main operand, that's very easy to use with functions because it's just one more argument or one more parameter with iterative methods. It's not completely clear. So we have two options, the additional operant could be like the first option that I'm showing here. Is to use an iterator or to just use the iterable trouble and well, it depends on which of these options you prefer. +ARR: When it comes to handling various values, it's interesting to compare two Styles. So, one hand, iterative methods and on the other hand functions on iterables. If you work with iterative methods you and you get a value. You first have to figure out. is the new API supported and if it is, you get an iterator. and here are a few examples of what that looks like. If you have an array, you get an iterator by `.values()`. And after that, you can apply the operation. If you have a map, you often want `Map.entries()` to get an iterator`'map.keys()` and `values()` works too. And once you do that, you can apply the operation `.drop()`. 
With the string, there is no method or let's say no method that has a string key. There's only a method that has a simple key and you have to invoke that one before you can access the API and invoke `.drop()`, which is `keys`, we're already fine. We can immediately invoke the new API. If on the other hand, the new value does not support the new API, We have to use the iterative from and then we get wrapping API like `Iterator.from`. Then we can drop. And if we check the same thing with functions on iterables, then things become simpler because we simply apply function `drop()` to whatever iterable value we have. And so if we have an array we can apply `drop()` directly, or `map(string.keys())` which is an iterable iterator and, and iterator will spit just plain any value as well. We always apply the function and we're fine. So next, let's look at what happens when we have one operand, but there are also a few cases when we have more than one main operand and then if we look at functions on iterables we have more than one of those.. If you have more than one main operand, that's very easy to use with functions because it's just one more argument or one more parameter with iterative methods. It's not completely clear. So we have two options, the additional operant could be like the first option that I'm showing here. Is to use an iterator or to just use the iterable trouble and well, it depends on which of these options you prefer. ARR: Next, slide, how important is chaining? So that has been, I've heard that as a key requirement or something that people like when it comes to iterator methods. So I wanted to take a comparative look at that. so, on the left-hand side, we have iterator methods that are chained on the right-hand side have functions that are applied to iterables. And what I tend to do is just name the steps. So each each of the steps starting with set and then filtered and then map, and then at the then at the end, the result, I named each of these steps and that's ok with me, I don't mind it but obviously some people do mind. And another pattern that I've seen that I've not used personally the single variable pattern, where single variable is used to get something that's a tiny bit like chaining. And one thing that JavaScript eventually may or may not get, is the pipe operator and should we ever get that one then we'd have a really nice combination with functions and the pipe operator because we get all of the upsides of iterator methods but none of its downsides. @@ -57,9 +57,9 @@ GCL: Yeah, but I think you immediately run into a problem there with regards. I' ARR: Are you arguing in favor of functions are against them? -GCL: in favor I guess. Cuz that's just sort of like what people do already. And then I think a big sticking point here is pipeline for my like, requirements with what we're building out for iterators here, I think these functions are only acceptable if we have pipeline. So I consider that to be a required dependency of any proposal that would be working with this function form, and since pipeline is currently…, I think, I think if pipeline existed I would not be wholly against us, I would just still prefer the iterator helpers proposal. As I mean, that's the one I wrote. I think I kind of have a bias towards that but I think, yes, I think pipeline is sort of a requirement here, all right? And then finally, I think we already sort of discussed this over the last few years in the iterator helper proposal. 
We started their sort of, you know, biased towards the idea of using prototype methods but we did discuss functional approach because you know, we started at stage one with the problem space and we looked at, know, multiple solutions. And I feel kind of like we should, you know, try to continue building upon that sort previous consensus where possible. That's it. +GCL: in favor I guess. Cuz that's just sort of like what people do already. And then I think a big sticking point here is pipeline for my like, requirements with what we're building out for iterators here, I think these functions are only acceptable if we have pipeline. So I consider that to be a required dependency of any proposal that would be working with this function form, and since pipeline is currently…, I think, I think if pipeline existed I would not be wholly against us, I would just still prefer the iterator helpers proposal. As I mean, that's the one I wrote. I think I kind of have a bias towards that but I think, yes, I think pipeline is sort of a requirement here, all right? And then finally, I think we already sort of discussed this over the last few years in the iterator helper proposal. We started their sort of, you know, biased towards the idea of using prototype methods but we did discuss functional approach because you know, we started at stage one with the problem space and we looked at, know, multiple solutions. And I feel kind of like we should, you know, try to continue building upon that sort previous consensus where possible. That's it. -ARR: All Alright, okay. Alright. That was lot. Yeah, sure. +ARR: All Alright, okay. Alright. That was lot. Yeah, sure. SYG: I want to make sure I understood the what GCL replied earlier to ARR. I understood GCL to be arguing against the iterable helpers proposal or change the actual proposed. And in favor of the iterator helpers status quo, proposal, am I correct in my understanding? @@ -71,11 +71,11 @@ ARR: Well, I've considered I would say iterable functions and iterator methods GCL: just clarify, I am over all in favor of the current iterator helpers proposal approach. Yeah. Okay. You might have gotten confused in all of the terminology there. -ARR: So when I said that current mechanisms operate on iterables, what I meant is shown here on the slide, is I didn't look at and how generators work because the trick they're using is so that they can be used to implement both iterables and iterators. I looked at whatdo the current built-in mechanisms do and they always consume iterables. So whether is `Array.from` whether it is for-of, whether it is array destructuring, whether it is Promise.all whether it is… all of these consumed iterables. and we, and when it comes to, kind of our functions except iterables without a pipe operator. Again that's definitely a matter of taste. But for me, it would be okay even without that as, you see on this slide where you could either name each step, or have this single variable pattern because when I look at my code and pay a little bit more attention during the recent months, I do not chain very often. So very often it is just one operation, one little helper that I apply, I do not change very often but again, that may be different for other people. So and when It comes to consensus my impression again, which is also biased in during the discussion themselves, There were several people who did not agree. So it was not unanimous. 
So the opinion was not unanimous and I'm but I am very well aware that what I'm saying is last minute and that the people who are working on iterator methods, that they've already invested a lot of work in that. So I'm very aware of that, but I did at least one to raise that point one last time and then be silent forever. Another thing to keep in mind, is that the work that has already been done for the iterator helpers was also the foundation for my implementation, and that would also be very much the foundation for a function based API for iterables should that ever happen. So the, the work that has already been done would not be lost it would be an important foundation for a function-based API. All right. +ARR: So when I said that current mechanisms operate on iterables, what I meant is shown here on the slide, is I didn't look at and how generators work because the trick they're using is so that they can be used to implement both iterables and iterators. I looked at whatdo the current built-in mechanisms do and they always consume iterables. So whether is `Array.from` whether it is for-of, whether it is array destructuring, whether it is Promise.all whether it is… all of these consumed iterables. and we, and when it comes to, kind of our functions except iterables without a pipe operator. Again that's definitely a matter of taste. But for me, it would be okay even without that as, you see on this slide where you could either name each step, or have this single variable pattern because when I look at my code and pay a little bit more attention during the recent months, I do not chain very often. So very often it is just one operation, one little helper that I apply, I do not change very often but again, that may be different for other people. So and when It comes to consensus my impression again, which is also biased in during the discussion themselves, There were several people who did not agree. So it was not unanimous. So the opinion was not unanimous and I'm but I am very well aware that what I'm saying is last minute and that the people who are working on iterator methods, that they've already invested a lot of work in that. So I'm very aware of that, but I did at least one to raise that point one last time and then be silent forever. Another thing to keep in mind, is that the work that has already been done for the iterator helpers was also the foundation for my implementation, and that would also be very much the foundation for a function based API for iterables should that ever happen. So the, the work that has already been done would not be lost it would be an important foundation for a function-based API. All right. JRL: (on tcq) +1 with non pipeline, this is a big step backwards. -JHD: Yeah. So all of the concerns of questions about training. I mean, the single variable pattern I think is, I would be, I think, I would expect that to be much more reliable than the naming, the steps pattern, but I think the chaining is pretty important. And that's one of the big motivations for pipeline. And as has been mentioned, if pipeline exists, then that sort of covers two things. One is, it means that a standalone function approach becomes ergonomic and it can be chaine, but now the concern about mixing styles is now irrelevant. It's totally fine to have a dot chain and then a pipeline. And then another dot chain, and so on. and so I don't think that the argument concerned about style mixing. I don't think that that certain applies as long as pipeline ends up landing at some point. 
Then, you had a slide that had three choices. One of them was methods. One was Standalone functions. And one was wrapping and if I recall. And so the current proposal is methods and wrapping it's just that all the built-in iterators are already wrapped, like their pre wraps all right, and you call an Iterator.from as the dollar sign function that you've experienced expect from jQuery, like you just Just if you don't want care, you just throw everything into that. +JHD: Yeah. So all of the concerns of questions about training. I mean, the single variable pattern I think is, I would be, I think, I would expect that to be much more reliable than the naming, the steps pattern, but I think the chaining is pretty important. And that's one of the big motivations for pipeline. And as has been mentioned, if pipeline exists, then that sort of covers two things. One is, it means that a standalone function approach becomes ergonomic and it can be chaine, but now the concern about mixing styles is now irrelevant. It's totally fine to have a dot chain and then a pipeline. And then another dot chain, and so on. and so I don't think that the argument concerned about style mixing. I don't think that that certain applies as long as pipeline ends up landing at some point. Then, you had a slide that had three choices. One of them was methods. One was Standalone functions. And one was wrapping and if I recall. And so the current proposal is methods and wrapping it's just that all the built-in iterators are already wrapped, like their pre wraps all right, and you call an Iterator.from as the dollar sign function that you've experienced expect from jQuery, like you just Just if you don't want care, you just throw everything into that. ARR: okay, that makes sense. @@ -107,11 +107,11 @@ ARR: Well, that depends on the iterable. It works like for-of. I mean, if you us KG: Those are different things. If I for-of over a set, if I have `x = new Set`, I can for-of over that multiple times. Is the proposal that if I did get an Iterable.map and passed it some mapping function and then that Set, the resulting thing - could I for-of over that value more than once and see the same set of things? -ARR: In general. No. Okay, this would be an iterable, it takes a set and a fisa mapping function and gives you a single slot shot, result, these operations I've, I've posted a link to the implementations at the end and most of them use generators. So whatever you implement yourself. Like if you were to implement filter yourself with a generator, then that's how these helpers work. +ARR: In general. No. Okay, this would be an iterable, it takes a set and a fisa mapping function and gives you a single slot shot, result, these operations I've, I've posted a link to the implementations at the end and most of them use generators. So whatever you implement yourself. Like if you were to implement filter yourself with a generator, then that's how these helpers work. KG: Okay, it seems very strange to me to have an Iterable.map function that gives you a thing that is only Iterable once but I have nothing further to say about that and I'll cede to the queue. -WH: Agree with KG’s last point — the semantics of reusability would be rather weird if applied to iterables. My question is more about the namespaces. Here you present global functions like `map` and `filter`, and those names are likely to clash with stuff, whereas the iterator helpers are in the Iterator namespace, where presumably they're less likely to clash. 
How likely is the ecosystem to converge on every iterator inheriting from proper Iterator prototypes so that users don't need to worry in practice about those iterator helpers being missing? +WH: Agree with KG’s last point — the semantics of reusability would be rather weird if applied to iterables. My question is more about the namespaces. Here you present global functions like `map` and `filter`, and those names are likely to clash with stuff, whereas the iterator helpers are in the Iterator namespace, where presumably they're less likely to clash. How likely is the ecosystem to converge on every iterator inheriting from proper Iterator prototypes so that users don't need to worry in practice about those iterator helpers being missing? ARR: Are you pointing out an issue with the iterator methods or with function-based helpers? @@ -143,13 +143,13 @@ ARR: at the very least, I would only want to have one style in the, in the stand SYG: That's, that's, that's for sure. -ARR: I'm saying that suppose the ecosystem… there won't be a common common opinion in the ecosystem because you have the very functional, very FP people and the OOP people and everyone is going to do different things. +ARR: I'm saying that suppose the ecosystem… there won't be a common common opinion in the ecosystem because you have the very functional, very FP people and the OOP people and everyone is going to do different things. SYG: So, my concrete question is what do you think the downside is if the thing that's in the standard library is the methods the prototype, and then there are user land libraries that do the functional style. ARR: well, you'd still have a you'd still have a mix of styles -SYG: So what that's that's like unavoidable, right? Like the ecosystem will have some stuff that's more. Functional oriented have some more stuff that's all OOP oriented. +SYG: So what that's that's like unavoidable, right? Like the ecosystem will have some stuff that's more. Functional oriented have some more stuff that's all OOP oriented. ARR: Okay. Now it comes to my taste and I prefer functions and the built-in stuff is what I'm going to work with much more often. So according to my preference, I prefer to have functions in the, in the standard Library. @@ -157,7 +157,7 @@ SYG: So the downside. Okay, so the downside, please don't read any criticism any ARR: Well, I would argue. It's not just my personal taste. I think these are better for everyone but again, well let's what we arguing about. that's the Crux of what the discussion. Since we are having at the moment. So I think those would be a better choice but obviously a lot of people disagree. -BT: Sorry sorry, we are over time box. If there are no objections, there's ten minutes of extra slack in the schedule today. And if there are no objections, I'd like to use that for this agenda item. Great. We'll take this to the hour then. SYG, was that all from your discussion item? +BT: Sorry sorry, we are over time box. If there are no objections, there's ten minutes of extra slack in the schedule today. And if there are no objections, I'd like to use that for this agenda item. Great. We'll take this to the hour then. SYG, was that all from your discussion item? SYG: Yes, thank you. @@ -165,8 +165,7 @@ ARR: Thank you. Thank you. Thank you. All right, if that topic is over then we have a hack, some acute mixed. 
-JHX: Yeah, I want to show my support on this topic and I think that they're always have many arguments on like whether it should be iterator helpers us all it will help us or water style whether it should be a prototype methods or standalone functions. And I think that this we are already have the proposal near stages three. I think the arguments never stopped. So I think, I'm not sure what's the best approach, but I think it just should be revisitsed. So, everyone can be confident that we are doing the right thing before it goes to stage three. Especially liked the -Personal I do not very like the exposing iterators too much, like this slide explains. And another: this is I think it's the problem of having the methods on iterator that there are another problem. Because in the ecosystem, many people think the iterable are a class. The interface is not class because it encourages back. It's also it's it's iterative income towards the interface, so many people only know if they manually write iterator that they just return objects, which have next() method. But to go with the sort of helps, then it's not real. It's a routine now. So I'm so this I think it may cause some confusion and I also want to mention that. Actually, we have another stage 1 proposal that use wrapper function style, and that proposal also cover many similar spaces, so I support revising to the whole design space. +JHX: Yeah, I want to show my support on this topic and I think that they're always have many arguments on like whether it should be iterator helpers us all it will help us or water style whether it should be a prototype methods or standalone functions. And I think that this we are already have the proposal near stages three. I think the arguments never stopped. So I think, I'm not sure what's the best approach, but I think it just should be revisitsed. So, everyone can be confident that we are doing the right thing before it goes to stage three. Especially liked the Personal I do not very like the exposing iterators too much, like this slide explains. And another: this is I think it's the problem of having the methods on iterator that there are another problem. Because in the ecosystem, many people think the iterable are a class. The interface is not class because it encourages back. It's also it's it's iterative income towards the interface, so many people only know if they manually write iterator that they just return objects, which have next() method. But to go with the sort of helps, then it's not real. It's a routine now. So I'm so this I think it may cause some confusion and I also want to mention that. Actually, we have another stage 1 proposal that use wrapper function style, and that proposal also cover many similar spaces, so I support revising to the whole design space. ARR: Okay. @@ -209,7 +208,7 @@ MF: This final one addresses the concern raised by an Agoric representative. Thi MF: Up to a week or two ago that was all of the open issues. So this would be what the proposal would look like if we merged all of those PRs. The things that are underlined here are things that have been updated since last time. So, you see that we've added toAsync, we've updated the toStringTag in some way. It's either an accessor or it’s writable. And then all of these function parameters have counters now. And then the four properties up here that we chose a kind of arbitrary places to put them, but it's probably appropriate, would be added. So that's what it would look like if we choose to merge all three of those PRs. 
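For illustration only — not final API text — this is roughly the shape being described if the open PRs above were merged, with helper callbacks receiving a counter and a toAsync helper available:

```ts
const result = [10, 20, 30, 40].values()
  .map((value, counter) => value + counter)   // counter would run 0, 1, 2, ...
  .filter((value, counter) => counter < 3)    // every callback receives the counter
  .toArray();
// result would be [10, 21, 32]

// Lift a sync iterator to an async one (assuming the toAsync PR lands).
const asyncIterator = [1, 2, 3].values().toAsync();
```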
-MF: So other noteworthy changes. There is only one noteworthy change that we made since the last presentation to the committee. It's this one fixing an issue that was noticed when iterator.from is passed something that claims to be iterable, it has Symbol.iterator, but the thing it returns is broken; it has a non-callable `next`. So it actually doesn't return to you an iterator. We now fail fast, we fail in Iterator.from. I was a bit mixed on making this change. It was noted that if we do not fail early, if we don't do this, we could make a `take(0)` operation not fail. If the iterator is just never consumed, so it's thrown away, or if you do a `take(0)` on it or something like that, you wouldn't see an error. But most people felt that it's better to just fail early always. That this is likely an error in programming. And I accept that rationale. So I've merged this change. +MF: So other noteworthy changes. There is only one noteworthy change that we made since the last presentation to the committee. It's this one fixing an issue that was noticed when iterator.from is passed something that claims to be iterable, it has Symbol.iterator, but the thing it returns is broken; it has a non-callable `next`. So it actually doesn't return to you an iterator. We now fail fast, we fail in Iterator.from. I was a bit mixed on making this change. It was noted that if we do not fail early, if we don't do this, we could make a `take(0)` operation not fail. If the iterator is just never consumed, so it's thrown away, or if you do a `take(0)` on it or something like that, you wouldn't see an error. But most people felt that it's better to just fail early always. That this is likely an error in programming. And I accept that rationale. So I've merged this change. MF: So now we come to the section of new open questions. So just recently in the last two weeks we’ve received a large influx of feedback, mostly due to the stage two reviews that came in and there are important and major design considerations I'll go over. @@ -293,7 +292,7 @@ JHD: Yeah, so I'm not really a big fan of the counter stuff. I mean, the thing t MF: Yeah, I pretty much agree with that feedback. I think that there are some negatives and some positives to having the counters. I'm fairly neutral on the topic. -JHD: I guess I have the next item as well. Dealing with iterables automatically assimilating an iterable, I guess, that's awesome. Except for the fact that strings are iterable by default, which is a horrific mistake that we've made. There has been a reasonable suggestion to mitigate that, which is to only allow object iterables, given that strings are the only iterable primitive at the moment. That would work, but it raises the question as you mentioned. And what to do with records and tuples, I could see a world where it made sense that records, tuples and objects were allowed, but not any of the other primitives. It's just, like, I think that if we do make that special case, it's weird and if we don't make that special case and allow iterables, it's the worst possible outcome because of strings. And we ran into this with array.flatMap as I recall a little bit. I think we kind of shunted to the side because of the decision to just use you know isArray. But I think that we need to do whatever possible to ensure that there's not this huge footgun that whenever you put a string somewhere that you can put in a number all of a sudden you get a bunch of characters instead. +JHD: I guess I have the next item as well. 
Dealing with iterables automatically assimilating an iterable, I guess, that's awesome. Except for the fact that strings are iterable by default, which is a horrific mistake that we've made. There has been a reasonable suggestion to mitigate that, which is to only allow object iterables, given that strings are the only iterable primitive at the moment. That would work, but it raises the question as you mentioned. And what to do with records and tuples, I could see a world where it made sense that records, tuples and objects were allowed, but not any of the other primitives. It's just, like, I think that if we do make that special case, it's weird and if we don't make that special case and allow iterables, it's the worst possible outcome because of strings. And we ran into this with array.flatMap as I recall a little bit. I think we kind of shunted to the side because of the decision to just use you know isArray. But I think that we need to do whatever possible to ensure that there's not this huge footgun that whenever you put a string somewhere that you can put in a number all of a sudden you get a bunch of characters instead. WH: Speaking of testing for things which are iterable or iterators, in what circumstances does that case arise? In what circumstance do you do different things depending on whether something is iterable or not? @@ -319,7 +318,7 @@ ACE: I guess just the opposite of JHD's thing to make this really useful for you MF: awesome. -ACE: When we talked about this with Bloomberg delegates earlier today or yesterday and a few of us liked the fact that this PR was raised because it seems natural based on the fact that the alpha few places where we have places like.map and filter it also passes index. It feels consistent to include it. This it's a trivial thing to include. Like, I certainly don't expect the consistency of the third argument of having everything because naturally that doesn't make sense due to being impossible. but being able to produce an index is so trivial.. Yeah, I was surprised when I first saw that it wasn't included for a more pragmatic reason of including it. The most common time I get excited and reminded about this proposal existing is when I see codes that has a long list of a chaining, and I can just, I can just see all these like wasted intermedia array is being created, it's just chain and I'm like, I really can't wait for iterator helpers so I can refactor this into something that's potentially more efficient, memory pressure wise. And having that not having that index argument just makes that refactoring that slight bit more work or risky that I now have to check if anyone's just passing a callback directly rather than inline arrow and I also have to check are they hoping to get the index? I quite like it. I think I can see why others don't like it. I can see how the index thing could solve that though. Personally, I preferred just, including it rather than someone having to add.indexed and now destructure the arrays. They're getting back on to the next one, But yeah, I probably won't help because uh, people think the opposite, no one's know, that does happen. +ACE: When we talked about this with Bloomberg delegates earlier today or yesterday and a few of us liked the fact that this PR was raised because it seems natural based on the fact that the alpha few places where we have places like.map and filter it also passes index. It feels consistent to include it. This it's a trivial thing to include. 
Like, I certainly don't expect the consistency of the third argument of having everything because naturally that doesn't make sense due to being impossible. but being able to produce an index is so trivial.. Yeah, I was surprised when I first saw that it wasn't included for a more pragmatic reason of including it. The most common time I get excited and reminded about this proposal existing is when I see codes that has a long list of a chaining, and I can just, I can just see all these like wasted intermedia array is being created, it's just chain and I'm like, I really can't wait for iterator helpers so I can refactor this into something that's potentially more efficient, memory pressure wise. And having that not having that index argument just makes that refactoring that slight bit more work or risky that I now have to check if anyone's just passing a callback directly rather than inline arrow and I also have to check are they hoping to get the index? I quite like it. I think I can see why others don't like it. I can see how the index thing could solve that though. Personally, I preferred just, including it rather than someone having to add.indexed and now destructure the arrays. They're getting back on to the next one, But yeah, I probably won't help because uh, people think the opposite, no one's know, that does happen. RBU: That like intentionally needing to wrap them for them, not having wrappers would exclude them. For some reason. So I think that's relatively uncontroversial, I don't have a strong opinion and off the top of my head as to what the Solution is regarding like string and iterator versus adorable. But what I will say maybe in favor of an exception for Tuple is that we already will have exceptions in this back for two people for things like .concat. So that's already. It's okay. Maybe to ease the pain or the burden a little bit. Just want to put that out there as a reference point assuming you probably would make an exceptional case, very clear to people there in the primitives. If we are excluding all primitives and I think that's Great. Yeah, @@ -345,7 +344,7 @@ KG: So this is an issue which was identified at the time the async generators we WH: I don’t want to spend too much time on this, but one more clarifying question: If an eager iterator helper is hooked to a generator that queues up and waits for .next’s to complete before handling the .return, would anything ever call those .next’s, or would everything just be left hanging? -KG: We would have to decide what happens to the promises returned by proceeding calls to .next. I don't know what the right answer is. +KG: We would have to decide what happens to the promises returned by proceeding calls to .next. I don't know what the right answer is. WH: This sounds like a topic for a bigger discussion, which we don't have time for today. @@ -359,9 +358,9 @@ SYG: Aside from the on the current topic of this return jumping the queue async MF: It's not a lack of compelling use cases, it's just not part of the more limited set of use cases that only treat this as a pipeline of data transformations. If we conceptually want this proposal to only support that limited set of use cases, we don't really need to be concerned about this. -SYG: I see. Okay. Given that we haven't heard strong support, not even strong given that we haven't really heard support in this session. For this use case, I would like to offer that. 
It might be like it's hard to tell how easy it would be or, how hard it would be to implement this between queue jumping, if other aspects of AsyncIterators behaved, like, if they were implemented on of async generators, except for this, this, if it's Particularly onerous because it adds this weird dimension just for AsyncIterators. I would be weekly against supporting it for implementation simplicity but of course that should not trump. If there are actual compelling use cases that the committee is largely in support of. But if there isn't, I would like to offer implementability as a test for whether we should do this. I will be unlikely to know how implementable it is before stage 3. So if that were the case and you as champion preferred to, to try to support this case, I don't know what we would do. but maybe you know, leave it. Someone will try to implement it if it's, you know, we come back with experience and say actually we don't think it's worth the trouble, then we revisit it. +SYG: I see. Okay. Given that we haven't heard strong support, not even strong given that we haven't really heard support in this session. For this use case, I would like to offer that. It might be like it's hard to tell how easy it would be or, how hard it would be to implement this between queue jumping, if other aspects of AsyncIterators behaved, like, if they were implemented on of async generators, except for this, this, if it's Particularly onerous because it adds this weird dimension just for AsyncIterators. I would be weekly against supporting it for implementation simplicity but of course that should not trump. If there are actual compelling use cases that the committee is largely in support of. But if there isn't, I would like to offer implementability as a test for whether we should do this. I will be unlikely to know how implementable it is before stage 3. So if that were the case and you as champion preferred to, to try to support this case, I don't know what we would do. but maybe you know, leave it. Someone will try to implement it if it's, you know, we come back with experience and say actually we don't think it's worth the trouble, then we revisit it. -MF: Yeah, thanks for the feedback I hadn't yet considered that this adds risk to this proposal around implementability. I do fear though that we haven't heard support for this use case today because it is just a rather hard use case to understand. People might need more time in which case I encourage people to ask any questions they have about it on our issue tracker there and you can try to resolve that and see if people care about it. And remember the thing we're trying to decide is: do we want to be as practically generic in our support for these more exotic usages or do we want to conceptually have these helpers limited to things that are implementable as generators? +MF: Yeah, thanks for the feedback I hadn't yet considered that this adds risk to this proposal around implementability. I do fear though that we haven't heard support for this use case today because it is just a rather hard use case to understand. People might need more time in which case I encourage people to ask any questions they have about it on our issue tracker there and you can try to resolve that and see if people care about it. And remember the thing we're trying to decide is: do we want to be as practically generic in our support for these more exotic usages or do we want to conceptually have these helpers limited to things that are implementable as generators? 
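For reference, a small sketch of the generator behaviour this discussion is comparing against: on an async generator, a .return() request waits in the request queue behind earlier .next() requests rather than jumping ahead (illustrative only):

```ts
async function* source() {
  // The first next() starts running the body and then blocks here forever.
  await new Promise(() => {});
  yield 1;
}

const iter = source();
iter.next();                        // never settles
const done = iter.return(undefined);
// `done` also never settles: the return request is queued behind the
// blocked next(); it does not "jump the queue".
```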
SYG: Okay, I have no relevant feedback for that question. @@ -389,7 +388,7 @@ JSC: It's a fairly common ask. There's some npm packages that do similar, that d JSC: a couple of design principles that are useful to keep in mind, and these are ordered. First of all, we want array.fromAsync to be similar to Array.from we want its optional parameters to not care whether we supply actual like, for instance, nullish values versus omitting the argument all together. We also really want it to be similar to the behavior of for-await-of like how array.from matches the behavior of the synchronous for of. And less importantly, but still desirable - this is the current from the current iterator helpers proposal - we wanted to at least roughly match the iterator helpers behavior when it comes to Iterator.from().map().toArray(). -JSC: Brief overview of its behavior. Of course, it works on async inputs and the return value is a promise that will resolve to a new array. There are no exceptions to it returning a promise. So if you have a lazy generator here, you can dump it into an array using a for-await Loop and then have Have like push it in to push each item into the result array, or you could just call the function in await with the result and it would be equivalent. Next slide please +JSC: Brief overview of its behavior. Of course, it works on async inputs and the return value is a promise that will resolve to a new array. There are no exceptions to it returning a promise. So if you have a lazy generator here, you can dump it into an array using a for-await Loop and then have Have like push it in to push each item into the result array, or you could just call the function in await with the result and it would be equivalent. Next slide please JSC:And like for await it also works on synchronous iterables too, and it basically does what you would expect. This includes doing the same thing as for await of in that if that synchronous iterable yields promises those promises get awaited and the result that it resolves to is what gets pushed into the array. Completely the same as for-await of=. @@ -427,7 +426,7 @@ JSC: Does anyone else have anything on the queue? [no] Okay, I would like in tha PFC: I'm wondering about the receiver behavior of the static method of fromAsync where it'll check if the `this` object is a Constructor. I was comparing that with what is called Type II built-in subclassing in the remove built-in subclassing proposal. In that proposal, it's described as sometimes beneficial, but at a cost. And for example, we removed that sort of "use the this object as a constructor" behavior from factory methods in Temporal, because it didn't provide the same benefits that, for example, Array.from does. Are there concrete benefits to having that behavior on fromAsync? -JSC: My answer is that I think that any concrete benefits that apply to array.from and making it a generic factory method, making from a generic factory method also should apply making fromAsync a generic Factory method. I care most about consistency between Array.from and fromAsync. And I think that it's fine for array.fromAsync to be a generic Factoring method. I think that if people are using array.from like a generic Factory method, then they will all the I can't think of a situation where they would not also want to reach for array.fromAsync and apply to their Constructor. Does that address your question or or does that create a new question for you? 
+JSC: My answer is that I think that any concrete benefits that apply to array.from and making it a generic factory method, making from a generic factory method also should apply making fromAsync a generic Factory method. I care most about consistency between Array.from and fromAsync. And I think that it's fine for array.fromAsync to be a generic Factoring method. I think that if people are using array.from like a generic Factory method, then they will all the I can't think of a situation where they would not also want to reach for array.fromAsync and apply to their Constructor. Does that address your question or or does that create a new question for you? PFC: Yeah, that would make sense that the consistency argument is very strong in this case because we also have `from` on the same constructor. @@ -466,7 +465,7 @@ Presenter: Kevin Gibbons (KG) - [proposal](https://github.com/tc39/proposal-set-methods) - [slides](https://docs.google.com/presentation/d/1HCqPMsWiTtsn92gA3b1luVpnVHWVVR0iKaAE0marxkA) -KG: Okay, so set methods. Still trying to make this happen. We talked about it last time but I did not have a large enough time box, and people had some questions and alternatives which we just didn't have time to complete this discussion last time. So I'm coming back for another hour, and I will keep coming back until we get this sorted out. +KG: Okay, so set methods. Still trying to make this happen. We talked about it last time but I did not have a large enough time box, and people had some questions and alternatives which we just didn't have time to complete this discussion last time. So I'm coming back for another hour, and I will keep coming back until we get this sorted out. KG: So previously, several meetings ago now, we decided that instance methods (set prototype union, intersection, whatever) should use Set internal slots on the receiver but they should access the public API on the arguments, so when you pass a Set to intersection or union or whatever as an argument, the methods should use the public API rather than reaching into the internals. So that it works if you have, you know, a proxy for a Set or something that is wrapping a Set and implementing the same interface but imposing additional constraints, or whatever. But we left it unresolved at the time how exactly you should access the public API. @@ -566,7 +565,7 @@ MM: So I agree with that, and I think the proposal that you've mentioned and tha KG: Yes, it's true that this is a problem that comes up a lot, I grant that point. -MM: But if you added this symbol, in my opinion it would only be a justified answer if it were also a precedent about how we deal with this question when it comes up again. And given the pervasive general JavaScript practice, I think that should be the inescapable JavaScript practice should be staying closer to that.It should be the precedent we are trying to follow. And, and, and if you're trying to do a one-off four sets and Maps, I can understand the motivation for doing a one off because it's a unique problem. But then don't do it in a way that looks like a precedent for applying to other problems and well, and in fact, in my opinion here, the, the, the keys solution to set map is adequate. +MM: But if you added this symbol, in my opinion it would only be a justified answer if it were also a precedent about how we deal with this question when it comes up again. 
And given the pervasive general JavaScript practice, I think that should be the inescapable JavaScript practice should be staying closer to that.It should be the precedent we are trying to follow. And, and, and if you're trying to do a one-off four sets and Maps, I can understand the motivation for doing a one off because it's a unique problem. But then don't do it in a way that looks like a precedent for applying to other problems and well, and in fact, in my opinion here, the, the, the keys solution to set map is adequate. KG: To be clear. I did intend this to set precedent going forward. @@ -698,7 +697,7 @@ KG: Yes. And I think that's the benefit of symbols over strings in general, is t KG: Okay, I think we have gone through the queue. I haven't had a chance to read the chat to see if other people have strong opinions about this, but given MM and WH's strongly expressed positions, and the relative lack of support for my position, I guess I am going to go with option two, unfortunately. I'm sad about this, but I would like to have set methods. And to be concrete, the thing that I plan to do is make all of the Set methods in this proposal eagerly access the .size getter, the .has property the .keys property, and then check that the .has property is callable and the .keys property is callable and call ToNumber on the size property, and then having done that to implement the remainder of the algorithm using those particular things. And in the case of union, it will have text that checks .has is callable even though it will never in fact call it, but at least this ensures that Union and intersection require the same interface even if not necessarily the same behavior for that interface. Would that satisfy everyone here and be a way to move on? -MM: that would satisfy me. +MM: that would satisfy me. WH: Yeah, that would be a solution to the problem. diff --git a/meetings/2022-09/sep-15.md b/meetings/2022-09/sep-15.md index 521a7ee5..ee4339b7 100644 --- a/meetings/2022-09/sep-15.md +++ b/meetings/2022-09/sep-15.md @@ -137,7 +137,7 @@ RBN: And then in addition to the using statements use of dispose, the disposed i RBN: To finish this example. What this does, then the to if you're to just do the assignments, you run into issues with exceptions then we're back to the again initial motivations of having to wrap these all with dragons and finally is to reach the end. instead. This simplifies this process, you create a stack that is scoped to the Constructor. You can use these new resources to get added. So if an exception gets thrown during construction of these resources, as the block exits, the stack will be disposed, which will dispose of the resources You've tracked, successfully. Once you have created all these resources successfully you can then use the move method to pull them out of the stack that is guaranteed to be disposed, and put them into a new stack that you can then dispose later. These again ore capabilities are also available in python's ExitStack. The last API method is the public dispose method, which is essentially a bound dispose. The this was a request on the issue tracker for folks that want to use the same type of capabilities, but within a factory function where they just want to use the bound dispose. This is something that I think is very useful but there's been some discussion about whether it's necessary. So, So, I don't. have a strong need for it, but I felt that it was valuable to consider. 
And AsyncDisposableStack is very similar similar to the DisposableStack, but is designed to work with async features. -RBN: so, I did want to talk about the postponed features, but I don't know how much time I’ve got. [discussion of timebox] +RBN: so, I did want to talk about the postponed features, but I don't know how much time I’ve got. [discussion of timebox] RBN: Well, the one thing that I wanted to get to. So `using void` I've discussed, I think it's valuable, We can discuss that a bit later. The `using` statement is something that I'm going to postpone or possibly drop. `using await` would be syntactically the syntactic form that we're considering for working with async disposables. and one of the reasons why this proposal still has async disposable stack even with deferring the async version of using is that asynchronous disposables are extremely valuable. They exist within the ecosystem with async iterables there's a number of use cases for these and AsyncDisposableStack provides a good middle ground for not yet. Having a syntactic mechanism of await that would be necessary. See. manage the implicit await, that we be concerned with So with that, I will go to the queue and we can talk about what we have here. @@ -157,7 +157,7 @@ WH: I don't think you're understanding the problem here. I also had raised it on RBN: I believe I do, but [crosstalk] -MAH: I guess. I have a specialized Case of this were in the `for using of resources` and then a `using` statement inside the block there would be nested aggregate errors, which as a user I would find weird. And I'm wondering if at least in this specific case, we might be able to avoid this. That's just a comment. I'll go to the I'll try to find the issue on GitHub. +MAH: I guess. I have a specialized Case of this were in the `for using of resources` and then a `using` statement inside the block there would be nested aggregate errors, which as a user I would find weird. And I'm wondering if at least in this specific case, we might be able to avoid this. That's just a comment. I'll go to the I'll try to find the issue on GitHub. KG: So I think this is probably not going to come up that much that you have a bunch of different errors. So I am not worried about having potentially deep structure, like three different errors that all need to get aggregated into a deep structure instead of a broad structure - that just doesn't seem like a big problem to me. I also think it seems like it would be nice to just not use AggregateError at that point to just say that the disposal error was caused by the original error, seems like it makes more sense for things that happen at approximately the same time. So, I wonder if there's the possibility of using error cause instead of aggregate error. But my main point is just I'm not worried about having deep structure and generally agree with WH’s point. @@ -187,7 +187,7 @@ RBN: Yes, it is called as soon as the last line of code in the module, just, exe DE: Oh, Okay, so those are things that alive just during the initialization, okay? That makes sense. And that's useful. -RBN: and the spec explicitly bans exporting `using` declarations. You can't export using even though you can export const export let, you can't export using. +RBN: and the spec explicitly bans exporting `using` declarations. You can't export using even though you can export const export let, you can't export using. DE: Okay, great. since yeah. to export a resource that ends up being disposed soon as a module finishes up evaluation. 
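
A rough sketch of the constructor pattern RBN describes above, assuming the proposal's `DisposableStack` API with `use`, `move`, `dispose`, and `Symbol.dispose`; `openChannel` and `openSocket` are made-up resource factories whose results are assumed to have `[Symbol.dispose]()` methods.

```js
class Widget {
  #resources;
  constructor() {
    // Track resources in a stack scoped to the constructor. If anything below
    // throws, disposing the stack disposes everything acquired so far.
    const stack = new DisposableStack();
    try {
      this.channel = stack.use(openChannel());
      this.socket = stack.use(openSocket());
      // Construction succeeded: move ownership into a stack we keep for later.
      this.#resources = stack.move();
    } finally {
      // If move() ran, the original stack is already marked disposed and this
      // is a no-op; if construction threw, this releases the partial resources.
      stack.dispose();
    }
  }
  [Symbol.dispose]() {
    this.#resources.dispose();
  }
}
```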
@@ -271,7 +271,7 @@ MAH: Would you be able to go back to the for-of loops slide, please? So given th RBN: One of the things that's part of what was postponed when we were talking about the what was previously, a using, await declaration, was that if you had a for-await, await that had an async disposable, you for-await because you're talking about two different things. When you're doing a for-await, you are your the awaits is in relation to the thing. You are iterating over the expression, the value that you get back, you don't also. mean, you potentially awaited as part of the iteration, but we don't do any type of among other implicit awaits, we don't start implicitly awaiting a yield star. But, once you get that value, we don't do anything with it. And I would be wary about introducing an implicit await at the end of the block, that is not in relation to the Declaration itself. which is again, so this was something that was in the proposal. the original for await x of expr, did not await async disposables. That required a separate declaration to be used -MAH: There isn’t really implicit awaits at the beginning and end, but it effectively also happens at the end of any for-await, block. I don't think it would be surprising to await the disposal. +MAH: There isn’t really implicit awaits at the beginning and end, but it effectively also happens at the end of any for-await, block. I don't think it would be surprising to await the disposal. RBN: Yeah, the only thing that would be surprising would be awaiting an async dispose here but not having a mechanism to do that for any other `using` declaration. @@ -345,7 +345,7 @@ RBN: Yeah, I would like to say I originally intended to request advancement to s DLM: Yeah for sure. The SpiderMonkey team went over this about a week ago, so if there's new justifications, we haven’t had a chance to review those yet and I'd be happy to bring those back to the team and try to get some specific issues raised. - and I suggest GitHub as a venue for further discussion, Or. of course, that's right. I do have one question. I need to ask to the committee quickly before we end one was that even if we're not able to stage 3 today, I am. interested in breaking off the using void. and using wait. portion portions of this proposal. I definitely do not want to abandon them and my question to committee is just as we did for The Decorator metadata. These Capability, the bindingless using and async using have been part of this proposal since its Inception. And I'd like to, as I look into branching these off in to follow on proposals the potential of maintaining stage 2 for these based on the existing discussions that we've had. And I'd like to see if I can get consensus on maintaining stage 2 for these features as they get broken off into separate proposals. +and I suggest GitHub as a venue for further discussion, Or. of course, that's right. I do have one question. I need to ask to the committee quickly before we end one was that even if we're not able to stage 3 today, I am. interested in breaking off the using void. and using wait. portion portions of this proposal. I definitely do not want to abandon them and my question to committee is just as we did for The Decorator metadata. These Capability, the bindingless using and async using have been part of this proposal since its Inception. And I'd like to, as I look into branching these off in to follow on proposals the potential of maintaining stage 2 for these based on the existing discussions that we've had. 
And I'd like to see if I can get consensus on maintaining stage 2 for these features as they get broken off into separate proposals. DE: I'm not convinced we want to eventually do `using void` for one. I want to suggest that we don't ask for reaffirming consensus for these things today. @@ -359,7 +359,7 @@ DE: I would like to see a presentation that goes into more detail on asynchronou RBN: I did hope to discuss those earlier on, but cut some of the slides short given time. That's fine. I will leave - essentially I will consider these to still be part of the stage 2 proposal. And when at the next meeting will seek to eventually break if we can advance, if we reach a point where you can advance to stage three and break these out, I'll try to make sure that I have specific separate presentation for each of these features, we can discuss them in isolation. -DE: Yeah. that sounds good to me. +DE: Yeah. that sounds good to me. ### Conclusion/Resolution @@ -372,9 +372,9 @@ Presenter: Ron Buckton (RBN) - [proposal](https://github.com/rbuckton/proposal-extractors) - [slides](https://1drv.ms/p/s!AjgWTO11Fk-TkoEtBecgCeh0FRhDqw?e=6ahvlJ) -RBN: For hopefully no more than the next hour, I'll be talking about. something called extractor objects. I'll go into a little bit about the motivations behind this and what I'm looking to accomplish, but first, I want to give a brief introduction to what an extractor object is. For anyone who's not familiar, extractor objects are a feature of the Scala programming language. an extractor object is. essentially an object that has an unApply method. Scala uses this prodigiously within variable declarations and pattern matching to. essentially extract the arguments that were used to produce a result. to provide a little bit more context, in Scala. An object can have an apply method and that's kind of, like, a Constructor, or a function. the apply method accepts arguments and it produces a result. This is not unlike function apply, or creating a new object via the new keyword in JavaScript, you pass in arguments. you get a result either an object or value. So essentially, the unapply method is the inverse of apply, it accepts a result and tries to give the arguments. So the example here, shows a customer ID that on application takes in a name and A random. ID and un-application takes in a formatted ID. And if it matches successfully Returns the results, otherwise it returns a value indicating that failed to match. So you can see here, a value of customer ID is created by applying customerID to a string, and you can then extract the name. then by unapplying the customerID. function, or the customer ID object to the results. Providing this interesting inversion of a call +RBN: For hopefully no more than the next hour, I'll be talking about. something called extractor objects. I'll go into a little bit about the motivations behind this and what I'm looking to accomplish, but first, I want to give a brief introduction to what an extractor object is. For anyone who's not familiar, extractor objects are a feature of the Scala programming language. an extractor object is. essentially an object that has an unApply method. Scala uses this prodigiously within variable declarations and pattern matching to. essentially extract the arguments that were used to produce a result. to provide a little bit more context, in Scala. An object can have an apply method and that's kind of, like, a Constructor, or a function. the apply method accepts arguments and it produces a result. 
This is not unlike function apply, or creating a new object via the new keyword in JavaScript, you pass in arguments. you get a result either an object or value. So essentially, the unapply method is the inverse of apply, it accepts a result and tries to give the arguments. So the example here, shows a customer ID that on application takes in a name and A random. ID and un-application takes in a formatted ID. And if it matches successfully Returns the results, otherwise it returns a value indicating that failed to match. So you can see here, a value of customer ID is created by applying customerID to a string, and you can then extract the name. then by unapplying the customerID. function, or the customer ID object to the results. Providing this interesting inversion of a call -RBN: How were these extractors used? Well? Scala uses extractors and variable variable declarations You can call unapply on the provided on the object, that's referenced using the argument. That's assigned from the right. If that matches, the result is then destructured to whatever variables are used. In he example of customerID, you would take this string. parse it, and if it is valid it would extract the name from the beginning part. similarly, if you had a point object that you could construct an X and Y value, you could extract the original X and Y value by taking the point on the right hand side the assignment and passing it into the left hand side. And then unapply is a unary functions gets called and then produces a value that in this case is a list that has a first and second argument. +RBN: How were these extractors used? Well? Scala uses extractors and variable variable declarations You can call unapply on the provided on the object, that's referenced using the argument. That's assigned from the right. If that matches, the result is then destructured to whatever variables are used. In he example of customerID, you would take this string. parse it, and if it is valid it would extract the name from the beginning part. similarly, if you had a point object that you could construct an X and Y value, you could extract the original X and Y value by taking the point on the right hand side the assignment and passing it into the left hand side. And then unapply is a unary functions gets called and then produces a value that in this case is a list that has a first and second argument. RBN: One thing that's important about this is that extractor objects are exhaustive, which means that for a value to be extracted, it has to match successfully. If it does not match, then it throws an exception. So if you tried to extract a value from a Some but provided a None, this would be an error. This is also used with, in Scala, with pattern matching in a Scala match. You can provide multiple cases and you can then match on the an extractor pattern to pull the name out to be used on the right hand side. matching a shape or a point you would match the X&Y, for a rectangle you might match the individual points, or if you're matching an option you can. match on whether it was a Some with a value or a None. Similar to variable declarations. to variable declarations. You use the same matching behavior but in this case, if a match fails, it'll move on to the next alternative in the list, Next, extractor objects in Scala, have a lot of similarity with languages both roughly in syntax. And sometimes, even an implementation. Rust's pattern matching uses a, if not user-defined mechanism, a similar approach in syntax to extracting objects. 
F# has active patterns, which lived you defined functions that can be used in a pattern to provide the same type of extraction. C# has the deconstruct method, which allows you to take an object and extract it into what is essentially a tuple that can then be used in pattern matching. The Hax language supports this through algebraic data type based enums and pattern matching. And there are some additional similarities in languages like, OCaml, racket, Swift. And a number of lisp derivatives. @@ -382,13 +382,13 @@ RBN: So, how would extractor objects relate to various currents and past and upc RBN: So I was talking about the past proposals. One example here was from Kat Marchan about collection literals, the idea being to introduce a novel syntax that allowed you to construct a Map or a Set or essentially, any value using some input value on the righthand side and then support it in both the structuring and pattern matching. Current proposals. We have the current pattern matching proposal, which uses a novel Syntax for matching values that be a patterns and has some support with custom matchers that provide user defined matching behavior. In this example, basic patterns are supported such as matching object literals, matching what is essentially array destructuring, matching literal values. and the ability to use custom matchers using interpolation syntax and the `with` keyword. -RBN: In upcoming proposals, this was previously discussed. in a plenary last year, it did not advance, I think there is some additional changes that we will be making to the proposal, but I think there is still significant value to enums and algebraic data types. I know I and I'm sure the Jack Works, who is one of the champions for this, intends to bring this back to committee. in the future. future. I think Specifically algebraic data types of a lot of value that can be added to the language. Both providing structured values, providing something that is record-like in its implementation that could potentially be supported over in shared code contexts with fixed sized objects. There's a lot of potential for algebraic data types that could be leveraged. +RBN: In upcoming proposals, this was previously discussed. in a plenary last year, it did not advance, I think there is some additional changes that we will be making to the proposal, but I think there is still significant value to enums and algebraic data types. I know I and I'm sure the Jack Works, who is one of the champions for this, intends to bring this back to committee. in the future. future. I think Specifically algebraic data types of a lot of value that can be added to the language. Both providing structured values, providing something that is record-like in its implementation that could potentially be supported over in shared code contexts with fixed sized objects. There's a lot of potential for algebraic data types that could be leveraged. RBN: So, with that brief tour of what extractors are, I’d like to now, actually, go into kind of what some of the motivations are I have for bringing this to committee and some directions that I think we could potentially take if we're interested in pursuing this. Currently there is no mechanism to execute user-defined logic during destructuring and despite the cases that I showed around, algebraic data types and pattern matching, That. is one of the more interesting cases that I interest that I'm looking to investigate. 
Currently with destructuring You can wrap the value that you are going to destructure in a function call. If you need to do some type of validation or normalization, but once you get one level deep within destructuring, you cannot then execute any other code that might affect destructuring. You can't validate incoming parameters, or parts of an object destructuring that you might want to check before moving on to other properties. And then up having to split things out into multiple statements. So there's some interesting values that can be achieved by enabling user-definedd logic. In addition. pattern matching does provide or propose a way to execute user-defined logic during matching with custom matchers. So if pattern matching moves forward with custom matchers then there would be a disparity between what you can do with patterns and what you can do with destructuring, the enums proposal and specifically att's would benefit from a syntax that is consistent and convenient across declaration, considering destructuring, and pattern matching since such can see such consistency is a key to being able to learn and understand the behavior of these features. If we had inconsistent syntax for declaration and construction, for how you create them and how you read from them, that can make it more difficult to actually follow what is actually trying to be done in the code. Also such consistency is evidence of immature and coherent programming language trying to avoid introducing warts into the language because we try to solve one problem, but at the expense of other other problems, and potentially introduce new ones down the line. And we unfortunately have these. Everything from Symbol.species, the regex methods. We've sometimes have implemented features that seem valuable at first, but then can end up tripping ourselves up and down the line. And I think looking into a syntax that we could have achieve some consistency would help us to have something that's a little bit more stable in the future. RBN: So to get to that, I'd like to talk about what extractors are. There are essentially two things that I'm discussing, potential Proposals. one is the specifically a proposal for extractors. in destructuring and I'll talk a little bit more about pattern matching in a moment. so, what I'm proposing with extractors is to investigate the potential for introducing novel syntax that allows us to execute user-defined code during destructuring. This allows inline data validation, transformation, and normalization. We could leverage the scala extractor objects and rust's variable patterns as prior art or base this design on custom matchers from pattern matching proposal. An earlier version of this had a separate symbol .apply method but I found in discussing this with the pattern matching proposal champions that I could just as easily leverage the proposed. Symbol.matcher. Built-in symbol that they're planning to use. This would also provide parity with custom matchers that are in the current pattern matching proposal will allow them to be used in destructuring and provide a basis for potential future with enums and algebraic data types. -RBN: And there are three areas where extractors can be applied: binding patterns such as variables and parameters, assignment patterns, and match patterns. 
There are two categories of extractor patterns: array extractors, where the extraction is evaluated using array destructuring, and object extractors, where the extraction is evaluated using object destructuring. In this proposal, an extractor consists of essentially two parts: a qualified name (this is the same term already used in the decorators proposal, meaning either an identifier reference or a series of dotted identifiers) that references an in-scope binding for a custom matcher, that is, an object that has the Symbol.matcher method; and either an array extractor pattern or an object extractor pattern that follows it. So again, these qualified names are used to reference custom matchers in scope, much like how decorators reference identifiers or dotted identifiers.
+RBN: And there are three areas where extractors can be applied: binding patterns such as variables and parameters, assignment patterns, and match patterns. There are two categories of extractor patterns: array extractors, where the extraction is evaluated using array destructuring, and object extractors, where the extraction is evaluated using object destructuring. In this proposal, an extractor consists of essentially two parts: a qualified name (this is the same term already used in the decorators proposal, meaning either an identifier reference or a series of dotted identifiers) that references an in-scope binding for a custom matcher, that is, an object that has the Symbol.matcher method; and either an array extractor pattern or an object extractor pattern that follows it. So again, these qualified names are used to reference custom matchers in scope, much like how decorators reference identifiers or dotted identifiers.

RBN: Array extractors with a binding pattern perform array destructuring on a successful match. They are denoted by parentheses rather than square brackets, which helps to avoid confusion with element access expressions and computed property names, and looks similar to a call expression, which lets destructuring mirror construction versus application. So here you could parse a string input into three outputs, and this parallels the same behavior we have for array destructuring. Here you can see extracting a list of (a, b) from the creation of a list of two values, or extracting an Option.Some value from a value that contains an Option.Some(1). To support an essential pattern matching tenet, that matching is exhaustive, for destructuring we would want to error if the match fails. This is not unlike how, if you try to destructure null, we will throw, or if you try to use array destructuring with an object without a Symbol.iterator, you would throw. This ensures that you don't end up with a garbage value in the constant or binding that you're declaring, and it avoids having to worry about introducing some type of fallback object that could be used in further destructuring. So throwing when we expected to find a match is, I think, more reliable in this case. Array extractors in assignment patterns are very similar. Again, they parallel the destructuring patterns we already have, where an array destructuring of (a, b) is paralleled by the array construction on the right: the extractor List(a, b) parallels the construction List(1, 2) on the right.
This would require a cover grammar to support but I think is still, it is still feasible since currently a call expression. cannot be the target of an assignment. That is an error. Objects extractors are similar, in object structure and bonding pattern forms object. destructuring rather than array destructuring and uses curly brackets similar to an object literal. which means it's consistent with the existing structure and syntax and Potential. and mirrors. future role little construction. Syntax that ADT enums would hope to employ. And similar to this. The Binding patterns, object extractors in assignment patterns will use curly braces, and we would most likely enforce parentheses around these to avoid carving out too much syntax space of identifiers curly, we would want to definitely ensure that the thing that you want on the left is something that is a valid destructuring assignment target. And, as with array extractors, in both cases, these will throw errors. @@ -418,7 +418,7 @@ WH: OK, thank you for correcting the slide. When I was taking a look at it earli RBN: That was a typo and I do show with the iso date-time showing I think a much clearer example of this with date:, ISO date-time colon time. So, that was a typo in my transcription. So, to continue. With extractors and pattern matching there is an interesting value in how succinct the expression can be. The current pattern matching proposal leverages uses an interpolation care syntax that is similar to string interpolation the dollar, curly and curly and also requires a “with” keyword and then a destructuring pattern that follows it, which becomes very cumbersome to use when working with nested patterns. I've also been talking at length with a number of my contemporaries on the TypeScript team. We think that there is a potential for a pattern matching syntax that does not require interpolation. And would become much more susinct, where you could just simply say, when `Option.Some(Point)`, and extract the values, when `Option.Some with …`. And extract the values. Or in the example of the top, when message, right, text. or when message move, X&Y and Y and it becomes much simpler and easier to read than in my opinion than the current pattern matching syntax. So I think it could provide a very viable alternative to the interpolation and with it's Syntax that's used in that proposal. -RBN: and, there's some additional considerations about extractors. So the proposed syntax is a very tentative and is subject to change during stage 1. If this is adopted. Right now, I've presented a lot of syntax but these are very early, rough ideas about what this could look like. I am aware that we might need to disambiguate being in fixed token, similar to the collection, literals proposal. We also might need to drop Objective Extractor in favor of just an Array Extractors with the inverted call like behavior, that just has an object literal, But I'm concerned that might break symmetry but it's something I'm willing to investigate. And I also want to make sure that we're again able to do something that could be consistent with a future algebraic data type based enum proposal, which we do intend to propose in the future. But extractors provide more value than just what you would get out of ADTs even if we choose to pursue them. So I'm very much interested in pursuing this with or without,ADTs. 
But if we do choose to pursue ADTs, one of the things again that I mentioned I am interested in is a way to achieve consistency between declaration, construction, and destructuring, While this hasn't been adopted by TC39, there's been some interest. So, I'll show example of what I mean. Here we can see an example of an ADT enum that defines a right entry that contains essentially, a tuple of potential values. The name here isn't actually important except for potentially debugging or toString representation like cases. since the ordering is what matters. Itt's also very informative at least. Here we also have a `Message.Move` enum member that is a record-like value you a tagged record like value that contains an and Y property. its construction might look like message dot, right. With a string, which is just a normal function call, message.move with curly braces which would be essentially the tagged record construction call and destructuring would then look like that. +RBN: and, there's some additional considerations about extractors. So the proposed syntax is a very tentative and is subject to change during stage 1. If this is adopted. Right now, I've presented a lot of syntax but these are very early, rough ideas about what this could look like. I am aware that we might need to disambiguate being in fixed token, similar to the collection, literals proposal. We also might need to drop Objective Extractor in favor of just an Array Extractors with the inverted call like behavior, that just has an object literal, But I'm concerned that might break symmetry but it's something I'm willing to investigate. And I also want to make sure that we're again able to do something that could be consistent with a future algebraic data type based enum proposal, which we do intend to propose in the future. But extractors provide more value than just what you would get out of ADTs even if we choose to pursue them. So I'm very much interested in pursuing this with or without,ADTs. But if we do choose to pursue ADTs, one of the things again that I mentioned I am interested in is a way to achieve consistency between declaration, construction, and destructuring, While this hasn't been adopted by TC39, there's been some interest. So, I'll show example of what I mean. Here we can see an example of an ADT enum that defines a right entry that contains essentially, a tuple of potential values. The name here isn't actually important except for potentially debugging or toString representation like cases. since the ordering is what matters. Itt's also very informative at least. Here we also have a `Message.Move` enum member that is a record-like value you a tagged record like value that contains an and Y property. its construction might look like message dot, right. With a string, which is just a normal function call, message.move with curly braces which would be essentially the tagged record construction call and destructuring would then look like that. RBN: so, that leads to kind of my summary and open to the queue. I am seeking stage 1 to investigate the potential of introducing extractors in binding and assignment patterns. I am not seeking adoption at this time for extractors and pattern matching as that something I plan to continue discussing with the pattern matching champions. and will likely hinge on the potential advancement of this proposal is I'd like to make sure that all of these syntaxes are consistent. 
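
To make the extractor example concrete, the commented-out line below shows the tentative binding-pattern syntax from the slides, and the rest is a hand-rolled equivalent in today's JavaScript. The `Option` object and the `unwrapSome` helper are made up for illustration.

```js
// A tiny Option implementation of the kind an extractor or custom matcher
// would hang off of.
const Option = {
  Some: (value) => ({ tag: "some", value }),
  None: { tag: "none" },
};

// Tentative extractor form from the slides (not valid syntax today):
//   const Option.Some(x) = Option.Some(1);

// Manual equivalent today: validate, then destructure, as separate steps.
function unwrapSome(option) {
  if (option?.tag !== "some") {
    // Extractors are exhaustive: a failed match throws rather than
    // producing a garbage binding.
    throw new TypeError("expected Option.Some");
  }
  return option.value;
}

const x = unwrapSome(Option.Some(1)); // 1
```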
@@ -452,13 +452,13 @@ RBN: I'm aware that's a potential concern I mentioned that here we might need to WH: Okay. -KG: I am generally in support of something like this. In particular, one of my biggest concerns about pattern matching is the sheer amount of stuff that it introduces that it's just for pattern matching. So if there is a way that we could rip off this portion of pattern matching and make it not just for pattern matching, I would be in favor of that, whether or not pattern matching advanced I suppose. I do want to say, I don't want this if it is not what pattern matching is using. I don't think we can reasonably have both, but if we can have this and it can be used in pattern matching or if for whatever reason, pattern matching doesn't happen, this still seems nice. So I'm in favor of certainly going to stage 1 and exploring this. +KG: I am generally in support of something like this. In particular, one of my biggest concerns about pattern matching is the sheer amount of stuff that it introduces that it's just for pattern matching. So if there is a way that we could rip off this portion of pattern matching and make it not just for pattern matching, I would be in favor of that, whether or not pattern matching advanced I suppose. I do want to say, I don't want this if it is not what pattern matching is using. I don't think we can reasonably have both, but if we can have this and it can be used in pattern matching or if for whatever reason, pattern matching doesn't happen, this still seems nice. So I'm in favor of certainly going to stage 1 and exploring this. RBN: Yeah, I definitely agree. I'm very much. Interested in this for pattern matching. And from my discussions with the pattern matching Champions, it definitely feels like a bit of a chicken and egg thing. It's less likely I'll be able to introduce this in pattern matching without a stage one proposal to investigate this for new structuring or just as in general. But I definitely don't want to end up in a situation where we do one thing for pattern matching and something completely different for destructuring. This whole proposal about looking for something that allows us to achieve consistency and syntax and introduce some capabilities, that will be invaluable for algebraic data types in the future, which will heavily utilize both pattern matching and destructuring. So, I think it is extremely valuable for both. I also have - I kind of have a vision for what I'd like to see pattern matching do. It's something that I've been discussing at length with the pattern matching Champions and this is one of the key pieces to that discussion which is why I bringing it to committee? She? has a clarifying question. Question. -SYG: Kevin, do I understand your position correctly that you are asking that you'll be happy with this going forward. If pattern matching hampions also reduce scope in that proposal, Because my understanding right now is this Not. in place of stuff in pattern matching, but can compose with stuff Strictly. and pattern matching and additive +SYG: Kevin, do I understand your position correctly that you are asking that you'll be happy with this going forward. If pattern matching hampions also reduce scope in that proposal, Because my understanding right now is this Not. in place of stuff in pattern matching, but can compose with stuff Strictly. and pattern matching and additive KG: It's not strictly additive to what's in pattern matching. There is something that is in pattern matching that is a lot like this. 
And if the thing that is currently in pattern matching goes to stage 3, I would not want this proposal to further advance unless it was able to use. literally that syntax, which I don't think it would be able to do. I'm not asking that pattern matching be reduced in scope per se, just that if the feature of Ron is proposing goes forward and pattern matching goes forward then the future Ron is proposing must be a subset of pattern matching - or not a subset of pattern matching, but like the thing that is currently in pattern matching must be the same thing as what Ron is proposing. @@ -468,7 +468,7 @@ RBN: Yes. It's my hope that this syntax is much more readable and concise and ea RPR: Okay. SYG, are you happy? -SYG: Yes. that clarifies. +SYG: Yes. that clarifies. RPR: It sounds like they are both fine on their own, but it's the intersection of both proposals where there's some coordination needed. @@ -488,7 +488,7 @@ RBN: I can speak to that if that's alright the extractor syntax currently depend JHD: No, I mean, I think that's good. I haven't read through. the extractors proposal enough to be able to have a concrete list of overlaps, but there's a number of places including the stuff around its mentioned where it doesn't 100% match up. We may decide that's okay to not serve the additional use cases, but yeah the pattern matching proposal has more discussion to be had before coming back for stage 2 anyway. So Ron has been, and will be part of those discussions as it relates to extractors. So I think that we'll have to be able to make a good case for that later and we probably shouldn't spend time doing that right now. -DE: Okay. We all agree that there's more discussion needed, What I don't understand is - I mean, it sounds like we're just talking about talking about differences in The. match protocol not, it doesn't really and how expressive they are. But just in the form of the protocol itself, because, +DE: Okay. We all agree that there's more discussion needed, What I don't understand is - I mean, it sounds like we're just talking about talking about differences in The. match protocol not, it doesn't really and how expressive they are. But just in the form of the protocol itself, because, JHD: It's not about the protocol, it's about the ability to chain and compose patterns and the extractor syntax seems to be limited in the ways you can do that, the pattern matching syntax is not limited intentionally. @@ -500,18 +500,17 @@ DE: Yeah, suffice it to say that I don't understand the difference yet. RPR: Yeah, everyone's agreed that there will be further discussion. So I think this is good to advance to the next topic. -ACE: Yes. So further just echoing what people have said, full-on positive this going forward. I like the syntax like maybe with some tweaks but generally it looks good. Before someone else says it, it this also seems like with pattern matching, this proposal, the ADT / enums proposal, there's that kind of implicit Epic happening of, I kind of want each of them, if all of them happen, but also want each of them to be explored separately and to kind of stand on their own as much as they can. And I think that's a healthy way. Maybe in the future, they get merged together. So they land together. I don't know. I think that maybe the I like the fact there's multiple proposals that will complement each other, but feel like exploring this separately. 
so it's can be kind of ensured that it kind of works as well as it can without, say, for example, with pattern matching the risk there is because pattern matching creates its own syntax scope it could almost create any new syntax it wanted, whereas exploring this separately forces that it to be more compatible with the rest of the language. I think that's a healthy way to explore this. Big plus 1 to stage one. +ACE: Yes. So further just echoing what people have said, full-on positive this going forward. I like the syntax like maybe with some tweaks but generally it looks good. Before someone else says it, it this also seems like with pattern matching, this proposal, the ADT / enums proposal, there's that kind of implicit Epic happening of, I kind of want each of them, if all of them happen, but also want each of them to be explored separately and to kind of stand on their own as much as they can. And I think that's a healthy way. Maybe in the future, they get merged together. So they land together. I don't know. I think that maybe the I like the fact there's multiple proposals that will complement each other, but feel like exploring this separately. so it's can be kind of ensured that it kind of works as well as it can without, say, for example, with pattern matching the risk there is because pattern matching creates its own syntax scope it could almost create any new syntax it wanted, whereas exploring this separately forces that it to be more compatible with the rest of the language. I think that's a healthy way to explore this. Big plus 1 to stage one. JHD: So this is certainly a stage 1 concern and not a stage 1 blocker. But I have a lot of concerns about the syntax. In particular, I think the `identifier{` is not what I would consider pleasant, and I don't think dropping the object extractor functionality is actually viable. So I think there's a lot of syntax exploration that will need to be done. so that's just brought up as assuming. that this advances to stage one as a heads up, that's something to explore in stage 1. BSH: Yeah, my main concern here is, I feel like this feature is creating a way to write code that's very terse and clever but not necessarily easier to get right or easier to understand. In several of the examples, it looks to me like, you'd have to - to really know what's going on you would have to go and read what the extractor code does to understand. Well, what things can I use this extractor with? And what can I put on the right hand side? I don't see the benefit to balance out the readability. or what, wait, what does this in some way? Let you write more efficient code easily. or does it somehow - aside from the magical feeling and cleverness, what's the benefit? -RBN: Well. for one I'd like like to say that there's significant benefit when it comes to pattern matching, there is a lot of evidence for this syntax in a number of very popular languages from C#, F#, R -BSH:ust even Scala and many others have very similar constructs when it comes to what pattern matching looks like especially when dealing with complex objects. This definitely dovetails with what an algebraic data types proposal might look like as a way to express tagged records and tuples within a specific domain like we would with option having a Some and aNone. and having this, in pattern matching but not having it in destructuring would be a mistake. 
And the destructuring side of things is still extremely valuable and heavily used in those other languages that have these capabilities.
+RBN: Well, for one I'd like to say that there's significant benefit when it comes to pattern matching. There is a lot of evidence for this syntax in a number of very popular languages: C#, F#, Rust, even Scala, and many others have very similar constructs when it comes to what pattern matching looks like, especially when dealing with complex objects. This definitely dovetails with what an algebraic data types proposal might look like, as a way to express tagged records and tuples within a specific domain, like we would with Option having a Some and a None. And having this in pattern matching but not having it in destructuring would be a mistake. And the destructuring side of things is still extremely valuable and heavily used in those other languages that have these capabilities.

BSH: I can completely see how it improves things with pattern matching. And perhaps my criticism is partly because I'm not completely convinced about pattern matching itself. So I think what would address this (not something you can do now) is if I could have a solid example of some code: here's how you would write the code with this feature available, and here's how you write it without it, and it's not just that it's shorter, but it should be actually easier to understand and maybe more performant, because there's some benefit that you're getting performance-wise out of the environment doing this for you. That's what I would need to see. That's all I'm saying. I'm not trying to block this or anything, that's just my concern. I feel like it just feels too clever and not really overall beneficial. That's it for me.

-RBN: Well, I would say yes, it is clever. But it is also we're not again we're not the first ones that would be using this. It's been a long-standing pattern in a number of other languages. And while it is clever it does allow you to be more terse It allows you to have shorter lines of code when it comes to code reviews, it allows you to do some complex things that you cannot do with destructuring that require multiple statements to execute.
I have to check. I do believe I have an example of this on the explainer that shows the kind of the difference. I'd have to look. I think it's the instant extractor example that I showed in the earlier slide. That's the code to do this without it is multiple lines of code that there's possible repetition additional checking have to do so you're not redoing work. It does add a lot of people not having this today means more complex code that this can significantly reduce. Then having a form that if and when your team is, Our. is, proposed accepted the end advanced. having something that allows you to kind of work easily with these type of strict data structures would be. I think would be invaluable as evidenced by how commonly they're used in languages like rust. RPR: Yes, it's the sense that it will be easy to satisfy BSH's request here to just reviewing the explainer and seeing that we've got those examples is crystal clear. @@ -519,7 +518,7 @@ RBN: Yeah, I'm happy to go through and add additional examples. KG: +1 -MM: Okay, so first of all, I support stage 1, but I do think it's worth explaining why I'm skeptical. I think that this feature — if you're constructing a new language from scratch, this is very useful. In fact, I did something that has some similarities to elements of this in another language and it was very, very useful. I loved it. The thing that we need to remember is that millions of people learn JavaScript as their first language, and they learn it from people that we would not consider to be language people or language experts and the fact that something is pleasant and viable in Scala — Scala does not have the dynamic range of programmer expertise that JavaScript has. People often learn to program in JavaScript by looking at other people's code and making incremental modifications. We're already a very hard-to-learn language and our language is already too hard for the purpose that it serve the world if we're over our syntax budget. So, I'm very skeptical this one but I also want to say — because I'm not doing anyone any favors by not saying this — the existing pattern matching proposal with the existing syntax on the interpolation and `with` is a non-starter. I could not imagine agreeing to go to stage 2 in that form. If we were going to do that, I think what Ron is showing here is a basis for a pattern matching proposal with a much more intuitive syntax. That would be the only way that we would get to pattern matching that I could see admitting into the language. So in any case, pattern matching research at stage 1, this proposal's extractor research at stage 1… that's fine. +MM: Okay, so first of all, I support stage 1, but I do think it's worth explaining why I'm skeptical. I think that this feature — if you're constructing a new language from scratch, this is very useful. In fact, I did something that has some similarities to elements of this in another language and it was very, very useful. I loved it. The thing that we need to remember is that millions of people learn JavaScript as their first language, and they learn it from people that we would not consider to be language people or language experts and the fact that something is pleasant and viable in Scala — Scala does not have the dynamic range of programmer expertise that JavaScript has. People often learn to program in JavaScript by looking at other people's code and making incremental modifications. 
We're already a very hard-to-learn language and our language is already too hard for the purpose that it serve the world if we're over our syntax budget. So, I'm very skeptical this one but I also want to say — because I'm not doing anyone any favors by not saying this — the existing pattern matching proposal with the existing syntax on the interpolation and `with` is a non-starter. I could not imagine agreeing to go to stage 2 in that form. If we were going to do that, I think what Ron is showing here is a basis for a pattern matching proposal with a much more intuitive syntax. That would be the only way that we would get to pattern matching that I could see admitting into the language. So in any case, pattern matching research at stage 1, this proposal's extractor research at stage 1… that's fine. RPR: Thank you Mark. I think KG was agreeing. @@ -529,7 +528,7 @@ RPR: Excellent. We have one less than a minute remaining. WH: Yep, I agree with everything Mark said. -JWK: I like the round `(` form but I am skeptical on the the `{` because in a previous slide Ron mentioned about the reflective syntax that can construct something with the curly braces syntax (`const x = Map { }`), we need to preserve this syntax space for other future usage. Like pattern matching requires a match `match [No LineTerminator here] {}`, do expressions also need that `do [No LineTerminator here] {}`. If we use this kind of syntax for construction, we will waste too much syntax space. +JWK: I like the round `(` form but I am skeptical on the the `{` because in a previous slide Ron mentioned about the reflective syntax that can construct something with the curly braces syntax (`const x = Map { }`), we need to preserve this syntax space for other future usage. Like pattern matching requires a match `match [No LineTerminator here] {}`, do expressions also need that `do [No LineTerminator here] {}`. If we use this kind of syntax for construction, we will waste too much syntax space. RBN: Yeah. I again, I'm aware of that also So match Requires match, NLTH, open paren followed by an expression. A close paren and then another no line Terminator in a curly. So this doesn't step on Match. It doesn't step on doExpressions because `do` is a keyword, I do see how it could potentially step on other future identifier early in expression space. And that is again why I am still declaring this as very, very early syntax and something that I'm actively planning to investigate how whether the object patterns are Whether the option pageants are an option. I think I had on the on here. It's that, it may be necessary to drop object extractor or patterns in favor of only a regex extractor patterns. I'm worried about the possible break-in symmetry with what we'd like to see with. Hey. ADT's, and then ADT's still needs to further investigate. The proposal still needs to further investigate what that ADT construction syntax would look like. So, there's a lot of exploration needs to be done here, and the upside is that we could still potentially move forward with the array extractor capabilities and for destructuring, and continue to iterate on object extractors as we go. but at the very least to having investigate the syntax based gives us a chance to find those inconsistencies and look for alternatives for syntax? @@ -556,7 +555,7 @@ NRO: so, from an Ecma 262 point of view, module loading is synchronous because t NRO: Okay. so we have mostly seen how loading modules works, Why am I talking about this? 
There are different proposals related to module right now. One of them is module blocks which allows creating a modyke that potential imports other models, but the important thing is that this module is created inline. And so it's not created by the host. and, we can later dynamically import this module to trigger the loading and execution of all the dependencies. so, we will need a new host to look, for example I'm calling it HostLoadModuleDependencies to load the dependencies of a module that you already have.Then we have the import reflection proposal. That actually stopped. When it comes from JavaScript modules, it allows loading a module without actually loading dependencies and without executing it. So we need two new hosts hooks now, one to load the module without actually loading the dependencies and want to them later load. The dependencies of the module will auditor righteously. So one of these two hooks is the same as what we need for module blocks. The other is a fourth one. one. and, lastly, we have the compartments proposal. Compartments allow virtualizing The related host Behavior to the final module loader. It needs to specify how the graph loading process works by delegating the loading of a single module records to an async function. -NRO: so, can we avoid introducing all these new host hooks and duplicating the loading algorithm between ecma262 and hosts. Well, yes, we can introduce a new single hook that we can use for every single of those use cases are presented, which is an asynchronous version of HostResolveImportedModule. I called it HostLoadImportedModule, it takes a module specifier and then loads it and (?) without doing anything else. So, with this hook, how would loading and evaluating a module work? Well. first the host has to load the entry point. It can reuse the same logic that uses in host imported model. Then we have the again the graph loading phase but this time the graph loading phase is not managed by the Important. Is that it calls Lodge, requested modules method in equal to six books, Ahmed 262 to recursively load all the dependencies using this new host hook which might be asynchronous and stores. the result of each call to this hook into an internal cache. And this internal cache is made it so that later we can access these modules synchronously instead of calling the asynchronous host hook. and, this process is mostly all async. After. the graph loading phase, we have the graph linking phase. So the host calls the link method again. And this time, the link method iterates over all the modules in a graph without interacting with the host anymore, because we already have the module records in this inline cache . And finally, the whole starts the evaluation phase by calling the evaluating a do we evaluate method in 262 and again we iterate through the module graph and we evaluate all the modules. oh, Also, the evaluation phase does not call into host hooks anymore. +NRO: so, can we avoid introducing all these new host hooks and duplicating the loading algorithm between ecma262 and hosts. Well, yes, we can introduce a new single hook that we can use for every single of those use cases are presented, which is an asynchronous version of HostResolveImportedModule. I called it HostLoadImportedModule, it takes a module specifier and then loads it and (?) without doing anything else. So, with this hook, how would loading and evaluating a module work? Well. first the host has to load the entry point. 
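A toy model of the single-hook design NRO is describing, under the assumption that one async hook loads a module and everything afterwards works from a cache. None of this is spec text; `hostLoadImportedModule` and the `sources` registry are illustrative stand-ins.

```javascript
// Toy model of "one async load hook, then sync link/evaluate from a cache".
const sources = {
  "main.js": { deps: ["dep.js"], run: () => console.log("evaluate main.js") },
  "dep.js": { deps: [], run: () => console.log("evaluate dep.js") },
};

// Stand-in for the proposed hook: takes a specifier, loads that one module, nothing else.
async function hostLoadImportedModule(specifier) {
  return sources[specifier];
}

async function loadGraph(specifier, cache = new Map()) {
  if (cache.has(specifier)) return cache;
  const mod = await hostLoadImportedModule(specifier); // the only host interaction
  cache.set(specifier, mod);
  for (const dep of mod.deps) await loadGraph(dep, cache); // recursively fill the cache
  return cache;
}

async function importModule(entry) {
  const cache = await loadGraph(entry); // async loading phase
  // Linking/evaluation can now run synchronously against the cache, with no host hooks.
  // (Reverse insertion order is enough for this toy graph; real linking is a topological walk.)
  for (const mod of [...cache.values()].reverse()) mod.run();
}

importModule("main.js"); // logs "evaluate dep.js", then "evaluate main.js"
```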
It can reuse the same logic that uses in host imported model. Then we have the again the graph loading phase but this time the graph loading phase is not managed by the Important. Is that it calls Lodge, requested modules method in equal to six books, Ahmed 262 to recursively load all the dependencies using this new host hook which might be asynchronous and stores. the result of each call to this hook into an internal cache. And this internal cache is made it so that later we can access these modules synchronously instead of calling the asynchronous host hook. and, this process is mostly all async. After. the graph loading phase, we have the graph linking phase. So the host calls the link method again. And this time, the link method iterates over all the modules in a graph without interacting with the host anymore, because we already have the module records in this inline cache . And finally, the whole starts the evaluation phase by calling the evaluating a do we evaluate method in 262 and again we iterate through the module graph and we evaluate all the modules. oh, Also, the evaluation phase does not call into host hooks anymore. NRO: just for comparison. This was the old process, so you can see that the linking phase and the evaluation phase with the old hooks called into the host while get new ones don't. @@ -615,7 +614,7 @@ ACE: So we're back. I think our 45-minute time box on Tuesday was ambitiously sh ACE: So going now into actually the micro level changes to how the spec is currently written. Something that we talked about on Tuesday is what happens with the record wrapper objects. So, you have a record primitive and then it gets coerced to an object in some way. It makes its way to the ToObject operation and you get back the object. So right now in the spec you get back a Record exotic object. So this is an object that has an internal hidden slot `[[RecordValue]]` that still holds onto the original primitive. And then it has a set of custom internal methods to ensure that this object behaves appropriately. So when you look at properties, it gets delegated through to that internal primitive that it's holding onto. One outcome of this, as people that were there on Tuesday are aware, is that this kind of opens up the brand checking question of how do you check for the presence of that `[[RecordValue]]` slot? It's kind of not directly observable. You have to observe these slots by other means. KG raised an interesting idea that we hadn't previously thought about which was: what if, when you coerce these things to objects, you don't get back an exotic object. you just get back an ordinary object, similar to - for all intents and purposes, apart from a few small places, the object is effectively just a frozen object. The reason we talked about this for records and not tuples is because there's a lot more happening with a tuple. While records are collections of string keys to values, tuples have a lot more. So they have TypedArray-style indexing. So if you use integer-indexed access, then if you read out of bounds, it doesn't then delegate to the prototype: it stops and returns undefined. So you can't add an integer index onto `Tuple.prototype`, and then make that suddenly appear on all on all tuples. Then all other properties you look up are then forwarded on to the Prototype. So that's how you can get to the kind of symbol protocols and the tuple methods. 
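A rough emulation, not the proposal itself, of the Tuple wrapper behavior just described: integer-indexed reads come straight from the tuple's values and never fall through to the prototype, while named lookups still do.

```javascript
function makeTupleWrapper(values, proto) {
  const frozen = Object.freeze([...values]);
  return new Proxy(Object.create(proto), {
    get(target, key, receiver) {
      const index = typeof key === "string" ? Number(key) : NaN;
      if (Number.isInteger(index) && index >= 0) {
        // Out-of-bounds reads stop here and return undefined, never the prototype.
        return index < frozen.length ? frozen[index] : undefined;
      }
      if (key === "length") return frozen.length;
      return Reflect.get(target, key, receiver); // named lookups forward to the prototype
    },
  });
}

const proto = {
  sum() { let s = 0; for (let i = 0; i < this.length; i++) s += this[i]; return s; },
};
const t = makeTupleWrapper([10, 20, 30], proto);
console.log(t[1]);    // 20
console.log(t[99]);   // undefined, even if "99" is later added to the prototype
console.log(t.sum()); // 60
```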
we think this is a really important property of tuples when reasoning about their kind of immutability that you have this guaranteed action, that when you access `length` or an integer, you're guaranteed what you're going to get back. And there's no way that suddenly becomes dynamically related to the prototype. They're also different in that they have a prototype. So the methods on that prototype do all brand check, so you can't take Tuple.prototype.map and then call and pass in an array-like, if the Tuple object that gets created with ToObject wouldn't have this slot then that object wouldn't be usable. in terms of you wouldn't be able to dot access the methods and use them because the receiver would be a plain object which wouldn't pass the brand check and would throw. So we think it's important that tuples do keep their special wrapper object, and, it's also ok in that that they don't hit this issue of not having a way of brand checking. -ACE: So, the `ToObject` operation appears in a few places. so if you have a sloppy function that's called with a primitive as the receiver, then within the execution of that function, the primitive is being passed to ToObject. There's also kind of explicitly passing the primitive to the `Object` constructor as a function call, and there's a few other little places that it can pop up. Usually when primitives get coerced to their objects, a common way you can brand check for these is using one of their prototype methods and calling it and you can try-catch around that to assert - you can check if it's going to have that internal slot because all these methods will throw. The issue of Records is that not having a prototype, not having any methods, not really having any static methods, there's not really been a clear place where this brand check can be. So we've kind of looked in lots of ways and we are actually - stepping back and looking at KG's suggestion: just not having a record exotic object in the first place it kind of completely sidesteps this issue. If there is no brand to check then there isn’t a brand-check that is missing in the first place. Which is why we think this is quite an interesting way to solve this. +ACE: So, the `ToObject` operation appears in a few places. so if you have a sloppy function that's called with a primitive as the receiver, then within the execution of that function, the primitive is being passed to ToObject. There's also kind of explicitly passing the primitive to the `Object` constructor as a function call, and there's a few other little places that it can pop up. Usually when primitives get coerced to their objects, a common way you can brand check for these is using one of their prototype methods and calling it and you can try-catch around that to assert - you can check if it's going to have that internal slot because all these methods will throw. The issue of Records is that not having a prototype, not having any methods, not really having any static methods, there's not really been a clear place where this brand check can be. So we've kind of looked in lots of ways and we are actually - stepping back and looking at KG's suggestion: just not having a record exotic object in the first place it kind of completely sidesteps this issue. If there is no brand to check then there isn’t a brand-check that is missing in the first place. Which is why we think this is quite an interesting way to solve this. AC:E So, if I just drop over to the actual [PR](https://github.com/tc39/proposal-record-tuple/pull/357). 
The PR effectively deletes a lot of code, it deletes like, three hundred lines of code and adds 30. and it means that we completely remove the record exotic object, the one that has its own implementations of DefineProperty. GetOwnProperty, HasProperty, Get etc. It removes all of those things. and, instead when a record is passed to `ToObject`, we just create an ordinary object with a null prototype, we copy all the properties and their values from the record primitive into that object, and then we freeze that object. Implementers don't necessarily have to do that, they could still do things more efficiently. But from a spec perspective, you know, it would be a kind of a linear copy and perhaps that's what implementers do, there's no expectation that they would optimize that but they are free to optimize it if they so choose. @@ -657,7 +656,7 @@ DE: There's three things I want to separate here. One is. these three qualities KG: I agree that someone will at some point encounter an object that they got from coercing a record to an object. I think that this will happen so rarely that I don't want us to spend hardly any effort in the language helping out that case. -MM: So I'm very interested in supporting understandability and debuggability on the of the programmer experience for people who are programmers writing strict mode code. For people who are programming sloppy mode code can't figure out what their program is doing the first thing they should do is switch to strict mode, and until they do that, I don't have any sympathy for their difficulty in understanding their code. I'm certainly not willing to pay complexity cost to the language as a whole in order to support the understand ability to somebody writing sloppy code. +MM: So I'm very interested in supporting understandability and debuggability on the of the programmer experience for people who are programmers writing strict mode code. For people who are programming sloppy mode code can't figure out what their program is doing the first thing they should do is switch to strict mode, and until they do that, I don't have any sympathy for their difficulty in understanding their code. I'm certainly not willing to pay complexity cost to the language as a whole in order to support the understand ability to somebody writing sloppy code. JHD: My reply to that. Is that anytime you use third-party code, which in modern JavaScript is almost all the time, you are using code that you don't control, that may or may not be written, and in sloppy mode. So, I agree with you that individuals first party code. I completely agree with you but that is not often enough. @@ -685,7 +684,7 @@ RBN: I, also probably use the previous argument here as well. That these objects BT: I think we're at time. Are the champions going to call for any consensus? -ACE: We would hope we plan to merge this PR you know, as part of a kind of stage 2 design of putting this back together, if people do have really strong objections - there's no rush in us merging the PR, we can leave it open, say, for a week, if people can continue the discussion on the PR. Our position is that we are still in favor of this PR, and also happy to continue the conversation. +ACE: We would hope we plan to merge this PR you know, as part of a kind of stage 2 design of putting this back together, if people do have really strong objections - there's no rush in us merging the PR, we can leave it open, say, for a week, if people can continue the discussion on the PR. 
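To make the proposed ToObject behavior concrete, a small emulation under the assumptions described above: an ordinary, frozen, null-prototype copy of the record's own properties. The `record` argument is a plain object standing in for a Record primitive, since the primitive does not exist in engines.

```javascript
function recordToObject(record) {
  const obj = Object.create(null);          // ordinary object, null prototype
  for (const key of Object.keys(record)) {  // copy own enumerable string-keyed properties
    obj[key] = record[key];
  }
  return Object.freeze(obj);                // frozen, so it stays as immutable as the record
}

const wrapper = recordToObject({ x: 1, y: 2 });
console.log(Object.isFrozen(wrapper));       // true
console.log(Object.getPrototypeOf(wrapper)); // null
// No [[RecordValue]] slot and no exotic internal methods, so there is no brand left to check.
```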
Our position is that we are still in favor of this PR, and also happy to continue the conversation. BT: All right. Thank you. diff --git a/meetings/2022-11/dec-01.md b/meetings/2022-11/dec-01.md index 77f756d9..d602e60c 100644 --- a/meetings/2022-11/dec-01.md +++ b/meetings/2022-11/dec-01.md @@ -2,11 +2,9 @@ ----- - **Remote attendees:** - -``` +```text | Name | Abbreviation | Organization | Location | | -------------------- | ------------- | ------------------ | --------- | | Waldemar Horwat | WH | Google | Remote | @@ -43,17 +41,15 @@ | Caridy Patiño | CP | Salesforce | Remote | ``` - ## Iterator Helpers for Stage 3 Presenter: Michael Ficarra (MF) -- [proposal]() +- proposal - [slides](https://docs.google.com/presentation/d/1npPCpovE6NtFPFvagaq8eoX2VLXM6Tac_fl--7_NrzY/) \ - - MF: Okay. well, like yesterday, my title slide, I forgot to change, but this is for stage three as it says in the agenda, this is not just an update. So this is iterator helpers. You’ve seen this quite a lot recently. At the last meeting we discussed some final tweaks we want to make before stage 3, which we have since made. So I'm going to go over them. The first one, we discussed this a couple times with never any strong opinion given by committee and kind of some mixed feedback from the community as well. Given that, I ended up making what I think is a really uncontroversial call and merging this suggestion, which was to add a counter to all of the Array-equivalent methods that we have. So this would be like, reduce and map and them. You'll see in the summary slide of the end which ones have a counter. I'm calling it a counter not index because that was also brought up during the meetings, but I don't think that's really observable, that's more like a documentation thing. +MF: Okay. well, like yesterday, my title slide, I forgot to change, but this is for stage three as it says in the agenda, this is not just an update. So this is iterator helpers. You’ve seen this quite a lot recently. At the last meeting we discussed some final tweaks we want to make before stage 3, which we have since made. So I'm going to go over them. The first one, we discussed this a couple times with never any strong opinion given by committee and kind of some mixed feedback from the community as well. Given that, I ended up making what I think is a really uncontroversial call and merging this suggestion, which was to add a counter to all of the Array-equivalent methods that we have. So this would be like, reduce and map and them. You'll see in the summary slide of the end which ones have a counter. I'm calling it a counter not index because that was also brought up during the meetings, but I don't think that's really observable, that's more like a documentation thing. MF: We also talked about the possible web compatibility issue with toStringTag. And we decided to ditch the more complicated accessor solution that does a bunch of safety checks, and just make it a writable property. So we closed the accessor one and merged the writeability one. @@ -71,7 +67,7 @@ MF: So, the is a change we made without really any other precedent in the langua MF: This is kind of an infrastructure change. This was all described in the proposal’s spec before, and we separated it out. So now we have a pull request for all of the infrastructure for what a built-in async function is in 262. 
So, both this proposal and `Array.fromAsync` needed the same infrastructure underneath, so we combined them and refined them and I think that this pull request is ready to go. So, if you're looking for "where's all these things that were in there last time?", that change is because it's all been pulled into this 262 PR since it really isn't specific to iterator helpers. It just needs to be there. On a related note, we have one new open question, which is about built-in async methods. Currently I believe they are specified to have a prototype of Function.prototype. Should their prototype be AsyncFunction.prototype? It seems like an uncontroversial suggestion but it's actually, I think not, appropriate because there's really, no difference between (in the spec) specifying a built-in function that always returns a promise, which we have many of, versus specifying an async built-in function. It's really just a spec convenience for how we write it and it's an editorial decision really whether we choose one or the other and I don't think that that editorial decision should have these normative implications about what the prototype is. I'm make that argument in this thread, which I've pasted here, I think there's a similarity as well between spec things like this, and with the actual ecmascript source code where you can have functions and you can have async functions, and the functions can always return promises, but it doesn't change the prototype. And is there any real use to AsyncFunction.prototype anyway? It doesn't even matter. So I would prefer to reject the suggestion and just have all built-in methods, whether async built-in methods or not, extend from Function.prototype. Well, that is the question. I don't think it's a super important question to answer today. It does have to be answered before, obviously before we do implementation and stuff, but that's everything that's happened and I would like to ask for stage 3. I'm sure there will be discussion to be had first. - \ +\ BT: Yes, you are correct. There's four items in the queue. right now. so KG is up first. KG: So MF is aware of my opinion on this question, but as he mentioned one of the recent changes was to align Iterator.from with Iterator.prototype.flatMap, and how they handle the return value. And in particular, the current spec will reject strings, or will not treat strings as being iterable for the purposes of Iterator.prototype.flatMap. I think it is maybe acceptable for Iterator.prototype.flatMap because if you return a string - when you actually want the code points of the string? That's a pretty weird thing to do, and you can return the code point iterator manually. On the other hand, `Iterator.from(string)` just should work. That is a perfectly reasonable thing to write. Iterator.from is a way to get an iterator out of an iterable, and Strings are definitely iterable. There is no reason for it to reject that. So I think Iterator.from(string) should work, even if that means it is inconsistent with Iterator.prototype.flatMap. @@ -84,8 +80,8 @@ MF: I'd be open. JHD: Yeah, I agree with KG. I think that it's bizarre that strings are iterable but either way you're not from string. It makes a lot of sense. -BT: All right. Thank you, Justin. \ - \ +BT: All right. Thank you, Justin. \ +\ JRL: Weakly kind of like the current behavior. It just makes it really easy to explain how flatMap flattening behavior will work. 
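A tiny illustration of the two behaviors being debated for `Iterator.from(string)`. This is not the proposal's API; `iteratorFrom` is a stand-in written so the example runs today.

```javascript
function iteratorFrom(value, { allowStrings = true } = {}) {
  if (typeof value === "string" && !allowStrings) {
    throw new TypeError("strings are not treated as iterable here"); // the current draft's stance
  }
  return value[Symbol.iterator](); // KG's suggestion: just defer to the value's own iterator
}

console.log([...iteratorFrom("abc")]); // ["a", "b", "c"]

try {
  iteratorFrom("abc", { allowStrings: false });
} catch (e) {
  console.log(e instanceof TypeError); // true, matching the spec draft at the time
}
```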
Whatever `from` gives you, if it turns into a real iterator or returns multiple results, that's how flatten would work. If it only returns a single item, then that’s how flatten will work, they'll just be a single item. It kind of makes it nice and easy to explain. I'm just curious how we explain if we go with this change, how do you want to explain flatten's behavior. MF: It doesn't make it simpler, it just stays the same. It's just that the description of Iterator.from would be simpler if you were to defer to that. So if you were to instead define flatMap flattening behavior in terms of Iterator.from, just have the exception in flatMap instead of from. I don't think it changes the complexity of describing that, just where you describe that. @@ -114,7 +110,7 @@ BT: I think maybe we just ask for stage 3 and if folks want to object because th MF: I'm happy to do that. - \ +\ BT: All right. So MF would like stage three for iterator helpers. SYG: for the conditional thing. I Better want to understand what are the methods that are affected? @@ -133,7 +129,7 @@ MF: Yeah. I had mentioned that during the bottom of this slide. Yeah, that depen JHD: Sounds awesome. Thank you. Sorry I missed that slide. -SYG: I am still confused about what we're asking for consensus for. Okay, so we are asking for consensus for everything else, and then waiting to work out whether the from methods will accept strings, or are you asking for consensus that the from methods accept strings, and we're just need to wait to update the the spec draft. +SYG: I am still confused about what we're asking for consensus for. Okay, so we are asking for consensus for everything else, and then waiting to work out whether the from methods will accept strings, or are you asking for consensus that the from methods accept strings, and we're just need to wait to update the the spec draft. MF: The latter. @@ -143,11 +139,9 @@ BT: All right. we have a plus one for stage 3, Explicit, support from LCA. Thank MF: Thank you, everyone. And I also like to thank the other people who have contributed significantly to this proposal, with champions being YSV, GCP who started the proposal, and KG. So thank you all. Thank you. - ### Conclusion/Resolution -* Stage 3 - conditional on changing the Iterator.from handling of strings - +- Stage 3 - conditional on changing the Iterator.from handling of strings ## Async operations @@ -168,9 +162,9 @@ KG: Yeah, it's not totally clear to me what problem this is solving. Like, what JWK: Hmm. so the motivation is to hide promises. I know the motivation may not be very convincing to everyone. If we cannot get to stage 2 it's okay. KG: Yeah, I appreciate the desire to not have to think about promises, but I think we probably need to think about promises sometimes, and it's not worth adding new syntax just to hide promises. But that's just my personal opinion. I don't feel that strongly. \ - \ +\ BT: All right, there's a few other similar topics on the Queue \ - \ +\ EAO: Yeah. pretty much. I could I come in the same sentiment from Mozilla we don't really see. that this brings sufficient benefit to really be proceeding. I mean, all its really doing is saving about nine characters if I can tell right program tactic and I mean. On a personal level, I'm not really sure why if you're writing async/await-based code, you should not be aware of promises. and the capital P promise that can be used. JWK: hmm, or maybe we can think of that as a chance to rethink how we educate async/await to JavaScript developers. 
Today async/await is described as a syntax sugar for promises, but if we are to fill this gap, maybe we can change the async await, as a more top-level structure. @@ -211,7 +205,7 @@ RPY: Okay, in that case, you may as well deliberately force users to learn promi JWK: Hmm. Okay. Thanks. -JHD: I have a comments about that. So for me, this proposal is not about hiding promises in any way. I'm technically a co-champion this proposal and I think that that motivation is actually a reason not to have it. For me it's about the ergonomics of using promises, and reaching for `await` and being able to use `await.all`, for example, instead of having to kind of break the chain. When you do `Promise.all` you have to do the thing with the parentheses -that admittedly with something like pipeline, would be a more ergonomic, but pipeline doesn't exist yet. A lot of the footguns that I see in `async`/`await` usage are caused by the attractiveness of using `await` unnecessarily. And when these syntactic options are available than they become equally attractive and often they are the more correct choice than `await`. +JHD: I have a comments about that. So for me, this proposal is not about hiding promises in any way. I'm technically a co-champion this proposal and I think that that motivation is actually a reason not to have it. For me it's about the ergonomics of using promises, and reaching for `await` and being able to use `await.all`, for example, instead of having to kind of break the chain. When you do `Promise.all` you have to do the thing with the parentheses -that admittedly with something like pipeline, would be a more ergonomic, but pipeline doesn't exist yet. A lot of the footguns that I see in `async`/`await` usage are caused by the attractiveness of using `await` unnecessarily. And when these syntactic options are available than they become equally attractive and often they are the more correct choice than `await`. JWK: await an array? @@ -219,13 +213,13 @@ JHD: Yeah, I mean if you `await.all` an array, that is much more ergonomic and a BT: there's, a couple plus ones to that RRD. -SFC: +1 to what JHD said. \ - \ +SFC: +1 to what JHD said. \ +\ RRD: I can't say can't say adoration Marshall. Yeah. likewise actually I’ve refactored code, that was using sequential await because awaits looked very. you know. once you're in a single weight function, you want to use await, I think, psychological effect and that's absolutely not the language design concept of thing in a way. it's more psychologically, speaking your more enticed to use a for everything and I end up doing. sequential weights. where actually you could probably things and the availability of await.all might be a way to kind of push the developer to go towards this. Although I wouldn't like say this with certainty without research here. So it's certainly interesting to try to get people “to do the right thing”. that's it. BT: And JRL. -JRL: Yeah. Sort of the exact same thing. I have had to refactor code from beginners who do serial awaits because they want to avoid using Promises. Their mindset is that async is the replacement for Promises, using async await syntax is how they write their code. And so they avoid Promises methods entirely, Promise.all specifically is the one that I see the most, but allSettled, any, and race, I’m sure those will come up as well. It seems like that has the most improvement to me because it allows beginners to think of this as being the way that you do something in async/await syntax. +JRL: Yeah. 
Sort of the exact same thing. I have had to refactor code from beginners who do serial awaits because they want to avoid using Promises. Their mindset is that async is the replacement for Promises, using async await syntax is how they write their code. And so they avoid Promises methods entirely, Promise.all specifically is the one that I see the most, but allSettled, any, and race, I’m sure those will come up as well. It seems like that has the most improvement to me because it allows beginners to think of this as being the way that you do something in async/await syntax. RBN: So, I posted this also in the Matrix, but it I think one of the things that was discussed was, you know, async/await hides promises. But that's never really the use case of async/await. It's not to hide promises. It's to allow you to take what was previously somewhat complicated continuing picking continuation-passing style code and then allow you to write that code linearly so that you can leverage the benefits of existing control flow such as continue break return. Loops etcetera, things that you can't do when you're doing continuation passing without increasing the complexity of writing that code. These types of operations like await.all really don't provide anywhere near the same kind of benefit. There they are for all intents and purposes just hiding Promise, but you should not be hiding promises. You need to know how promises work to use async await. I think it's a mistake to try to make that a goal, but a rationale for a proposal. I don't really see the benefit that this introduces. And instead, this looks more like additional complexity, and trying to hide something that you really should know how it works. And to the previous point about folks not using promise.all and using serial awaits, I don't think that this solves that either, it does make it fewer characters to reach for the thing that they're also not reaching for currently. And I think that's more a matter of documentation and teaching, rather than ease of use. Promise methods are things that you should continue to use, you should know how to use. And I don't think that this actually addresses that. All right. thank you, Ron. @@ -234,7 +228,7 @@ BT: The queue is now growing rapidly again. we are getting a little bit tight on RRD: I agree with him. Yeah. Very was saying, because when I'm saying+1 this is a problem, I'm not sure this is exactly the solution, it should be research. Yeah, that's it. ACE: RBN said, a similar thing to what I was going to say but in a better way. \ - \ +\ DRR: So I think I think one of the unfortunate things is discussion is that like the most convincing thing for me is the heaviness right? The fact that promise that Arrangement, that is just like cumbersome. You do it over and over again. but really Promise is a namespace. If instead of these functions in Promise were like methods of something shorter, part of like a value called async or part of something that you Auto Import. Like, you just imported from another module. That would probably give you the lightweight sort of thing that you're looking for. And so that's kind of like - it feels weird that we just would be saying oh well writing Promise.all that is very long. You have to do it over and over again. Yes, maybe there's like a sort of thing where you can align it with async/await. So maybe it's more syntax oriented, but if it was just more like, wait, we probably wouldn't be having this discussion. 
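For comparison, the pattern being discussed. The `Promise.all` version is ordinary JavaScript today; the `await.all` form in the comment is the proposal's suggested shape and does not exist in the language.

```javascript
async function loadBoth(fetchUser, fetchPosts) {
  // Today: reach out of await syntax and into the Promise namespace.
  const [user, posts] = await Promise.all([fetchUser(), fetchPosts()]);

  // Proposed (hypothetical): const [user, posts] = await.all [fetchUser(), fetchPosts()];
  return { user, posts };
}

// Usage with stand-in async functions:
loadBoth(async () => "user", async () => ["post"]).then(console.log);
// { user: "user", posts: ["post"] }
```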
And so I feel like based on that, it doesn't really matter if we add new syntax, overall. But, you know, I'm willing to hear more arguments over time. JWK: Thank you. @@ -249,20 +243,17 @@ BT: The queue is empty. JWK: Okay, maybe we need to do more research on this. but I don't think there is any other way to slow this problem if we need to do it in the syntax space. I'm not going to ask for stage 2 now. \ - BT: All right. I think just make sure to check the notes. There's a few interesting ideas for further investigation there, so Thanks for that discussion. - ### Conclusion/Resolution -* More research required. Maybe other solutions. Clarify problem. - -* Did not advance to stage 2 +- More research required. Maybe other solutions. Clarify problem. +- Did not advance to stage 2 ## Intl era and monthCode proposal for Stage 1 -Presenter: Shane F. Carr (SFC) +Presenter: Shane F. Carr (SFC) - [proposal](https://github.com/FrankYFTang/proposal-intl-era-monthcode) @@ -270,7 +261,7 @@ Presenter: Shane F. Carr (SFC) SFC: I'm presenting this on behalf of FYT who is not able to join us today on Thursday. But this is his presentation and I'm just walking through it on his behalf today. So let's get started. Okay, so this is a new proposal currently at stage zero. The motivation here is that in order to implement Temporal non-Gregorian calendars, it's necessary to specify some details about how era and era year and month code etcetera behave. The Temporal specification has intentionally left this piece out of scope, and this proposal aims to fill the gap. I also noticed that a lot of the Temporal champions are not currently here. So I guess I'm the one presenting this. So, I'll continue. So the scope here is that the Temporal proposal specifies the ISO 8601 calendar and the UTC time zone, but it does not specify the details of how those other calendars and time zones behave. So, this topic has been discussed several times in the TG2 meeting, most recently in October when FYT walked through these same steps of this same slide deck. So the goal of the proposal, again, is to define the semantics of the non Gregorian calendars scoped to the set that is defined by CLDR ( the common locale data repository). There are about 20 calendars that are specified in CLDR, just for context. These are things like the Hebrew calendar, the islamic calendar — there are several different versions of the Islamic calendar — the Buddhist calendar — there are several different versions of that — Ethiopian, Chinese, and so on. And all of these calendars are used either in government, official, cultural, or day-to-day use in various countries around the world. So the goal of this proposal is to clarify that so that it's implementable. So that these calendars are implementable when using the Temporal calendar specification. Cool. So more specifically, the scope here is that the Temporal.calendar interface which again is already stage 3 requires that the non Gregorian calendars, at least specified the month code, as well, as many of the calendars are going to be using the era and eraYear. But it does not specify exactly how those behave. -SFC: So, yep. Let's go ahead and look at some examples here. So let's suppose that you wrote this line of code you wanted to create a date in the Hebrew calendar, month M05L in year 5779. So the like, what is a, what is the year? What does, what does the year mean? What is it relative to? What codes are allowed as monthCode? And, in this case, M05L corresponds to the month Adar I. 
in, and in the year 5779 as you can can see but the exact behavior of how the monthCode behaves is not currently specified for implementation to be consistent, which I think this really drives home the main crux of this proposal, is that without this proposal like different browser engines, could implement these monthCodes and era codes differently and it would be totally permissible. And and this does not, Help of providing a web compatible API because the this is much of the goal of ECMA 402 is that we don't want to specify too much of internationalization behavior because so much of it is implementation and Locale dependent, but we do need to at least set up some guardrails And, you know, the real Crux of this proposal is that we need to set up these guardrails, and one of the key things that really be cannot avoided is identifier strings are really something that really need be to be clear and specified, because the actual code people write - the actual JavaScript files that get sent to two engines - need to be able to be interpreted in the same way. So it's going to walk through a few more slides here, a few more examples and then I'll get to the queue. So the Gregorian calendar eras as well. We don't currently specify what those eras are and how they work. Like, if you did want to specify the example of the identifier for a Gregorian era, which one do you want to use? There's many, many choices. There are many reasonable choices. One implementation could choose a different syntax to another one. So one concern that was brought up about this proposal as well we don't want to actually be defining what these strings are. We should be able to point to a pre-existing Authority for these codes for these identifiers. So, why don't we just use CLDR the identifiers? There's a little bit of a problem here. CLDR doesn't actually define identifiers. It uses integers from 0 to 1, and then for a few calendars like Japanese all the way to 236. This is, this is what it looks like in CLDR data, it has you know, the eras are defined as integers. But this is not ergonomically… it's not easy to explain this behavior and it makes it very basic behavior. That is only well defined for the purpose of data transfer and data interchange, but it does not make sense for actually writing in code because these identifiers have no meaning outside of the data that they represent. So, one of the one things that I really, really want to emphasize here is that this proposal is going up for stage 1. Stage 1 entrance requirements are that we've identified the champion, outlined the problem space with examples, a high level API, discussion of key algorithms, identification of cross-cutting concerns, and having a repository that summarizes all this. All these things are done. Now, I understand one of the main concerns about this proposal is where do the actual strings get defined? FYT in the presentation Just showed that Well, they don't, they're not actually defined currently in CLDR itself, because they have integer identifiers, but maybe we don't want to put them into ECMAScript because, you know, ECMA 402 maybe shouldn't be the authority, the global authority for what for where these codes are. specified. So I think one of the main questions to be answered before we go to stage 2 is exactly what authority we use and, you know, I think that FYT has additional slides to explain different options for the authority for where these strings are specified. 
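The kind of call whose behavior this proposal would pin down, using the property-bag form of Temporal's stage 3 API. This is the proposed shape, not settled behavior, and it was not available in engines at the time of this discussion.

```javascript
const date = Temporal.PlainDate.from({
  calendar: "hebrew",
  year: 5779,        // what the year is relative to is currently implementation-defined
  monthCode: "M05L", // the leap month Adar I; which codes are valid is what needs specifying
  day: 1,
});
console.log(date.toString()); // should round-trip the same way in every engine
```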
So stage 1 acceptance signifies that the committee expects to devote time to examining the problem space solutions and cross-cutting concerns. So therefore, we'd like to request approval for advancement of this proposal to stage 1. And I believe that's the end of slides. So now we can go to Q&A. +SFC: So, yep. Let's go ahead and look at some examples here. So let's suppose that you wrote this line of code you wanted to create a date in the Hebrew calendar, month M05L in year 5779. So the like, what is a, what is the year? What does, what does the year mean? What is it relative to? What codes are allowed as monthCode? And, in this case, M05L corresponds to the month Adar I. in, and in the year 5779 as you can can see but the exact behavior of how the monthCode behaves is not currently specified for implementation to be consistent, which I think this really drives home the main crux of this proposal, is that without this proposal like different browser engines, could implement these monthCodes and era codes differently and it would be totally permissible. And and this does not, Help of providing a web compatible API because the this is much of the goal of ECMA 402 is that we don't want to specify too much of internationalization behavior because so much of it is implementation and Locale dependent, but we do need to at least set up some guardrails And, you know, the real Crux of this proposal is that we need to set up these guardrails, and one of the key things that really be cannot avoided is identifier strings are really something that really need be to be clear and specified, because the actual code people write - the actual JavaScript files that get sent to two engines - need to be able to be interpreted in the same way. So it's going to walk through a few more slides here, a few more examples and then I'll get to the queue. So the Gregorian calendar eras as well. We don't currently specify what those eras are and how they work. Like, if you did want to specify the example of the identifier for a Gregorian era, which one do you want to use? There's many, many choices. There are many reasonable choices. One implementation could choose a different syntax to another one. So one concern that was brought up about this proposal as well we don't want to actually be defining what these strings are. We should be able to point to a pre-existing Authority for these codes for these identifiers. So, why don't we just use CLDR the identifiers? There's a little bit of a problem here. CLDR doesn't actually define identifiers. It uses integers from 0 to 1, and then for a few calendars like Japanese all the way to 236. This is, this is what it looks like in CLDR data, it has you know, the eras are defined as integers. But this is not ergonomically… it's not easy to explain this behavior and it makes it very basic behavior. That is only well defined for the purpose of data transfer and data interchange, but it does not make sense for actually writing in code because these identifiers have no meaning outside of the data that they represent. So, one of the one things that I really, really want to emphasize here is that this proposal is going up for stage 1. Stage 1 entrance requirements are that we've identified the champion, outlined the problem space with examples, a high level API, discussion of key algorithms, identification of cross-cutting concerns, and having a repository that summarizes all this. All these things are done. 
Now, I understand one of the main concerns about this proposal is where do the actual strings get defined? FYT in the presentation Just showed that Well, they don't, they're not actually defined currently in CLDR itself, because they have integer identifiers, but maybe we don't want to put them into ECMAScript because, you know, ECMA 402 maybe shouldn't be the authority, the global authority for what for where these codes are. specified. So I think one of the main questions to be answered before we go to stage 2 is exactly what authority we use and, you know, I think that FYT has additional slides to explain different options for the authority for where these strings are specified. So stage 1 acceptance signifies that the committee expects to devote time to examining the problem space solutions and cross-cutting concerns. So therefore, we'd like to request approval for advancement of this proposal to stage 1. And I believe that's the end of slides. So now we can go to Q&A. BT: The queue is empty. but we can give it a few moments to see if folks want to enter the queue. All right. The queue remains empty. So I think That is just militant agreement. @@ -280,7 +271,7 @@ USA: Yeah, thank you, SFC. One thing that we communicated directly to SFC and FY API: Just wondering, if was CLDR like the like the people working on that? I mean, have they been posed? The question, I mean if there are they against creating identifiers for these or if we chose the solution for it, are they okay with it? Do they see in… -SFC: I mean we've definitely been talking with the Unicode folks. We have several docs and proposals in the Unicode space right now and I mean, ICU4X is already using some of these identifiers. So, like these identifiers are going to have to be written down somewhere and I think it's good that Temporal the forcing function, because it hasn't really been a problem that unicode had to solve, but I think that it is definitely a problem that needs to be solved and I've not heard any pushback from Unicode on that. +SFC: I mean we've definitely been talking with the Unicode folks. We have several docs and proposals in the Unicode space right now and I mean, ICU4X is already using some of these identifiers. So, like these identifiers are going to have to be written down somewhere and I think it's good that Temporal the forcing function, because it hasn't really been a problem that unicode had to solve, but I think that it is definitely a problem that needs to be solved and I've not heard any pushback from Unicode on that. API :So yeah. you know what are they @@ -304,11 +295,9 @@ SFC: Okay, if I just like another five to ten minutes for an additional question BT: All right. - ### Conclusion/Resolution -* Stage 1 - +- Stage 1 ## Intl DurationFormat Stage 3 Update @@ -316,13 +305,13 @@ Presenter: Ujjwal Sharma (USA) - [proposal](https://github.com/tc39/proposal-intl-duration-format) -- [slides]() +- slides USA: Hello. And welcome to another update for Intl.DurationFormat. The proposal has been proceeding along quite nicely since we've been become such big fans of the history slides, I would like to present some myself. The proposal itself actually came up in a discussion with ZB. in March of 2016. but the actual proposal Of was was kick-started by YMD in February, 2020. So the proposal Prestige, one in February 2021. I joined in later as Champion went to stage two in June, and then moved on from there. So, it's missing the fact that there's another stage 3 update now, But yeah, I know, It over time. 
I took over from YMD. and you know, me made a bunch of updates. USA: Just a quick refresher. What is DurationFormat? Well, DurationFormat is a Intl formatter like many other Intl formatters, but it formats compound durations of time. So, as opposed to something like relative time format it, takes in a pom pom duration of time with multiple units. It's possibly a format that it is locale sensitive not only for regular sort of Pros like duration for many but also for a digital Styles since those are also local dependent. It is the actual proposal itself. If you think about it architecturally it's built on top of NumberFormat, as well as ListFormat. So it takes all units. It formats them using the unit formatting option of number format and then coalesces them together using this format. So as I mentioned relative time format. You might know that there is this constructor that would that already allows you to format a single unit of time, and DurationFormat sort of builds on top of that and improves on that. and well, to state the obvious. It also sets up the stage for Temporal duration, which will be first class object that represents durations. and those objects should be able to be formatted using the duration formatted. -USA: Moving on. There are a few normal changes in this proposal that I've been in response to implement the feedback that I need to mention here in front of plenary and get consensus on first one is the sort of spec bug, You can say, that throws so, so yeah, the change through a RangeError for strings strings this Reported by YSZ from Apple. Thank you again, for recording this. Essentially, the problem is that duration format as we discussed in previous meetings has no currently mechanism for accepting strings. It accepts objects only, and it throws if you pass in any argument other than an object, the Temporal duration Constructor that that does allow. In fact, a the ability to purse strings, it does except strings a person but at the same time he throws RangeError for invalid strings. So this change it aligns the spec in the sense that it continues to throw an error in all cases, except for Strings, it throws a RangeError in itself. This doesn't mean much, but it means that you know, most improbable the the exact kind of error would not change and just to not cause any observable difference. So, thank you for pointing this out. And Yeah, another is sort of a major rewrite or format two parts. So as you might know, all the format functions that we have in do have regular format function as well as the format two parts function. Now, this format two parts function in duration format while it well. while it did work had a few quite fundamental issues and essentially did not produce exactly the result that was expected. So there were a number of a number of issues that were reported by FYT, and all these old bugs, as well as some feedback that has been raised is addressed in this PR. So this PR is basically a total overhaul from my two parts, and fixes everything anyway. And finally this the the last PR is to reorder property access, this was reported again. by FYT and it as fixed by PFC in Temporal basically. if you see the spec text of DurationFormat, there is a huge table in in the proposal that that stores a large amount of information regarding the different units. And the the mean of iteration within the proposal it followed table order. 
So essentially the duration units were followed in a certain order and because of that, the field access the property access was was done in a different order as opposed to Temporal and this was not the right idea. Thank you FYT for raising this, and thank you for fixing this as well. So, yeah, now, it fixes the order and, the new order is hard coded like in Temporal. So should fix the problem. +USA: Moving on. There are a few normal changes in this proposal that I've been in response to implement the feedback that I need to mention here in front of plenary and get consensus on first one is the sort of spec bug, You can say, that throws so, so yeah, the change through a RangeError for strings strings this Reported by YSZ from Apple. Thank you again, for recording this. Essentially, the problem is that duration format as we discussed in previous meetings has no currently mechanism for accepting strings. It accepts objects only, and it throws if you pass in any argument other than an object, the Temporal duration Constructor that that does allow. In fact, a the ability to purse strings, it does except strings a person but at the same time he throws RangeError for invalid strings. So this change it aligns the spec in the sense that it continues to throw an error in all cases, except for Strings, it throws a RangeError in itself. This doesn't mean much, but it means that you know, most improbable the the exact kind of error would not change and just to not cause any observable difference. So, thank you for pointing this out. And Yeah, another is sort of a major rewrite or format two parts. So as you might know, all the format functions that we have in do have regular format function as well as the format two parts function. Now, this format two parts function in duration format while it well. while it did work had a few quite fundamental issues and essentially did not produce exactly the result that was expected. So there were a number of a number of issues that were reported by FYT, and all these old bugs, as well as some feedback that has been raised is addressed in this PR. So this PR is basically a total overhaul from my two parts, and fixes everything anyway. And finally this the the last PR is to reorder property access, this was reported again. by FYT and it as fixed by PFC in Temporal basically. if you see the spec text of DurationFormat, there is a huge table in in the proposal that that stores a large amount of information regarding the different units. And the the mean of iteration within the proposal it followed table order. So essentially the duration units were followed in a certain order and because of that, the field access the property access was was done in a different order as opposed to Temporal and this was not the right idea. Thank you FYT for raising this, and thank you for fixing this as well. So, yeah, now, it fixes the order and, the new order is hard coded like in Temporal. So should fix the problem. USA: Well, I would like to ask for consensus for these, minor bug Fix PRs. @@ -334,11 +323,9 @@ BT: We don’t have any comments. USA: All right, perfect. Thank you, everyone. I will continue to make progress on DurationFormat. I would like to inform you that LibJS among other implementers like for instance, JSC have implemented this proposal and so it is starting to ship. So, I'm quite positive about reaching stage 4 some time soon. but that will have to happen another day. All right, thank you. you. 
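For reference, a usage sketch assuming the stage 3 shape of `Intl.DurationFormat` described above; exact output is locale- and implementation-dependent, and the API was only beginning to ship at the time.

```javascript
const df = new Intl.DurationFormat("en", { style: "long" });

console.log(df.format({ hours: 1, minutes: 30, seconds: 15 }));
// e.g. "1 hour, 30 minutes, and 15 seconds"

console.log(df.formatToParts({ minutes: 2 }));
// an array of { type, value, ... } parts, which is what the formatToParts rewrite reworks
```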
- ### Conclusion/Resolution -* Consensus for the minor bug fix PRs - +- Consensus for the minor bug fix PRs ## Mass Proxy Revocation for Stage 1 @@ -350,13 +337,12 @@ Presenter: Alex Vincent (AVT) AVT: While we're doing this, I want to express thank you to SES for helping me go through this process, starting back in January and in particular MM, who unfortunately is not here right now, and JWK, who are the champions on this proposal So one. more moment I'm actually on the wrong. I'm sorry. I have to switch Windows here. okay, here we go. go. so, I was all right. Now we're at the top. We're ready to go. So, I'm here talking about proxies in general, because it turns out that we have a bit of a problem with them in terms of scalability. what we think about membranes which is where we see a lot of use of proxies, then you have Shadow targets which point through a WeakMap to underlying original elements or original. - \ +\ [screen sharing being sorted] \ - AVT: Yes. Okay. All right. Starting over. With proxies we have particularly shadow targets which point to underlying nodes and proxy handlers, which come from usually the membrane into the proxy. and those are the two things that we create a proxy with. We have the shadow target and the proxy handler. Proxy.revocable is a way to revoke the proxies because it returns a revoke() Function as well. well. So, this is great for individual proxies, but we have membranes, which will create lots and lots of these proxies for use in the real world. I mean, they're in use right now with Mozilla Firefox, they call them cross compartment wrappers. SalesForce has, it's observable membrane, and the whole point of a membrane is that you have a one-to-one mapping between the original objects and the underlying proxies and the proxies make that work, that point that refer to them. So what is a membrane? Well, it's basically a way that you can have object graphs that are separated by this abstraction called a membrane and each member and each object graph will treat the other object graphs with suspicion. And you can use a proxy to look up a property of an object as I'll show in a moment and it will return another proxy, again, keeping that one to one relationship. So that through the appearance of the other of the user in the second object graph, it looks just like they're in the first object graph. It's almost transparent, emphasis on almost as I'll explain in a moment. so, you have the proxy for the first child node. excuse me for a parent node over here. And it will have a reference to the underlying HTML element. Well, if we look up the first child property that first job property does not exist. on the proxy until it comes back through a cycle to Across the membrane. And then we get the head object and then the membrane will create that proxy and connect the two proxies together. Just like we were in the underlying object graph. Similarly, if we have an event listener, which follows the Observer pattern, and we want to pass it into the underlying object graph, that will usually result in a proxy being created by the membrane to refer to that event listener inside that first object graph. So, this allows a bunch of security features, which are really useful. Think about file system access you do not want to grant that to a webpage in most cases. but it also goes both ways. 
You don't necessarily want to have the underlying object graph have access to things in the … again it's mutual suspicion is the concept but there's also the possibility of integrating say web extensions before web extensions were an actual standard. So that motivated both proxies and WeakMaps. The downside as I said is, when you have hundreds or thousands of nodes, they're everywhere. And that means you have hundreds or thousands of proxies, and hundreds or thousands of revokers. And, that's what we're here to talk about. If you try to revoke hundreds or thousands of proxies, you're running hundreds or thousands of revoker functions. And even if you don't revoke them, you still have to create and hold references to those revokers. Have to hold onto these and you have to be able to create them in the first place. so, the original model of a membrane was centered around cell membranes. no, you do not need to understand biology this is a question that was raised four and a half years ago when I first presented membranes with MM. and the original concept was one-to-one mappings as I said earlier with a maximum of to quote sides and quote, objects are circles in this object graph, and the proxies are semi In that meeting. I went when the question was raised about biology being necessary to understand I responded with: No. it's actually more three-dimensional and I will admit fully. I did not fully understand what I was saying at that time, but in September 2018, about two months, after that meeting I had a revelation that said “no actually this is really valuable” and I came up with a new geometric model for membranes. here I'm putting every object graph in its own plane. Two dimensional plane. The physical distance between objects and proxies means nothing here. It's just, this is straight out of object graphs in discrete mathematics. But instead of having circles, and hemispheres, I'm sorry. Let me try again, instead of having circles and hemispheres and semicircles circles representing objects and proxies rep respectively. Now, we can expand to having spheres for objects and hemispheres for proxies and I draw a little cylinders to indicate the connections between them. This has a few advantages in this new geometric model. Again. physical distance is not relevant in this particular model, except that you do not want spheres hemispheres to intersect but the idea, of inside the cell membrane versus outside, the cell membrane goes away and you also have the capability of adding more object object planes. now instead, of to object graphs, you can go to as many object graphs as you want. You can swap them, you can reorder them, you can put there is no object graph has preference or precedence over another and I've started calling these hypergraph membranes. Excuse me. -AVT: So why are we actually here today? Why am I going through all this background? What worked and we're talking about Revoking proxies if you want to revoke the green object graph here, think well. then. you not only do you have to revoke the proxies in that realm, but you also have to revoke proxies pointing to that Realms underlying objects. And, again, each revoker function, you have to hold on to but it's only two slots that it's clearing to actions clearing two slots. The target and the proxy handler, that's all it does. So what I came up with this is the idea that I'm trying to bring to the committee here for discussion over the next several months. Is adding a third dictionary object, excuse me, a third dictionary. 
Argument to both new proxy and proxy.revocable, and this would initially support a single property, a revocation signal. The signal is a symbol that we would create via proxy.signal. I'm a side proxy.create signal. We pass it in as a third argument and then we want to kill the proxy. we can call Proxy.finalizeSignal. and it's if the signal is, is revoked. any proxy holding reference to that signal is considered dead. This has a bunch of advantages in the sense that we might not need proxy down. As a couple. we would not need to create hundreds of revoke hers. That's less. memory allocations. Let's garbage Pressure, in fact, hundreds of revoking functions are down to at most 2^n-1. revoke or functions and most and even more for even better. It's usually (n*n-1)/2. because you, revokeers would only be dealing with two object graphs. now, we are aware of the cancellation proposal and the exact shape of our revocation signal here is completely flexible. If the cancellation proposal moves forward and is stabilized, we could certainly change this proposal to model on that one or two. use it actively, we're completely open on that. What we were most concerned about is getting the shape of these arguments, right. And seeing if we can get this idea to move forward. And that's it. That's all I have to present gentlemen, and ladies, thank you. +AVT: So why are we actually here today? Why am I going through all this background? What worked and we're talking about Revoking proxies if you want to revoke the green object graph here, think well. then. you not only do you have to revoke the proxies in that realm, but you also have to revoke proxies pointing to that Realms underlying objects. And, again, each revoker function, you have to hold on to but it's only two slots that it's clearing to actions clearing two slots. The target and the proxy handler, that's all it does. So what I came up with this is the idea that I'm trying to bring to the committee here for discussion over the next several months. Is adding a third dictionary object, excuse me, a third dictionary. Argument to both new proxy and proxy.revocable, and this would initially support a single property, a revocation signal. The signal is a symbol that we would create via proxy.signal. I'm a side proxy.create signal. We pass it in as a third argument and then we want to kill the proxy. we can call Proxy.finalizeSignal. and it's if the signal is, is revoked. any proxy holding reference to that signal is considered dead. This has a bunch of advantages in the sense that we might not need proxy down. As a couple. we would not need to create hundreds of revoke hers. That's less. memory allocations. Let's garbage Pressure, in fact, hundreds of revoking functions are down to at most 2^n-1. revoke or functions and most and even more for even better. It's usually (n*n-1)/2. because you, revokeers would only be dealing with two object graphs. now, we are aware of the cancellation proposal and the exact shape of our revocation signal here is completely flexible. If the cancellation proposal moves forward and is stabilized, we could certainly change this proposal to model on that one or two. use it actively, we're completely open on that. What we were most concerned about is getting the shape of these arguments, right. And seeing if we can get this idea to move forward. And that's it. That's all I have to present gentlemen, and ladies, thank you. BT: All right, we have one item on the queue Ron. @@ -366,7 +352,7 @@ AVT: I don't think so. respectfully. 
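A rough sketch of the shape AVT describes above, using the names mentioned in the talk (`Proxy.createSignal`, a `signal` property on a third options argument, `Proxy.finalizeSignal`). None of this API exists today, and the champions say the exact shape is still open.

```javascript
// Hypothetical API from the stage 1 proposal, for illustration only — it does not exist yet.
const handler = {};
const signal = Proxy.createSignal();                   // a symbol identifying a group of proxies

const p1 = new Proxy({ id: 1 }, handler, { signal });  // third options argument carrying the signal
const { proxy: p2 } = Proxy.revocable({ id: 2 }, handler, { signal });

// One call invalidates every proxy created with this signal, instead of
// calling hundreds or thousands of individual revoke() functions.
Proxy.finalizeSignal(signal);
```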
And I'm getting back to the point of it's n RBN: That's what I'm trying to get at there's still going to be overhead because you still have to perform some type of allocation to track that record for the revocation. So I can understand it releasing some of it but I also don't see how disposable stack couldn't also potentially be used? That was going to go to what my other topic was going to be about cancellation but I'll add that to the cues and, SYG was on there as well. well. So, I still think that disposable sack is a potential solution to this. -JWK: I can see it's possible to use DisposableStack for this use case. There is some chance that the engine can unobservably optimize. At first, we create a proxy by Proxy.revocable. Then we add the revoke function to the Disposable stack callback. The engine can drop the revoke function because it knows the revoke function is only used to revoke the proxy and if the DisposableStack is also implemented by the engine. Now revoking the proxy does not need to hold the revoker function in the memory anymore. The engine will know it should revoke the proxy when the Disposable stack calls. But yes, there are still some differences. The engine still needs to create the revoker function. It will live for a very short time but it is GC pressure. +JWK: I can see it's possible to use DisposableStack for this use case. There is some chance that the engine can unobservably optimize. At first, we create a proxy by Proxy.revocable. Then we add the revoke function to the Disposable stack callback. The engine can drop the revoke function because it knows the revoke function is only used to revoke the proxy and if the DisposableStack is also implemented by the engine. Now revoking the proxy does not need to hold the revoker function in the memory anymore. The engine will know it should revoke the proxy when the Disposable stack calls. But yes, there are still some differences. The engine still needs to create the revoker function. It will live for a very short time but it is GC pressure. ```javascript const { revoke, proxy } = new Proxy(...) @@ -400,7 +386,7 @@ SYG: Like cancellation as an analogy for promises, Like I can also Imagine a wor AVT: I'm suspecting. this is suspicion only. that it will not take much more that we're talking about a third slot on the proxies which is basically a pointer but don't hold me to that gentleman. I don't know. a collection to track everything that has the same signal, right? -SYG: Like, we're not going to walk the Heap and say I'm going to we're going to look at every object in existence to see which would sir proxy with this special signals. You're going to have this collection like a registry like a finalization registry under the hood. Anyway, Right. and the Gap have weird implications with cross from. Does that have weird in implications with ephemeral marketing and if it doesn't then, OK, maybe that's fine. But more investigation, not a stage one concern. +SYG: Like, we're not going to walk the Heap and say I'm going to we're going to look at every object in existence to see which would sir proxy with this special signals. You're going to have this collection like a registry like a finalization registry under the hood. Anyway, Right. and the Gap have weird implications with cross from. Does that have weird in implications with ephemeral marketing and if it doesn't then, OK, maybe that's fine. But more investigation, not a stage one concern. AVT: Okay. 
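The snippet above is cut off by the diff; note that it is `Proxy.revocable`, not `new Proxy`, that returns `{ revoke, proxy }`. A fuller sketch of the pattern JWK describes, assuming the `DisposableStack` API from the explicit resource management proposal presented later in this meeting:

```javascript
// Park every revoker in one DisposableStack so a single dispose() revokes the group.
const revocations = new DisposableStack();

function wrap(target, handler) {
  const { proxy, revoke } = Proxy.revocable(target, handler);
  revocations.defer(revoke);   // revoke() runs when the stack is disposed
  return proxy;
}

// Later: revoke every proxy created through wrap() in one call.
revocations.dispose();
```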
@@ -412,8 +398,8 @@ RBN: So, the slides brought up the cancellation proposal. and I wanted to speak JWK: Yeah, the last version of the proposal is based on the host-provided signal. AbortSignal. If you are interested, you can see the old spec. -AVT: you talkin about, the cancellation? proposal Jack and \ - \ +AVT: you talkin about, the cancellation? proposal Jack and \ +\ JWK: The massive proxy revocation. RBN: Okay. And that and that said, one of the goals of something like the cancellation and abort signal. approaches, is that? There is a separation between the thing that cancels and the thing that receives that that cancellation signal, hence the aboard controller board signal or cancellation source and cancel token. The disposable stack can theoretically used in the same way but it doesn't provide that level of Separation. but that's generally not what you need here. here. That separation. is usually, because you have, a canceller and a cancelable that operate at a distance with some level of operations occur in between whereas something like disposable stack might still be useful here because you're passing it. could be passing directly to say the proxy API as well. So we might be able to continue to leverage disposable stack but as something for this because it is kind of within its wheelhouse of stating, what you're essentially doing is cleaning up all the proxies and resource cleanup is the purpose of disposable stack and the resource management proposal. @@ -442,13 +428,11 @@ BT: All right, that's sounds like stage one approval. AVT: For what it's worth I was looking for a mechanism to do revocation of proxies en masse. If this proposal doesn't make it all the way through, I am perfectly fine with that. but, thank you very much. - ### Conclusion/Resolution -* Stage 1 - -* Need to look into implementation complexity and if solution can be generalized +- Stage 1 +- Need to look into implementation complexity and if solution can be generalized ## Chair Announcements @@ -456,7 +440,7 @@ BT: all right, have a few announcements. before lunch first, I believe Michael h MF: Yes. I would like to take this time to draw your attention to a reflector thread, number 450. You don't have to go there right now, but in it, I mentioned that we haven't been having TG3 meetings in the last few months because our chair, who was an employee at F5, has left and is no longer a delegate. So we need to look for a new chair so we can continue to have TG3 meetings. Remember TG3 is our Security group. We focus on on topics related to security. We also have a lot of upcoming topics that are posted in the security repo under TC39. You can check out those, where we've been talking about some of those. But I would really like to get those meetings started again. I was appreciating the ones that we were having, but we do need a volunteer for chair, or possibly multiple volunteers for chair. So you don't have to volunteer here and now, but if you would like to go to number 450 on the reflector, you can throw your hat in there. That would be very much appreciated by all the attendees of TG3. That's all -BT: All right, thank you, MF. on the topic of chairing, it has been my honor to serve you as chair for the last handful of years here. Unfortunately, that is coming to an end pretty soon here. So, we will be looking for opening nominations for chairs for TC39 relatively soon. I do plan to stay on to help with facilitation and RPR and USA will continue as chairs. So, that will remain the same. 
But but it's a job that I think three people as ideal for. So, if cheering TC39 is something that you're interested in, please consider, Throwing your hat in the ring, I guess. And if anyone has any questions about what the work entails or that kind of thing. I am happy to be a resource, so feel free to reach out. Alright, and then I think you the eyes. Next up was an announcement. +BT: All right, thank you, MF. on the topic of chairing, it has been my honor to serve you as chair for the last handful of years here. Unfortunately, that is coming to an end pretty soon here. So, we will be looking for opening nominations for chairs for TC39 relatively soon. I do plan to stay on to help with facilitation and RPR and USA will continue as chairs. So, that will remain the same. But but it's a job that I think three people as ideal for. So, if cheering TC39 is something that you're interested in, please consider, Throwing your hat in the ring, I guess. And if anyone has any questions about what the work entails or that kind of thing. I am happy to be a resource, so feel free to reach out. Alright, and then I think you the eyes. Next up was an announcement. YSV: Hi everyone. mmm similar to BT, I will be actually taking some time off. So we will be down one facilitator. and I think having a facilitator in the room to assist the chairs, whenever necessary is very useful. So, I'd like to encourage folks to consider becoming facilitators It is a lower bar in terms of work than doing the chair But it definitely helps the committee function smoothly and it also relieves a bit of pressure during committee time from the chairs, so that they can organize other work. that needs to be done. while the committee's in flight and make sure that things run. Let me know if you have any questions about what that looks like. @@ -464,16 +448,15 @@ RPR: I'll say we are very grateful for BT’s chairing over the years. Certainly All right. I think we've finished for lunch two minutes early. So Brian, if you're anything else. anything else. -BT: I, think that's it. I think we can break for lunch. - +BT: I, think that's it. I think we can break for lunch. ## Explicit Resource Management for Stage 3 Presenter: Ron Buckton (RBN) -- [proposal]() +- proposal -- [slides]() +- slides RBN: We've gone over some of this. before. in over the years that this proposal has been around, but I'll briefly cover some of the motivating reasons that we've been looking at this proposal. We wanted to address some inconsistent patterns for resource management for example there are various mechanisms for handling resource clean up such as return on iterators, release lock on stream, readers close on file handles and there's numerous example of this examples of this in the explainer repo as well. Meaning that for any given API, you can be difficult for users to know what's the right way to clean things up. Even if you look at some node examples in node streams some some objects have close(). some have destroy(), which is the right one to use. So it can be unclear as to what is the right mechanism to actually perform resource clean up in those cases. We also wanted to address issues with scoping of resources and managing resources lifetime. Currently, this can only be addressed really with try/finally. But when you use, try/finally, to actually access the handle in the finally, you have to declare it outside of the Meaning that? or trivial kit for the majority of trivial cases, it's hard to know what whether that object is actually still alive. 
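A sketch of the contrast RBN is drawing here; `getResourceOne`, `getResourceTwo`, and `doWork` are made-up helpers, and the `using` declaration is the syntax this proposal introduces, not something engines shipped at the time.

```javascript
// Made-up helpers: each returns something with close() / [Symbol.dispose]().
const getResourceOne = () => ({ close() {}, [Symbol.dispose]() { this.close(); } });
const getResourceTwo = () => ({ close() {}, [Symbol.dispose]() { this.close(); } });
const doWork = (a, b) => { /* read and write using a and b */ };

// Status quo: correct cleanup of two resources needs nested try/finally,
// with the handles declared outside the blocks that release them.
function withTryFinally() {
  const a = getResourceOne();
  try {
    const b = getResourceTwo();
    try {
      doWork(a, b);
    } finally {
      b.close();
    }
  } finally {
    a.close();
  }
}

// Proposed `using` declarations: [Symbol.dispose]() runs automatically,
// in reverse order, when the block exits — even if doWork throws.
function withUsing() {
  using a = getResourceOne();
  using b = getResourceTwo();
  doWork(a, b);
}
```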
and we wanted to also address a number of common foot guns for managing multiple resources. Sorry, this also applies to the scoping as well. if you happen to release. a resource without using a try, finally Block in this case, you could have an exception in the intervening code, and then as a result don't actually release the lock that you've taken, and this could potentially run into a deadlock in async cases. In cases where dealing with multiple resources, we have similar issues with if you allocate your resources and use a single try/finally, for brevity's sake, you run into issues, where an exception, when closing one resource could result to be in another resource not being closed, or closing resources out of order, If, for example, be depended on a still being alive, so these and then these are A number of footguns. The only way currently to do this right with multiple resources, is continuously nesting, try/finally blocks which results in this fairly heavily nested code before you actually get to the logic that you actually intend to represent. So, all of these things are specific motivations that we are intending to solve with this. So, we have a couple of motivating examples to show how this can be used. So, here's some examples of using the node FS promises API to open multiple file handles. In this case, if these file handles happen to then implement this pose you would be able to inside of a block declare each with each resource they are required. perform, whatever operations, you need. read and write. And then in the any exceptions are occur, or if code complete successfully then those resources would then be easily. disposed one in reverse order Assuming there were any potential dependencies and make sure that all of those resources have been cleaned up successfully, or that appropriate errors have been thrown. @@ -487,15 +470,15 @@ RBN: So, about SuppressedError proposal actually I think maybe the first one tha RBN: So, there was an open question that we'd brought up in the last meeting about whether or not using declarations should become unusable in the block scope exits when they are closed over. it would have prevented access enclosures that executes later. you can introduce constant aliases. The champions’ preference at the time was no. for various reasons we had It was. biggest brought up was that this would essentially introduce a new tdz. and we were concerned I was concerned and spoke with several implementers about this that introducing a new tdz would have potential side effects when it comes to optimization and performance. a number of implementers chimed in that there were concerns and the issue has since been withdrawn. -RBN: So, with that, I'll move on to a discussion about the Disposable and AsyncDisposable interfaces. So this is the protocol with which disposing works. So the Disposable interface describes an object that has a [Symbol.dispose]() method. invoking, the method indicates the caller doesn't continue to use the object and therefore the object can release its resources. this [Symbol.disposed]() method is used by the semantics of both the using declaration disposable stack and the async Disposable stack class yet adaption adapters. the idea with [Symbol.disposed]() is that when invokes the object, that hosts, it should perform all necessary, cleanup Logic for the object with the intent being that, while the object itself has not been freed in memory. it will still exist until garbage collection. 
What did anything that the object is holding onto that might potentially need to be freed such as file system, handles network, two streams. Etc, that those are cleaned up in a timely manner. And when called this symbol, as opposed methods should avoid throwing exceptions but there's no requirement for this to be the case, it's just a better practice rather than a mandate or rather than it shouldn't throw exceptions when called more than once. it should generally be safe to repeatedly called disposed. And again that's not required. +RBN: So, with that, I'll move on to a discussion about the Disposable and AsyncDisposable interfaces. So this is the protocol with which disposing works. So the Disposable interface describes an object that has a `Symbol.dispose()` method. invoking, the method indicates the caller doesn't continue to use the object and therefore the object can release its resources. this `Symbol.dispose()` method is used by the semantics of both the using declaration disposable stack and the async Disposable stack class yet adaption adapters. the idea with `Symbol.dispose()` is that when invokes the object, that hosts, it should perform all necessary, cleanup Logic for the object with the intent being that, while the object itself has not been freed in memory. it will still exist until garbage collection. What did anything that the object is holding onto that might potentially need to be freed such as file system, handles network, two streams. Etc, that those are cleaned up in a timely manner. And when called this symbol, as opposed methods should avoid throwing exceptions but there's no requirement for this to be the case, it's just a better practice rather than a mandate or rather than it shouldn't throw exceptions when called more than once. it should generally be safe to repeatedly called disposed. And again that's not required. RBN: The AsyncDisposal interface. asynchronous version of disposal. and I'll get a little bit more into where the proposal split might be happening when it comes to async using. But we've decided to keep in some of the capabilities of async using due to their value within a potential value within the ecosystem. while we settle on a issues on the async disposed or async using syntax. in the meantime isn't disposable account is very Cooler(?) to the Disposable interface but these objects would have an async disposed method purpose of this method is to allow resource clean up. that is not necessarily capable of being performed within a within a synchronous block of execution. much like an async iterator has a return that it can be potentially asynchronous. an async close essentially returns a promise that resolves. when the resource has been freed, In. within the API, we have introduced disposable stack and async disposable classes. The Disposable stack class is a container. Its purpose is to hold multiple disposable resources such that when the stack itself is disposed those resources. but that it contains are also disposed. It's called a stack because resources are added in a in order and then released in the reverse order. -RBN: DisposableStack is very similar to python’s ExitStack and borrows heavily from some of the design there. It's a convenient container for wrapping multiple disposals. Disposables in a very is very helpful when working with complex. constructions, such as classic instructors where if you are creating a class that it is also disposable that host multiple disposable resources. 
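A minimal sketch of the protocol just described, with made-up `TempFile` and `Connection` classes; only the `Symbol.dispose` and `Symbol.asyncDispose` well-known symbols come from the proposal.

```javascript
// A synchronous disposable: holds some external resource and releases it in
// [Symbol.dispose](). Repeated disposal is tolerated, as RBN recommends.
class TempFile {
  #handle = { close() { /* release the OS handle */ } };  // stand-in for a real handle
  [Symbol.dispose]() {
    if (this.#handle) {
      this.#handle.close();
      this.#handle = null;
    }
  }
}

// An asynchronous disposable: [Symbol.asyncDispose]() may return a promise.
class Connection {
  async [Symbol.asyncDispose]() {
    await this.#flushPending();   // made-up teardown step
  }
  async #flushPending() {}
}
```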
There are certain patterns of attaching those resources that are very difficult to use without a construct like this. And I think I have some examples later in the slides that show this. In addition disposable stack provides some help with interop. It allows you to take a not only to use a well-defined disposable resource of, also to adopt resources that are not using the Disposable interface by basically registering that value with a custom callback the ability to add just a the other deferred. plain callback method, which is again, which is designed to roughly approximate. the go defer statement. There's the method allows you to move things out of the stack. and the disposal syntax or semantics. so get a little bit more deeper into this disposable stack Use is a method that accepts a disposable adds to the stack it allows known to find just like the using declaration and just like the using declarations symbol disposes. Act and cached on Entry. when the resource is added both the resource and its method or added to the internal disposable stack. And importantly when you call this method, the result of the Disposable is returned. This allows you to get very close to the resource acquisition time whenever you create the resource or what we get the resource from a function. that there is very little opportunity for other user code to run between when the resource is allocated and when it is added to the stack, therefore, the use method Returns. The thing that you put in, so that you can say, declare a variable that equals stack use the thing that you're going to dispose. and get that value back after. It's been added to the stack and a way you can say, Canon(?) consider a using declaration to be a syntactic sugar, over a every block, having a disposable stack, That new resources get added to during those decorate. Those declarations are are initialized, +RBN: DisposableStack is very similar to python’s ExitStack and borrows heavily from some of the design there. It's a convenient container for wrapping multiple disposals. Disposables in a very is very helpful when working with complex. constructions, such as classic instructors where if you are creating a class that it is also disposable that host multiple disposable resources. There are certain patterns of attaching those resources that are very difficult to use without a construct like this. And I think I have some examples later in the slides that show this. In addition disposable stack provides some help with interop. It allows you to take a not only to use a well-defined disposable resource of, also to adopt resources that are not using the Disposable interface by basically registering that value with a custom callback the ability to add just a the other deferred. plain callback method, which is again, which is designed to roughly approximate. the go defer statement. There's the method allows you to move things out of the stack. and the disposal syntax or semantics. so get a little bit more deeper into this disposable stack Use is a method that accepts a disposable adds to the stack it allows known to find just like the using declaration and just like the using declarations symbol disposes. Act and cached on Entry. when the resource is added both the resource and its method or added to the internal disposable stack. And importantly when you call this method, the result of the Disposable is returned. 
This allows you to get very close to the resource acquisition time whenever you create the resource or what we get the resource from a function. that there is very little opportunity for other user code to run between when the resource is allocated and when it is added to the stack, therefore, the use method Returns. The thing that you put in, so that you can say, declare a variable that equals stack use the thing that you're going to dispose. and get that value back after. It's been added to the stack and a way you can say, Canon(?) consider a using declaration to be a syntactic sugar, over a every block, having a disposable stack, That new resources get added to during those decorate. Those declarations are are initialized, RBN: In addition. this, So the next few methods I'm going to discuss were originally all on the use method at the last meeting. We decided to break up views into multiple functions. to avoid the overloaded Behavior. One of those overloads was this mechanism adopt we can change the name as we discussed but this was the one I found to be the most appropriate for the use case. Its purpose is interop and allows you to adopt a foreignresource that uses a non-disposable syntax or uses a non-disposable object, and attach custom disposed semantics. or to use a disposable where you want to override the disposed semantics with something else. in this case, unlike adopt, the value is added regardless as to its value. So this can be an object, it can be a number, it can be undefined. that value will then be passed as an argument to the on disposed call back. when the resource is or when the stack is disposed and this resource would be disposed that callback is then invoked with the resource as its argument. it Returns the value of this argument much like use So that again the acquisition of the resource that is passed to value is as close to the it's added to the stack as close to the time that is required, as is possible. -RBN: The third method that we added, to break up the overloaded use is the defer() method. This only accepts a callback and adds it to the stack. so it's similar to adopt it as a callback, only. it's like you executed with an empty argument list. in case a function depends on arguments.length. It's an approximation of Go’s defer statement, which to allow you to add essentially any function wants to the stack gets used later. As I mentioned during the previous, proposals discussion around proxy revocation that you could in theory. great a stack and then just defer the revoke methods that are returned. So, here are some examples of the of how this pose will stack works in the case of adding a disposable. You create the resource calling stack use again, the acquisition of the resource here, Get Resource One. is as close to the point where you add this at it to the stack as possible before it gets returned. This is also very valuable if you're calling an API that takes in multiple arguments and each of those Disposable you can in an expression position. when calling that function call stack use for First Resource. In the first argument stack used for the second resource in the second argument and those get added in the appropriate order. Such that they are again released in the appropriate order. Adopt, for example, here is very similar. It allows you to add non-disposable values as if they were disposables again as close to the acquisition as possible with a call back, that allows you to handle that clean up later. 
and in the example of fact, defer allows you to add any you call backs. would like, as a dispose. Now, the Disposable stack move method. this is very similar to a behavior in pythons exit stack. which I believe called pop all. the idea here is that you are taking all of the resources out of one disposable stack, putting them into a new disposable stack and returning it The specific use case for this is around construction We're in a class Constructor they might have an example of this here. Yeah, in a class Constructor. I have this example is class has two resources, Channel and socket both are disposables. However, I don't want to use using declarations for these because I don't want them to be disposed when the Constructor exits. I want them to be disposed when the class is disposed. However, if there are any exceptions that occur during construction, we want to make sure that those resources are cleaned up. so in this case, we can use a disposable stack and then have that stack attach the resources that you are adding to the class. Such that once the Constructor, it completes, you can then call stack move to take everything out of that stack. So that stack can still be disposed. since that will be disposed regardless as to what you do or do not add to it or any other changes you make to it. It then. those resources can be pulled out of that stack so that they won't be disposed now because construction is completed and because this again is a convenient container this allows me to then call [Symbol.dispose]() later on on this container object. And then dispose all of the things that were added to it in the correct order. So you don't have potential for this case socket has a dependency on channel so it could be the closing, the socket might require some cleanup that requires a channel to still be alive. So rather than reproducing this stuff Is this socket disposed, is this channel disposed into this dispose method of the class and somehow possibly transposing those and doing it wrong This ensures that if you call this pose on the stack that you created during construction, that everything happens in the correct order. and we mentioned, before, the original stack will become disposed. This again was a new changed since the last plenary And again, this is extremely handy for class construction. It's also helpful for factory functions where you're doing more FP style work, where you're not actually using class instances, you can do the same type of behavior is just using constants. And again before this used disposes, a getter. Now, you would just essentially use an arrow function. So, in addition to DisposableStack, we're also introducing the AsyncDisposableStack. which is similar to pythons AsyncExitStack. essentially, mimics the same capabilities of disposable stack is designed to work with asynchronous disposables. So the defer callback can return a promise. So can the on disposed has to adopt and the Disposable that passed in can be async or a normal synchronous disposal, the Disposable as that this boat. Suppose method can Will essentially just be executed. +RBN: The third method that we added, to break up the overloaded use is the defer() method. This only accepts a callback and adds it to the stack. so it's similar to adopt it as a callback, only. it's like you executed with an empty argument list. in case a function depends on arguments.length. It's an approximation of Go’s defer statement, which to allow you to add essentially any function wants to the stack gets used later. 
As I mentioned during the previous, proposals discussion around proxy revocation that you could in theory. great a stack and then just defer the revoke methods that are returned. So, here are some examples of the of how this pose will stack works in the case of adding a disposable. You create the resource calling stack use again, the acquisition of the resource here, Get Resource One. is as close to the point where you add this at it to the stack as possible before it gets returned. This is also very valuable if you're calling an API that takes in multiple arguments and each of those Disposable you can in an expression position. when calling that function call stack use for First Resource. In the first argument stack used for the second resource in the second argument and those get added in the appropriate order. Such that they are again released in the appropriate order. Adopt, for example, here is very similar. It allows you to add non-disposable values as if they were disposables again as close to the acquisition as possible with a call back, that allows you to handle that clean up later. and in the example of fact, defer allows you to add any you call backs. would like, as a dispose. Now, the Disposable stack move method. this is very similar to a behavior in pythons exit stack. which I believe called pop all. the idea here is that you are taking all of the resources out of one disposable stack, putting them into a new disposable stack and returning it The specific use case for this is around construction We're in a class Constructor they might have an example of this here. Yeah, in a class Constructor. I have this example is class has two resources, Channel and socket both are disposables. However, I don't want to use using declarations for these because I don't want them to be disposed when the Constructor exits. I want them to be disposed when the class is disposed. However, if there are any exceptions that occur during construction, we want to make sure that those resources are cleaned up. so in this case, we can use a disposable stack and then have that stack attach the resources that you are adding to the class. Such that once the Constructor, it completes, you can then call stack move to take everything out of that stack. So that stack can still be disposed. since that will be disposed regardless as to what you do or do not add to it or any other changes you make to it. It then. those resources can be pulled out of that stack so that they won't be disposed now because construction is completed and because this again is a convenient container this allows me to then call `Symbol.dispose()` later on on this container object. And then dispose all of the things that were added to it in the correct order. So you don't have potential for this case socket has a dependency on channel so it could be the closing, the socket might require some cleanup that requires a channel to still be alive. So rather than reproducing this stuff Is this socket disposed, is this channel disposed into this dispose method of the class and somehow possibly transposing those and doing it wrong This ensures that if you call this pose on the stack that you created during construction, that everything happens in the correct order. and we mentioned, before, the original stack will become disposed. This again was a new changed since the last plenary And again, this is extremely handy for class construction. 
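A sketch of the constructor pattern RBN walks through, with stand-in `Channel` and `Socket` classes for the resources on his slide; the `using` declaration and `DisposableStack` used here are, of course, the proposed API itself.

```javascript
// Stand-ins for the disposable resources on RBN's slide.
class Channel {
  [Symbol.dispose]() { /* close the channel */ }
}
class Socket {
  constructor(channel) { this.channel = channel; }
  [Symbol.dispose]() { /* close the socket; may still need the channel */ }
}

class Service {
  #disposables;
  #channel;
  #socket;

  constructor() {
    // A local stack cleans up anything acquired so far if construction throws.
    using stack = new DisposableStack();
    this.#channel = stack.use(new Channel());
    this.#socket = stack.use(new Socket(this.#channel));
    // Success: move ownership to the instance. Per the current design, the
    // local stack is left disposed (and empty), so nothing is released here.
    this.#disposables = stack.move();
  }

  [Symbol.dispose]() {
    // Releases the socket first, then the channel — reverse order of use().
    this.#disposables.dispose();
  }
}
```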
It's also helpful for factory functions where you're doing more FP style work, where you're not actually using class instances, you can do the same type of behavior is just using constants. And again before this used disposes, a getter. Now, you would just essentially use an arrow function. So, in addition to DisposableStack, we're also introducing the AsyncDisposableStack. which is similar to pythons AsyncExitStack. essentially, mimics the same capabilities of disposable stack is designed to work with asynchronous disposables. So the defer callback can return a promise. So can the on disposed has to adopt and the Disposable that passed in can be async or a normal synchronous disposal, the Disposable as that this boat. Suppose method can Will essentially just be executed. RBN: As I mentioned. earlier on in the slides there are a number of features that this proposal has had since essentially its Inception that have been either postponed or withdrawn, one thing that we are postponing is async using declarations. So this is the would be the async syntax that is the same form. that we are currently proposing for the synchronous syntax. But designed to work with a sync dispose, we do believe that we have a syntax that we can move forward with. But there are some caveats to that that are a reason to postpone the syntactic portion temporarily. So again this async form of using declarations, these would be allowed at the top level of async functions. and at the top level of modules those Have implicit support for couple of low weight, but would not be allowed in a regular block And the concern here was one raised by MM that a regular block, that contained an async using would have an implicit await and he prefers to find to make sure that every potential. a interleave point. is marked by either wait or yield. Thus, a regular block would not be be sufficient. One approach that we've discussed in the Is link tissue. to potentially leverage async do Expressions that you would have to await, which is what I showed in the example. Here, that would mean that this Intex would be dependent on the advancement of async do. which is one reason to postpone it one concern that I have with leveraging async do is that it's fairly easy to accidentally leave off the await. since the sync since an async do block, would still execute up to the First. asynchronous. point either the Away tour, the results of the block. therefore, it can look like your code is working correctly, If you have a async do that contains an async using and then synchronous code. But then the result isn't actually cleaned up. So one alternative we might consider is introducing a specific await using block. where the weight is kind of required and due to how that would I work with with ASI and careful, use of “no line terminator here” productions, we can essentially ensure that if we were use this syntax that it would avoid that case. It also means that we wouldn't have a dependency on async do. but if and when a seeing through does advance, they're still that potential for a dangling await. @@ -503,9 +486,9 @@ RBN: Another thing that we have postponed to a follow-on are separate proposal i RBN: the using statement which was the original design of this proposal back when it was first, introduced, has been withdrawn. This is due to the fact that it seems that we've primarily been in favor of the are RAII, the “resource acquisition is initialization” style. 
I've been very much in favor of that, and even C#, whose using statement heavily influenced this proposal, has recently added using declarations as well. So we've been less motivated to actually introduce this. We considered keeping it as a bridge to async using statements, but given that we've found potentially viable syntax to use for async using declarations, we decided to withdraw both this and the async using (or using await, or whatever you want to call it) statement that we'd had initially as well. -RBN: And that gets us to open issues. This is the slide that I mentioned at the beginning. There was a concern raised by JHD that the ordering of arguments for the adopt method should match the precedent set by methods like Array.prototype.forEach, map, or reduce. I've contended that that ordering isn't the preferred ordering,
that this is more in line with things like JSON.parse. Where text comes first, were the non callback, comes first, the callback comes second. stringify, where the callback comes later, or array, from where the callback comes later. Both of these were introduced after Arrow functions. Well JSON wasn't but array from was introduced after Arrow functions were introduced. which kind of reduce the need to use this trailing This, argh and many API. Most new code doesn’t use that API design anymore, because it will end up using Arrow functions for readability. So, we also prefer leading with the value, because that value ends up being returned. It's a bit more awkward to have the value come later and be optional if the idea is that you're intending to return that value to keep it as close to the acquisition Point as possible. leading with the value also in, in our opinion, is better for type inference and editors using either TypeScript or flow or one that supports JS for Or for type annotations and such as working with JavaScript envious code One reason for this is that if you had the undisposed common value or as with any of the other as we something like a ray reduce if you are writing code and have that type of type inference, you don't know what the type of X is yet because you haven't provided it. Therefore it's really hard to write this callback. Instead, you have to write a right? I, think you're going to actually be disposing, then go back to where you started and then write the code that you would have used for disposal disposal. So this is a very poor developer experience for those working in editors that have this inference capability. Where if we continue to use the value comma on disposed order then by the time we get to the point She writing the disposed callback. We actually have the type of the things. So it's much easier to write which improves the developer experience. developer experience. JH. had said earlier in chat that he was going to be unavailable for this talk. that his concern, he believes is still valid. and that if we do decide to move to stage 3, that he would ask it, be conditional on resolution of this situation. And so with that, I will lead to comments that are in the queue. and we can talk about the status of The Proposal. <End of slides> -SYG: I think I'm first up on the queue. I'll start. So I added this earlier when you were talking about the auto bind concerned that I had. my main concern, there was the kind of the Hidden per instance storage and the behavior being different or something that looks just like the regular method. After you show me the use case, I think I would be ok actually with a function that combines the move Plus explicit creation of this of the bound. dispose. But if you as Champion are happy with just using arrows, that is, of course, even slightly preferred. +SYG: I think I'm first up on the queue. I'll start. So I added this earlier when you were talking about the auto bind concerned that I had. my main concern, there was the kind of the Hidden per instance storage and the behavior being different or something that looks just like the regular method. After you show me the use case, I think I would be ok actually with a function that combines the move Plus explicit creation of this of the bound. dispose. But if you as Champion are happy with just using arrows, that is, of course, even slightly preferred. RBN: by me, but I think I'm happy with using preferreds…. 
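For reference on the `adopt()` argument-ordering question from the slides just above, the two orders look like this; `acquireLock` is a made-up resource factory, and the commented-out line shows the `forEach`-style alternative JHD raised.

```javascript
// Made-up resource factory for illustration.
const acquireLock = () => ({ release() { /* unlock */ } });

const stack = new DisposableStack();

// Champion-preferred order: value first, cleanup callback second. The value is
// returned, so it is captured right at acquisition, and an editor already knows
// the type of `lock` when you write the callback.
const lock = stack.adopt(acquireLock(), lock => lock.release());

// The forEach/map/reduce-style alternative would flip the arguments:
// const lock = stack.adopt(lock => lock.release(), acquireLock());
```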
I think I'm happy with using Arrow functions the idea to add the disposed as an auto buying method was something that came from I believe someone in the community that thought it would be useful. So, I can see there might be potential that we might want to consider adding something like that in the future, but I don't find it to be as strong motivation with arrow functions. with the only downside being that you have to create a closure variable or a variable, you close over to make that work. So, I'm less inclined to use auto bind, I do find the existence of a regular just dispose method, helpful. Just like we have. values on array and map and set and keys and entries on. on those. which some of them are just an alias to the same method that's used for iterables. So I found those to be useful but was less inclined to maintain a auto bind given the discussion that we were having @@ -517,7 +500,7 @@ RBN: so, we've had multiple discussions on the proposal repo about this. that wh KG: Didn’t DD say that he didn't want to add anything without syntax? -RBN: I believe DD did, yes. But I've been also conversing with other folks on the web platform and on node.js as well. +RBN: I believe DD did, yes. But I've been also conversing with other folks on the web platform and on node.js as well. KG: Okay. @@ -525,7 +508,7 @@ RBN: Under theirs. less of a name. Click inclination to do much until those exis KG: I guess I have the opposite opinion. Mostly I am less confident that we'll be able to find a happy resolution for async syntax. I would very much like to, but async do isn't viable for reasons I'll get to later. And so, I'm not sure we'll be able to find something which satisfies MM. And I feel like it would be kind of unfortunate if we ended up in a situation where we have this protocol that exists to support syntax which we thought we were going to add that we then never add. That's my main concern about this. And on the other hand, if we do think that async using syntax is coming soon, it seems like it would be fine to defer the async stack until that near point in the future, rather than trying to bring it in now. -RBN: I'm not sure if MM is present, but in the issue where we've been discussing this we have kind of settled on the approach here. He was in favor is last comment there about using the await async do. Which is why I brought that up in that slide +RBN: I'm not sure if MM is present, but in the issue where we've been discussing this we have kind of settled on the approach here. He was in favor is last comment there about using the await async do. Which is why I brought that up in that slide KG: Okay, I guess I'll just skip ahead. @@ -533,7 +516,7 @@ RBN: So I was gonna say that's a yes MM has does believe that there is a way for KG: Okay, well, if he's okay with some other syntax, that's fine. But `await async do` just doesn't work. `async do` has limitations on what you can put within it. And in particular, you can't put control which affects the surrounding context, because if you aren't awaiting, then the async block is not executing in a straight line with the surrounding context. It just doesn't compose. You can't use `async do` for this sort of thing. -RBN: and we do have an alternative syntax were considering, which is this await using block? Since the purpose of the block is to the purpose of having a separate Syntax for the block is to explicitly. 
indicate that What what is being awaited is the dispose of the using declarations Therefore, a very clear and explicit await using block to indicate that this block contains those basic using declarations sense. And the choice of using. the await using for the block. is to match MM’s specific requirement that any potential asynchronous to interleaving point is explicitly. demarcated by await keyword. whereas the using declaration inside is marked with a I think because it is, not actually doing anything asynchronous, it is only indicating that an asynchronous effect may occur. Therefore, these two things, go hand in hand and I think that might be the best approach that will go that we can go forward with +RBN: and we do have an alternative syntax were considering, which is this await using block? Since the purpose of the block is to the purpose of having a separate Syntax for the block is to explicitly. indicate that What what is being awaited is the dispose of the using declarations Therefore, a very clear and explicit await using block to indicate that this block contains those basic using declarations sense. And the choice of using. the await using for the block. is to match MM’s specific requirement that any potential asynchronous to interleaving point is explicitly. demarcated by await keyword. whereas the using declaration inside is marked with a I think because it is, not actually doing anything asynchronous, it is only indicating that an asynchronous effect may occur. Therefore, these two things, go hand in hand and I think that might be the best approach that will go that we can go forward with KG: Okay. well, I will think more about this syntax later but I'm glad there's a potential resolution. However, either way, either the async syntax is something we are going to work out soon, in which case deferring the async dispose symbol until the syntax comes along seems like not a high cost, or the async syntax is not something that we'd get soon, in which case it seems like there is all the more reason not to include the async symbol until we are sure that we can actually get syntax for it. @@ -547,17 +530,11 @@ RBN: Yeah, that was actually the first example that I showed is that you can it SFC: Great; in that case I'm fine if async dispose is dropped. - - ??: next I think is MAH. - - - - MAH: I want to say that. the async using could be a lot today in a position where we're already configured, where we're okay with it being there, which is the top level of an async function, or the top of a for weight of block. The. concern is that it doesn't cover all the do scoping that programmers may be wanting to do. and using for weight off is a bit of a hack if they wanted to create a block there. So, we do need to find a way to create a block That is clearly marked within the way to keyword. I hear KG's issues with the async do expressions. I believe, RBN, as a alternative syntax that's who We maybe we can all agree on and but but I do see a path forward. and I would prefer if everything went in at once because it seems that async async usages. are the primary usages. has that ecosystem is interested in. -RBN: and I'll say, one more thing to the discussion about, potentially deferring, the async symbol and async DisposableStack. When in the last meeting I proposed maintaining those while splitting off the rest, were primarily because I wasn't sure. We had figured out the async syntax yet. I do think that we are pretty close to that. 
So I'm actually more comfortable with potentially postponing those. since I hope to again, present, specifically, the async portion of this. at the next plenary. And so, I am perfectly fine with postponing those given that. We believe we have a clear path forward. +RBN: and I'll say, one more thing to the discussion about, potentially deferring, the async symbol and async DisposableStack. When in the last meeting I proposed maintaining those while splitting off the rest, were primarily because I wasn't sure. We had figured out the async syntax yet. I do think that we are pretty close to that. So I'm actually more comfortable with potentially postponing those. since I hope to again, present, specifically, the async portion of this. at the next plenary. And so, I am perfectly fine with postponing those given that. We believe we have a clear path forward. SYG: then there's nothing I need to stay with you. @@ -625,7 +602,6 @@ RBN: Thank you. RPR: Alright. That part is kept at stage 2. - ### Conclusion/Resolution Stage 3 with the following conditions: @@ -644,7 +620,6 @@ The following does not remain at Stage 2: 1. `void` syntax - ## Module Expressions Presenter: Nicolò Ribaudo @@ -663,7 +638,7 @@ NRO: So, what's changed since last time we presented this? We have removed a sho NRO: Module expressions now evaluate to a module object, which we have renamed from `ModuleBlock` to `Module` to align with other modules proposals. Specifically, the `Module` and `ModuleSource` constructors proposal presented yesterday includes a `Module` constructor. The `Module` constructor that module expressions introduce is very limited and does not have any properties other than `toString` on the prototype. We have made it as small as possible so that module expressions do not depend on other proposals and can move ahead, while other proposals can expand the capabilities of these objects. -NRO: We now have written a specification text on top of the refactoring that is hopefully going to be merged in soon in ECMA-262, which changes how modules loading is divided between HTML and 262. Importing module expressions does not go through host hooks anymore, so it's completely contained within 262. Well, except if the module expression imports some external dependency. +NRO: We now have written a specification text on top of the refactoring that is hopefully going to be merged in soon in ECMA-262, which changes how modules loading is divided between HTML and 262. Importing module expressions does not go through host hooks anymore, so it's completely contained within 262. Well, except if the module expression imports some external dependency. NRO: We have continued working on host integration. We have not yet updated the HTML integration pull request yet. However, some details about how it works are that `import.meta` is built using the same data as the outer module. This means that in practice module expressions share the same `import.meta.url` as the outer module. It is already possible for different modules to have the same `import.meta.url` if they are in the same file. If you are using two different `script` tags in an HTML file. JHD is not here, but he asked me to mention that he thinks that every module should have its own `import.meta`, and that we should carefully consider the implications of this choice before asking for stage 3. 
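A minimal sketch of the proposed module expression syntax (stage 2 at this point, not implemented in any engine), including the shared `import.meta.url` behaviour just described:

```javascript
// Proposed module expression syntax — illustration only, subject to change.
const mod = module {
  export const answer = 42;
  // Inside the module expression, import.meta.url is the same as in the
  // outer module, per the host integration plan described above.
  export const where = import.meta.url;
};

// `mod` is a Module object; it is only linked and evaluated when imported.
const ns = await import(mod);
console.log(ns.answer); // 42
```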
@@ -681,14 +656,8 @@ NRO: so, just to clarify: is the `module` Keyword in TypeScript is deprecated, o DRR: it's not deprecated right now, so there's no warning that if you use it we're considering that for our next version. But at the very least, deprecation means that you won't get a warning until maybe a few versions in. And then, after that, We will we won't cut it off for like another two and a half years probably, probably - - ??: Okay, thanks. for the clarification. You have a reply from JWK - - - - JWK: I have tried to implement module expressions in TypeScript and currently, module expression does not conflict with the TypeScript module, because typescript requires a module has an identifier after the module keyword. But yes, module declarations will conflict because they share the same syntax. NRO: so, if necessary can we bring up this again later with module declarations, since it's currently a separate proposal? Because that's the most likely to complete with that: it uses the same syntax space. @@ -757,7 +726,6 @@ RPR: Thank you. So we're at time. Nicolo to say a final wrap word. NRO: So yes, thanks everyone. I plan to work on these issues. We would probably present at some feature meeting with some solutions to this problems. - ## Module declarations Presenter: Nicolò Ribaudo (NRO) @@ -768,7 +736,7 @@ Presenter: Nicolò Ribaudo (NRO) NRO: Okay, hello. It's me again. As RPR mentioned, the module fragments proposal has been renamed to module decorations. Initially it was a very different proposal from module expressions, but we've renamed it because the proposal has changed enough that it can considered as an extension or a follow-up. As you could guess from the name, module declarations are the declaration form of module expressions, and so they we behave like shown in this example. A module declaration is like a cost assignment with a module expression. However, model declarations give us something new, which is that you can statically import them either from where they are declared or from other modules, and so this also extends what models capture because it upgrades from nothing to visible module decorations. Visibility follows the usual scoping rules. And what is the motivation for this? It's quite different from module expressions. What we're trying to solve now is bundling. Modern JavaScript apps have many many files, and loading them one by one has some like different performance problems. Like, going back and forth to the different requests. This is partially solved by HTTP2. Also, you get better compression if you have single big file with aggregated things, it compresses in a more optimal way. And well, we probably all know about bundlers like how they already solve this problem. -NRO: The problem is that maintaining the ESM semantics is hard. There are mainly two different approaches. Some of them merge all the top level scopes, and Rollup is an example of it. So you need to rename variables, you need to manually recreate namespace objects when you have a namespace import. And you cannot really represent the semantics of TLA, because if you put everything in a single module, you cannot have modules executing in parallel. 
Other bundlers wrap every module in a function, for example webpack does that, however, it's quite hard to preserve the live binding and semantics across different modules and you have to manually manage things: you have a JavaScript-written runner to link all the functions together instead of relying on the built-in logic to link all the modules together. With module decorations, bundlers could take different files as they do now, and almost only concatenate them in a single file using module declarations to represent all the different modules, and they only have to rewrite the specifier of an import statement to refer to the inline module decoration, instead of existing external files. +NRO: The problem is that maintaining the ESM semantics is hard. There are mainly two different approaches. Some of them merge all the top level scopes, and Rollup is an example of it. So you need to rename variables, you need to manually recreate namespace objects when you have a namespace import. And you cannot really represent the semantics of TLA, because if you put everything in a single module, you cannot have modules executing in parallel. Other bundlers wrap every module in a function, for example webpack does that, however, it's quite hard to preserve the live binding and semantics across different modules and you have to manually manage things: you have a JavaScript-written runner to link all the functions together instead of relying on the built-in logic to link all the modules together. With module decorations, bundlers could take different files as they do now, and almost only concatenate them in a single file using module declarations to represent all the different modules, and they only have to rewrite the specifier of an import statement to refer to the inline module decoration, instead of existing external files. NRO: There is a parallel effort in other standard venues to optimize bundling, which is the "bundle resources" proposal and it allows containing different sources such as images or css files. We believe that module declarations and bundle resources can coexist, because they work at different levels. Bundles are at the HTTP level, so you still have to go to through all the network layer logic in the browser implementation to get files out of the bundle, and module declarations are within 262. So they are like more closely to where they're needed. And also, usually applications have many more JavaScript files than other resources, so it makes sense to have a solution that specifically tries to minimize that while working in parallel on other optimizations. @@ -776,15 +744,15 @@ NRO: What's changed since the last time we presented this proposal? We have adde NRO: To import module declarations from other files, you use the existing `import` statement. For example, if our previous module exports a bundled module, you can import the bundled module and then import things from it using the `import` statement. As mentioned before, in the previous proposal you would have added a URL fragment to specify the module to import. -NRO: What does this mean for the HTML integration? We still haven't opened a pull request because the proposal is still at stage one, but our plan is that module declarations inherit almost all the decisions from module expressions. So, importing them is completely done within 262. They inherit `import.meta`, and they can be structured cloned across workers. It's important to not notice this difference, which is that module declarations capture other module decorations. 
So you could have a graph of module decorations and transfer the whole graph from one worker to another. And it works the same as `structuredClone` already works. If the same declaration is cloned twice in a single call, it will be deduplicated, like in this graph. And, I mentioned that loading of module declarations is completely within 262, except that there is some complexity related to how to import module declarations from other files work. Because in this example, the JS, cannot start importing until we finish loading 1.2. This complexity doesn't currently exist. Currently, we can load all the importance in parallel. So we need to adjust the loading logic to allow some imports to be blocked on others. And that's all, thanks for listening. +NRO: What does this mean for the HTML integration? We still haven't opened a pull request because the proposal is still at stage one, but our plan is that module declarations inherit almost all the decisions from module expressions. So, importing them is completely done within 262. They inherit `import.meta`, and they can be structured cloned across workers. It's important to not notice this difference, which is that module declarations capture other module decorations. So you could have a graph of module decorations and transfer the whole graph from one worker to another. And it works the same as `structuredClone` already works. If the same declaration is cloned twice in a single call, it will be deduplicated, like in this graph. And, I mentioned that loading of module declarations is completely within 262, except that there is some complexity related to how to import module declarations from other files work. Because in this example, the JS, cannot start importing until we finish loading 1.2. This complexity doesn't currently exist. Currently, we can load all the importance in parallel. So we need to adjust the loading logic to allow some imports to be blocked on others. And that's all, thanks for listening. NRO: Before going to the queue, I just want to quickly mention that since JHD isn't here he asked me to say that he finds this scoping rules of modules weird because module declarations are Linked together before evaluation of the module. So if you can import a module declaration like this, but if `numbers` was a constant variable whose value was a module expression, you could not statically import from `numbers` anymore because numbers would only be available at runtime and not at linking time. So, yeah, let's go to the queue and remember that and I plan to ask for stage 2 at the end of the time box. -CP: the TDZ, there is not really a TDZ. It is more of a developer expectation. When they do the same things with function because they cannot really call those functions. during the linkage and evaluation phase like this is kind of new now, you have potential failure in the linkage process because you have a thing that is not declaration is an expression, and that might trick some developers, but I understand that that might work well with warm. +CP: the TDZ, there is not really a TDZ. It is more of a developer expectation. When they do the same things with function because they cannot really call those functions. during the linkage and evaluation phase like this is kind of new now, you have potential failure in the linkage process because you have a thing that is not declaration is an expression, and that might trick some developers, but I understand that that might work well with warm. 
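To make the distinction being discussed here concrete, here is a hypothetical sketch in the proposal's syntax as presented (not final); the `structuredClone` line reflects the HTML-integration plan described above rather than anything specified today:

```js
// A module declaration creates a binding that exists at link time,
// so it can be the target of a static import.
module numbers {
  export const one = 1;
}
import { one } from numbers; // OK: `numbers` is visible to the linker

// A module expression stored in a const only exists at evaluation time,
// so a static import from it would be a link-time error...
const numbersExpr = module {
  export const two = 2;
};
// import { two } from numbersExpr; // not allowed
// ...but dynamic import still works:
const { two } = await import(numbersExpr);

// Module declarations could be structured-cloned (e.g. posted to a worker);
// a declaration reachable twice in one clone call is deduplicated.
const clone = structuredClone({ a: numbers, b: numbers });
// clone.a === clone.b
```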
NRO: So, module declarations are hoisted in the same way as strict-mode non-annex-B functions are hoisted. So you can use them before the declaration, which means that if you import a module declaration from another file in a cycle it's hoisted, so it should still work. Yeah. -CP: So what if what I'm saying is that today, if you doing that with functions, you might get to import a function and export it again. and, obviously hoisted. but the with modules now, you'll be able to Simply modify your code and the function will work fine, but you're making from declaration to expression and it continues to work unless than the function is called. But for modules is a little bit more harsh I would say because you can only work if it is a declaration, if you make it an expression it Doesn't even link anymore. +CP: So what if what I'm saying is that today, if you doing that with functions, you might get to import a function and export it again. and, obviously hoisted. but the with modules now, you'll be able to Simply modify your code and the function will work fine, but you're making from declaration to expression and it continues to work unless than the function is called. But for modules is a little bit more harsh I would say because you can only work if it is a declaration, if you make it an expression it Doesn't even link anymore. NRO: Okay, yes. Thanks for clarification. So yes, this is similar to what JHD mentioned, which is that module expressions are only available at evaluation time, so you can not statically import from them and you can only statically import from module decorations. @@ -826,7 +794,7 @@ NRO: So, imagine, that A, B, C, and D are four module declarations that all impo MM: Yeah, I certainly agree that this can go to stage two. without answering this question. So, let me just just note this as a concern for stage 2 which clearly you're on board with, so, that's I'm fine. -NRO: So I would like to repeat the invite to you, to attend in some of Module Harmony Call to discuss this proposal +NRO: So I would like to repeat the invite to you, to attend in some of Module Harmony Call to discuss this proposal GB: I was wondering if maybe ron could go ahead of me since I think it's also related to this topic. topic. All right. @@ -856,7 +824,7 @@ NRO: Yes for it up this for the topic in queue, please reach out to me. there is RBN: My topic is currently in the queue, I do believe is concern for me, regarding the potential advancement for stage 2. So, I want to address that. Yeah. Yeah. So My concern is that. or part of this is clarification, but a module declaration can essentially close over another module declaration in a containing scope. Can it close over anything else? - \ +\ NRO: No, only module declarations. RBN: This is a bit of an odd, inconsistency, compared to function, declarations or function Expressions class declarations class Expressions that have the ability to close over things. And I understand that that's not necessarily portable. In the conversation, with members of my team, earlier this week, there was concerns raised about the fact that there was not sure that it did not have this. consistency, even if you were forced to, provide those, bindings for, those closures when you actually imported at the at the source location, so there was some concern there that, that might not be also inconsistency between you can close over a lexically declared module declaration, but nothing else. @@ -865,7 +833,7 @@ NRO: Yes. 
Module declarations… You could consider it as a parallel scope where RBN: I'm not sure if Daniel might have any other concerns, but I might say that I won't really block stage 2, but I do think this is a concern, to, to consider. I know that the earlier bleep was dominant in a cola is Luca’'s proposal which was somewhat similar as providing a block mechanism. That was portable did close over things, but you would have to supply those close value. So that might be something we need to talk more. or as we're pushing this up for stage between stage, 2, and stage 3. -KKL: and, for the record, I also will not block but I do think that in stage 3 and Stage 2, we will run into physical limitations that will not allow for the possibility of preserving the invariant the constructing, a new module from its source always works. but +KKL: and, for the record, I also will not block but I do think that in stage 3 and Stage 2, we will run into physical limitations that will not allow for the possibility of preserving the invariant the constructing, a new module from its source always works. but DE: I think that will be fine. And I think we can discuss that later. @@ -873,7 +841,7 @@ RPR: Are there any objections that would block stage 2? Do we have any support f GB:, it very well thought out in line with the other modules work that's been going on. I do have stage three concerns but I don't know any stage to consume. I suppose. for stage 2. -RPR: USA in the room with +1 +RPR: USA in the room with +1 RPR: I can see that there are concerns on the queue. Daniel did you want to speak to yours? We want to make sure that we bring everything out before concluding. @@ -883,35 +851,33 @@ RPR: Okay, thank you. JWK: This is different than the decorators case in the static decorator, if you Imports it's you cannot use it in the normal random value space. -DRR: but this although can cure but you can't, you have a special form of values that you can import from, but you can't do that with any other value. Yeah. so your yeah. I mean you're creating a new thing that you can do with Imports but only certain things can be done. You can do that with. so that's where kind of fun I don't want to spend too much time on this, +DRR: but this although can cure but you can't, you have a special form of values that you can import from, but you can't do that with any other value. Yeah. so your yeah. I mean you're creating a new thing that you can do with Imports but only certain things can be done. You can do that with. so that's where kind of fun I don't want to spend too much time on this, RPR: All right, thank you. WH. Can you go in 20 seconds? WH: Yes; Maybe I just don't understand this, but I don't know what happens if you have a module declaration inside a function. When does it take effect and become visible to other modules? What if you have one inside a loop? What other things can refer to it? So I just don't yet understand the interaction of module scoping with our language scoping here. - ### Conclusion/Resolution State 2 Lots of open topics - ## ShadowRealm Presenter: Caridy Patiño (CP) -- [proposal]() +- proposal -- [slides]() +- slides CP: We want to provide an update on ShadowRealms. I will go really quickly because we have two things I want to spend time on discussing. That's why we asked for 60 minutes today. For those that are not very familiar with it, ShadowRealm provides a way to evaluate code inside a new global context with a new global object. We have been working on these for quite some time. 
Yesterday we were looking at the first presentation about Realms, it was actually nine years ago from Dave Herman. So a long time coming, hopefully this time around we can get it done. In terms of the proposal, you have the proposal, the spec, the explainer, (we have a new explainer just for errors). In terms of the API, nothing has changed, it remains the same. In terms. of the implementation status, we have implementations in the three main engines. Apple was able to pull it out of Safari 16. That was the concern that was raised last time in plenary. It's not available in any of the engines yet, but it's implemented in all three of them. For today, we have updates on the integration process with HTML. We have two normative changes. very small, but we believe that we need to achieve consensus on those two. And then we have an explainer with clarifications as well. -CP: I will go over them really quickly and we can go into the normative changes, to spend time deciding whether or not those are what we want to have there. In terms of the HTML integration, there was a setback for a few weeks, there was a gigantic spec refactor not related to ShadowRealm that affected the changes that we proposed, and there was a little bit of back and forth with the rebase process, finally. Igalia took over the pull request and updated it, So now it's ready. I've been ready for a week to be reviewed. Additionally, there are other things that we want to provide, specifically a explainer about how to make decisions whether or not new features coming to HTML should be included or not in ShadowRealm. These are not part of the normative text though. It is just going to be an explainer for implementers. I don't know if there is anyone in the meeting that can provide more details, that's where we are right now. +CP: I will go over them really quickly and we can go into the normative changes, to spend time deciding whether or not those are what we want to have there. In terms of the HTML integration, there was a setback for a few weeks, there was a gigantic spec refactor not related to ShadowRealm that affected the changes that we proposed, and there was a little bit of back and forth with the rebase process, finally. Igalia took over the pull request and updated it, So now it's ready. I've been ready for a week to be reviewed. Additionally, there are other things that we want to provide, specifically a explainer about how to make decisions whether or not new features coming to HTML should be included or not in ShadowRealm. These are not part of the normative text though. It is just going to be an explainer for implementers. I don't know if there is anyone in the meeting that can provide more details, that's where we are right now. -CP: In terms of the normative changes, there are two of them. And this one is motivated by previous discussions in plenary. specifically, from Shu. There were some developer productivity and developer ergonomics issues with respect to errors. specifically when an error occurs during the linkage process of modules or some other error occurs on instantiation of the module. The developers were not getting, at least in the Google implementation, sufficient information. They were not getting the proper error message. So it was difficult to find out what was going on when they used ShadowRealms. Luckily we have the implementation in Firefox that actually went beyond what was already specified. 
And they came up with this idea of stitching together a better error message that can provide, not only information about the original message. but also giving you the hints that this was happening because the error is crossing the callable boundary. So in this normative change, this error.message can be stitched together by the host using information from the original error. But remember that in 262 we do not have any specification details about the error message, So those are host specific implementation details. So we're trying to navigate the waters of this in the spec text. Providing specific details about how this process of creating a new error message can contain a lot of more information. So in this particular case, Firefox is using the name of the error and the message of the error to stitch together a new TypeError.message when an error is crossing the callable boundary. The TypeError is created on the other side, and the message is now stitched together in such a way that gives you all the information that you might need. in terms of the actual text. +CP: In terms of the normative changes, there are two of them. And this one is motivated by previous discussions in plenary. specifically, from Shu. There were some developer productivity and developer ergonomics issues with respect to errors. specifically when an error occurs during the linkage process of modules or some other error occurs on instantiation of the module. The developers were not getting, at least in the Google implementation, sufficient information. They were not getting the proper error message. So it was difficult to find out what was going on when they used ShadowRealms. Luckily we have the implementation in Firefox that actually went beyond what was already specified. And they came up with this idea of stitching together a better error message that can provide, not only information about the original message. but also giving you the hints that this was happening because the error is crossing the callable boundary. So in this normative change, this error.message can be stitched together by the host using information from the original error. But remember that in 262 we do not have any specification details about the error message, So those are host specific implementation details. So we're trying to navigate the waters of this in the spec text. Providing specific details about how this process of creating a new error message can contain a lot of more information. So in this particular case, Firefox is using the name of the error and the message of the error to stitch together a new TypeError.message when an error is crossing the callable boundary. The TypeError is created on the other side, and the message is now stitched together in such a way that gives you all the information that you might need. in terms of the actual text. -CP: In the in the pull request. we are asking about the part where we specify that when they're resting copy, when the new message is stitched Together by the host, that should not be. that should not cause any. ecmascript code to be executed. Meaning, is not observable for the usual And that these operation is being It's been carry on. This is the area that we want feedback today. we believe is fine. but at the same time, we're not married to the solution. This is what Firefox Implement they figured out how to be able to stitch together the message when the data associated to the arena error was generated by the host or was modified by the user. 
But if the users trying to intersect these files, or providing the proxy of their own in those cases, they will bail out and they will be just changes the information that they have on our informational That's the implementation Firefox has. We believe this is a good one. We don't know from other implementers If this would be, they would be able to do the same thing. If we decide that this is not What is this is not a good solution then we can do a role I get on the error, which is going to be observable by ecmaScript code. So that's the first number two changes and I think I will just go over the two remaining, three remaining slides and then we can go back to discuss the details of the agent. So keep this in mind. The second normative changes, might be a little bit more more controversial. In discussions with the SES folks, Mark Miller came up with this example that is interesting because up to this point we at least I believe that the membrane implementation was able to cover all the cases to censor the information about the errors in such a way that you Will not be able to observe that you are running inside a virtual environment. MM came up with this idea of getting the engine to throw an error language error that can be captured on the same round without crossing the boundary. a callable bond theory and at that point, you will be able to capture the entire stack. observe that you are inside a virtual environment. As a result of these, we looked into how we can provide a mechanism for a developer to create a better environment that can censor this kind of stack And we come up with nothing else than just providing a normative change that would. prevent the host from leaking this information. there is observed from within as Shadow realm. it does not affect the current state of things where there is no Shadow realm, you get divorced at that will remain the same. Even if there're came from Shadow Realm. But if you are observing their own within the shop around you get censor and the censor means stack frames are going to be removed from the ground. The stock produced by the hose. Again, we do not have anything about Our stock error stocks in 262, this is all outside of 262 but do we can provide guidance on and certain information in the spec that can be used by the by implementers to, to carry on this kind of censoring. to be more specific about it. this is the text that is going that is in the pull request right now. and hopefully we can agree on it or find a better solution. They highlight the important parts, This is again, only when you are observing an error inside, a shot of wrong intents. and the error should only contain any stock information about Inside. The. the chat around, meaning all the functions, all the frames that you can create from within the ShadowRealm that you can observe there anything for not side, you should not be able to see it. And the reason for push for this is because we have no other ways for virtual environments to hook into this process and be able to implement censorship. in userland for this type of error. So this is the second one time. I'll get back to it in a bit in the last one. This is the last slide. There was a request from Google. to clarify in the explainer the story around security and I've been sensitive topic since the very beginning. I believe, I personally believe that this time we are very specific. about what you could do with the shadow realm in terms of security. 
We have an explainer section now that contains the details about integrity, availability and confidentiality. with the details of why ShadowRealm does or does not provide guarantees around different vectors and hopefully this is sufficient to to get everyone on the same page. I'm not to confuse. users of the ShadowRealm on what the guarantees, are. that's the objective of that. So that's pretty much it I wanted to now open for questions and then specifically wanted to go into the to the normative changes and and get feedback. back from on the plenary about it is two things. +CP: In the in the pull request. we are asking about the part where we specify that when they're resting copy, when the new message is stitched Together by the host, that should not be. that should not cause any. ecmascript code to be executed. Meaning, is not observable for the usual And that these operation is being It's been carry on. This is the area that we want feedback today. we believe is fine. but at the same time, we're not married to the solution. This is what Firefox Implement they figured out how to be able to stitch together the message when the data associated to the arena error was generated by the host or was modified by the user. But if the users trying to intersect these files, or providing the proxy of their own in those cases, they will bail out and they will be just changes the information that they have on our informational That's the implementation Firefox has. We believe this is a good one. We don't know from other implementers If this would be, they would be able to do the same thing. If we decide that this is not What is this is not a good solution then we can do a role I get on the error, which is going to be observable by ecmaScript code. So that's the first number two changes and I think I will just go over the two remaining, three remaining slides and then we can go back to discuss the details of the agent. So keep this in mind. The second normative changes, might be a little bit more more controversial. In discussions with the SES folks, Mark Miller came up with this example that is interesting because up to this point we at least I believe that the membrane implementation was able to cover all the cases to censor the information about the errors in such a way that you Will not be able to observe that you are running inside a virtual environment. MM came up with this idea of getting the engine to throw an error language error that can be captured on the same round without crossing the boundary. a callable bond theory and at that point, you will be able to capture the entire stack. observe that you are inside a virtual environment. As a result of these, we looked into how we can provide a mechanism for a developer to create a better environment that can censor this kind of stack And we come up with nothing else than just providing a normative change that would. prevent the host from leaking this information. there is observed from within as Shadow realm. it does not affect the current state of things where there is no Shadow realm, you get divorced at that will remain the same. Even if there're came from Shadow Realm. But if you are observing their own within the shop around you get censor and the censor means stack frames are going to be removed from the ground. The stock produced by the hose. 
Again, we do not have anything about Our stock error stocks in 262, this is all outside of 262 but do we can provide guidance on and certain information in the spec that can be used by the by implementers to, to carry on this kind of censoring. to be more specific about it. this is the text that is going that is in the pull request right now. and hopefully we can agree on it or find a better solution. They highlight the important parts, This is again, only when you are observing an error inside, a shot of wrong intents. and the error should only contain any stock information about Inside. The. the chat around, meaning all the functions, all the frames that you can create from within the ShadowRealm that you can observe there anything for not side, you should not be able to see it. And the reason for push for this is because we have no other ways for virtual environments to hook into this process and be able to implement censorship. in userland for this type of error. So this is the second one time. I'll get back to it in a bit in the last one. This is the last slide. There was a request from Google. to clarify in the explainer the story around security and I've been sensitive topic since the very beginning. I believe, I personally believe that this time we are very specific. about what you could do with the shadow realm in terms of security. We have an explainer section now that contains the details about integrity, availability and confidentiality. with the details of why ShadowRealm does or does not provide guarantees around different vectors and hopefully this is sufficient to to get everyone on the same page. I'm not to confuse. users of the ShadowRealm on what the guarantees, are. that's the objective of that. So that's pretty much it I wanted to now open for questions and then specifically wanted to go into the to the normative changes and and get feedback. back from on the plenary about it is two things. SYG. Question about The normative change number one Number one we talked about this error thing and talked about a division between kind of user errors created by user code and errors created by the system. And that you should be able to kind of transparently stitch together, a better message, if the error comes from the system like file, not found during module loading, but the way I understand that spec text to mean, is that if you cannot observe the data property, access you are allowed to stitch together. A user are like, if it's if you have a user error, that's a pure data property, it's not observable that, you know, there's no getter, there's no proxy trap. you are allowed to that. This together. but if it is a getter or if it is a proxy trap, now, you are not allowed to to stitch together. @@ -919,7 +885,7 @@ CP: That is exactly the intention. That is exactly the implementation in Firefox SYG: Yes, Okay that's I'm fine with that. It's kind of weird, I guess, but I have no real complaints. like it's kind of weird in that. that. depend on the how well? The programmer Grox, the JS module so that so the division is pure data properties. I guess that's okay. Yeah to be more beam or to be more specific. -CP: I don't know if there is anyone from Mozilla here that can speak about it, but if you attempt to install a getter on an error for a message or name property, Mozilla will still use the original data value for those two properties instead of calling the getter. 
It is a proxy, then they just simply do not consider the object to be an error because it does not have the error data internal slot, it's considered not an error and then, the new error will have a generic message. That's what Mozilla is doing. which I think, is better, the generic message in that case is used. +CP: I don't know if there is anyone from Mozilla here that can speak about it, but if you attempt to install a getter on an error for a message or name property, Mozilla will still use the original data value for those two properties instead of calling the getter. If it is a proxy, then they simply do not consider the object to be an error, because it does not have the error data internal slot; it's considered not an error, and the new error will have a generic message. That's what Mozilla is doing, which I think is better; the generic message is used in that case. SYG: I see. And the final question is: this is a non-normative note, right? @@ -943,11 +909,11 @@ CP: Yes, I mean that not an error fun day holds any error that you have access t JRL: I’m still confused, but we can go on. -MAH: The reason, as CP just mentioned, is that it's actually impossible for the ShadowRealm creator to modify the environment inside the ShadowRealm, so this censorship has to be done by the host. And as currently proposed, the implementation is allowed, and actually encouraged, to restore the full stack trace when it crosses the callable boundary, so that the incubator realm has the full stack trace available. The reason for that is to keep current error reporting tools and other introspection working in the incubator realm. +MAH: The reason, as CP just mentioned, is that it's actually impossible for the ShadowRealm creator to modify the environment inside the ShadowRealm, so this censorship has to be done by the host. And as currently proposed, the implementation is allowed, and actually encouraged, to restore the full stack trace when it crosses the callable boundary, so that the incubator realm has the full stack trace available. The reason for that is to keep current error reporting tools and other introspection working in the incubator realm. JRL: About where this applies to, the censorship here, I don't understand. You said that it needs to be censored because you can't censor it yourself in userland. But I don't understand the motivation to censor at all. -CP: The motivation is that in some use cases, like virtualization, you might want to prevent the program that is running inside the ShadowRealm from noticing that it is running inside a ShadowRealm, or from detecting in which context the program is being evaluated and executed. For those reasons you want to censor information that is not related to the functions that are in the call stack from within the realm. You don't want to see the rest of it. +CP: The motivation is that in some use cases, like virtualization, you might want to prevent the program that is running inside the ShadowRealm from noticing that it is running inside a ShadowRealm, or from detecting in which context the program is being evaluated and executed. For those reasons you want to censor information that is not related to the functions that are in the call stack from within the realm. You don't want to see the rest of it. JRL: I understand it now, I'll put another topic on, but I think this violates some common cases.
There's more topics, I already see more topics about this exact topic. So I'll let that go on. @@ -967,15 +933,13 @@ a proposal with these use directives a while back and these different kinds of c CP: Yes, I see. I see. if you are and you capture the error in the cache. and you process that error before, given into the actual code, code. I find any case, too. - - SYG: I don't want it again. I don't want to rat go into the specific design of the other censorship capture a proposal But I am open to that. All right. -MAH: A very quick point. confidentiality is not by default. possible with is not, by default protected against by ShadowRealm. However, it is possible to protect and achieve in production. at least. by removing all sorts of time measurement. we do explain that it is possible and we do it in a way that proven, it's possible. So, yeah. Anyway, back to the stack censorship, ShadowRealm does provide a strong guarantee that you cannot access an object reference from anotherIShadowRealm. or. from a ShadowRealm, or onto your incubator realm from another Realm. V8 has a that Construction mechanism that does expose structured objects so V8 will have to find a way in no matter what to construct its errors. differently. that doesn't expose direct references to objects in other Realms to the stack Construction. Construction. It's I any so your statement that any kind of censorship is it's not acceptable. Goes against the requirements of ShadowRealm in the first place. +MAH: A very quick point. confidentiality is not by default. possible with is not, by default protected against by ShadowRealm. However, it is possible to protect and achieve in production. at least. by removing all sorts of time measurement. we do explain that it is possible and we do it in a way that proven, it's possible. So, yeah. Anyway, back to the stack censorship, ShadowRealm does provide a strong guarantee that you cannot access an object reference from anotherIShadowRealm. or. from a ShadowRealm, or onto your incubator realm from another Realm. V8 has a that Construction mechanism that does expose structured objects so V8 will have to find a way in no matter what to construct its errors. differently. that doesn't expose direct references to objects in other Realms to the stack Construction. Construction. It's I any so your statement that any kind of censorship is it's not acceptable. Goes against the requirements of ShadowRealm in the first place. CP: Okay. Can you elaborate more on this? -MAH: That quadratic error capture, stack frame mechanism that the v8 has references can provide references to be up to the the functions of understanding. And if those functions are in other realm, they shouldn't you shouldn't be able to have a direct reference. +MAH: That quadratic error capture, stack frame mechanism that the v8 has references can provide references to be up to the the functions of understanding. And if those functions are in other realm, they shouldn't you shouldn't be able to have a direct reference. CP: oh, because of the callablet boundary, got it! @@ -987,7 +951,7 @@ Yeah, the thing is about sensor into like, okay, MM: Keep in. mind that for V8 specifically. the textural stack is constructed from the structured stack. So if V8 as she just agreed has to do re-engineering on the structured stack API any way to prevent cross realm object linkage. If the if the form of engineering they do there is to censor the cross frame stackable structure. 
Nurse crackle, stack frames that it will simply fall out of that implementation. That the textual stack derives from the structured stack. propagates the same sensor. and we (?) about that? -SYG : I mean. I can't really speak to that right now +SYG : I mean. I can't really speak to that right now MM: It’s a suggestion to investigate. @@ -997,9 +961,9 @@ MM: Well, if you're normative if your if your reluctance is in order to avoid do SYG: I think out of the things I said. is not the the most Salient reason that I am opposed the most Salient reason. I am opposed, I would say is It. gets us closer. to The. security kind of territory that we were trying to avoid with the callable boundary. I, do. If? -YSV: if I may, I'd like to jump in with my comment in support of Shu's. she's argumentation here because in fact, Mozilla has the same concerns and I'm sort of echoing the concerns that were raised by our implementer Matthew. Go do it specifically. We are opposed to perhaps opposes too strong but we would rather not see stuck censoring in the specification. It's a complication at that. the use of additional resources, It introduces a false sense of security, specifically what SYG was been saying here that giving the sense that there is confidentiality that when there is none particularly, since we don't have a origin boundary here, we can't guarantee that. So we're not entirely convinced that introducing. This footgun is beneficial for the web. and with it. We see sort of limited benefit for the web so we would like to strongly Recommend that it's not included. +YSV: if I may, I'd like to jump in with my comment in support of Shu's. she's argumentation here because in fact, Mozilla has the same concerns and I'm sort of echoing the concerns that were raised by our implementer Matthew. Go do it specifically. We are opposed to perhaps opposes too strong but we would rather not see stuck censoring in the specification. It's a complication at that. the use of additional resources, It introduces a false sense of security, specifically what SYG was been saying here that giving the sense that there is confidentiality that when there is none particularly, since we don't have a origin boundary here, we can't guarantee that. So we're not entirely convinced that introducing. This footgun is beneficial for the web. and with it. We see sort of limited benefit for the web so we would like to strongly Recommend that it's not included. -DE: I agree with what YSV said about the merits of censorship, I wanted to make a more Scopes comment about what kinds of things we can include in this specification text So in the in the development of this proposal, I was pushing the Champions towards Writing something in the in the specification text that referred to this document that went into more detail. Overall, I think it's important that we didn't do something like publish. I'm documenting it on GitHub and you know, maybe everyone's not on the same page about it and maybe the Champions think that everyone has to do it and other people. and the Center Stone. That would be bad. It’s kind of a misunderstanding. overall. I think it's I think it should be okay for us to make normative requirements. that is not completely. spelled out algorithmically It's not good if the normative requirements have to be kind of solved like an equation. like if we just have a callable boundary invariance and people are supposed to infer, okay? That means I have to use stack censorship and it's all completely implicit. 
I think that means the specification won't form a good communication device or what form an effective coordination device. device. so, I, I think it's better. These things are spelled out more. There was an earlier draft where there was a note. And the note said, “implementations should do this' '. In general we don't have notes in specifications. We can't have notes that say implementations must or should do that. That has to be normative text outside of a note. I'm happy with the idea of not including this normative text, but I would still kind of defend that it's the kind of thing that if we were to have, it would be in the specification. +DE: I agree with what YSV said about the merits of censorship, I wanted to make a more Scopes comment about what kinds of things we can include in this specification text So in the in the development of this proposal, I was pushing the Champions towards Writing something in the in the specification text that referred to this document that went into more detail. Overall, I think it's important that we didn't do something like publish. I'm documenting it on GitHub and you know, maybe everyone's not on the same page about it and maybe the Champions think that everyone has to do it and other people. and the Center Stone. That would be bad. It’s kind of a misunderstanding. overall. I think it's I think it should be okay for us to make normative requirements. that is not completely. spelled out algorithmically It's not good if the normative requirements have to be kind of solved like an equation. like if we just have a callable boundary invariance and people are supposed to infer, okay? That means I have to use stack censorship and it's all completely implicit. I think that means the specification won't form a good communication device or what form an effective coordination device. device. so, I, I think it's better. These things are spelled out more. There was an earlier draft where there was a note. And the note said, “implementations should do this' '. In general we don't have notes in specifications. We can't have notes that say implementations must or should do that. That has to be normative text outside of a note. I'm happy with the idea of not including this normative text, but I would still kind of defend that it's the kind of thing that if we were to have, it would be in the specification. SYG:I'll respond to that, so in the spirit of what you said, I agree with Dan and along that Spirit. I'm happy with normative changed one even though it's not an excuse me, completely formerly spelled out. I think the difference for normative change to even ignoring the merits to the scope Question to the scope point about just agreeing to normative text my bar is How likely do I think it is that desperate implementers will read this and arrive at an interoperable implementation and if I feel like know, then I'm going to pose that normally text. Yeah, @@ -1013,7 +977,7 @@ RPR: Any objections right?. Okay. Did we have any messages of support on this fi DE: I support this change. I want to say I'm exposed to be fine with it, even if I previously expressed support for slightly different semantics there, it's all good. And so the last question would be for the committee for us to get to stage 4, maybe, perhaps a next meeting. the things? Is that you would you look forward to to see? beyond the HTML integration, which is our our hands -SYG Well, I'm looking for two shipping implementations and I think that might be quick for next next meeting. I like it. 
+SYG Well, I'm looking for two shipping implementations and I think that might be quick for next next meeting. I like it. DE Sorry. great humility gration. as we've discussed his action from the Champions music where the the categorization of, which interfaces are exposed to expose the ShadowRealm should be at least a motivation, should be better better documented. for some of the cases that that on a raise. I don't understand the motivation. So, that's that's the action at the HTML. People are waiting for and then hopefully we can get further reviews after that. @@ -1021,9 +985,9 @@ CP: Yeah, that's the plan. Maybe we should have something very similar to the er DE: yeah, this suggestion of the other ways to put this actually in the web IDL text. Put guidance on whether when to use, expose people stuff. So, I think this is, this is probably the biggest clotting issue before. the proposal is kind of ready to ship in browsers that we that we really get clear definition that we're confident in which interfaces Are exposed. -MM: I added one item to the TCQ. Just a brief note, there is an additional normative invariant that we need in the ShadowRealm spec. We've talked about but I didn't realize till now had not yet been included. Which is the one that was implicit in previous conversation about the structured stock price, that the object graphs must remain disjoint. that the post cannot hosts are implementations cannot introduce things. that Object. create Direct. references across, the callable boundary called The Boundary must remain intact. and I think that needs to be normative invariant that that's in the ShadowRealm spec before, it goes to sleep. +MM: I added one item to the TCQ. Just a brief note, there is an additional normative invariant that we need in the ShadowRealm spec. We've talked about but I didn't realize till now had not yet been included. Which is the one that was implicit in previous conversation about the structured stock price, that the object graphs must remain disjoint. that the post cannot hosts are implementations cannot introduce things. that Object. create Direct. references across, the callable boundary called The Boundary must remain intact. and I think that needs to be normative invariant that that's in the ShadowRealm spec before, it goes to sleep. -CP: Yes, It is in there Mark, it is spec'd right now. First of all, I believe there is a motivation for all implementers to preserve that, some implicit motivation there, but there is a piece of the text here. Let me find it. I didn't put it in the slides because I don't believe it's controversial at all. But this is the actual text. +CP: Yes, It is in there Mark, it is spec'd right now. First of all, I believe there is a motivation for all implementers to preserve that, some implicit motivation there, but there is a piece of the text here. Let me find it. I didn't put it in the slides because I don't believe it's controversial at all. But this is the actual text. SYG: That reads might be worth expanding that to a more blanket thing of like, forbidden extensions that we can't add anything that would end up. produce direct references to another realm direct object references. @@ -1031,15 +995,9 @@ MM: Yeah, it's a really important variant in the fact that that, even if it's im CP: MAH was working on this with me. - - ?: Okay. But, but yes, I agree that it that it's worth highlighting. What to do We want to have more details on the East Shore. different wording of this or this is sufficient. 
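As background for the invariant being discussed, a small sketch of how the callable boundary behaves with the ShadowRealm API as proposed (exact error messages are host-defined, and the proposal has not shipped):

```js
const realm = new ShadowRealm();

// Primitive values may cross the callable boundary.
realm.evaluate("21 * 2"); // 42

// Object values may not: the two object graphs stay disjoint, so this
// throws a TypeError instead of leaking a direct object reference.
try {
  realm.evaluate("({ secret: 1 })");
} catch (err) {
  console.log(err instanceof TypeError); // true
}

// Functions cross only as wrapped callables, never as direct references,
// so calls work across the boundary without connecting the object graphs.
const double = realm.evaluate("x => x * 2");
console.log(double(21)); // 42
```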
- - - - -MM: oh, not about the. It's not about errors. it's about the texture highlighting talks about errors on top, I'm saying that there's a invariant with regard to shadow Realms as a whole. the host sent you know that that the object graphs must remain disjoint. that must be an invariant. then you can force implementations that it's impossible to have an object reference that bypasses the call the boundary. +MM: oh, not about the. It's not about errors. it's about the texture highlighting talks about errors on top, I'm saying that there's a invariant with regard to shadow Realms as a whole. the host sent you know that that the object graphs must remain disjoint. that must be an invariant. then you can force implementations that it's impossible to have an object reference that bypasses the call the boundary. SYG: Yeah, like concretely, I think it should be a clause in Forbidden extensions for inside Shadow realms. You can't do this like for the host. Should not be able to introduce say we had get intrinsic to your student. Should be able to introduce intrinsics from incubator realm as a host API. that would break everything so should be a forbidden extension. @@ -1047,27 +1005,23 @@ MAH: There is an open issue for this. It's so issue 324 and I have some proposed CP:Yeah, in any case we have to split the pull request, or modify the pull request to only have the first normative change. I can add that to the same for requests or a new PR and ask for feedback from some of you in the pull requests. - ### Conclusion/Resolution Stage 3 - - -* No to Normative Change wrt error stack censorship (remove it) -* Yes to Normative Change on non observable error message, keeping no-user-code -* Champions to improve note about forbidden extensions inside the shadow realm - +- No to Normative Change wrt error stack censorship (remove it) +- Yes to Normative Change on non observable error message, keeping no-user-code +- Champions to improve note about forbidden extensions inside the shadow realm ## Intl NumberFormat V3 - Stage 3 Update Presenter: Shane Carr (SFC) -- [proposal]() +- proposal -- [slides]() +- slides -SFC: So new information from when we discuss this on Tuesday, so I asked the committee about their feelings on the limiting, the ranges of of until mathematical value and some new information we have is thanks to RCA for doing some some good research on this is that the current rib reality is that Chrome. chrome. implementation. Actually, Firefox and Safari don't agree on that limit. So my question is, because I only have five minutes, I'll jump straight to what I would like to propose, which is that updating the Spec to require a minimum amount of precision, which we can set according to web reality. um, if browsers, support more than that, that's fine. But the sec will only require a minimum amount of precision, and I wanted to see if that… So that's Option 1, as I'm showing here on this slide. This slide is the last slide in the main presentation. So I wanted to see if there's any objections to using this path forward. +SFC: So new information from when we discuss this on Tuesday, so I asked the committee about their feelings on the limiting, the ranges of of until mathematical value and some new information we have is thanks to RCA for doing some some good research on this is that the current rib reality is that Chrome. chrome. implementation. Actually, Firefox and Safari don't agree on that limit. 
So my question is, because I only have five minutes, I'll jump straight to what I would like to propose, which is that updating the Spec to require a minimum amount of precision, which we can set according to web reality. um, if browsers, support more than that, that's fine. But the sec will only require a minimum amount of precision, and I wanted to see if that… So that's Option 1, as I'm showing here on this slide. This slide is the last slide in the main presentation. So I wanted to see if there's any objections to using this path forward. SYG: You said web reality. Is this stuff shipped? @@ -1079,7 +1033,7 @@ USA: I also wanted to say that given that we already have feedback from implemen SYG Is there guidance on the minimum? is the minimum meant to be aligned with ICU4X changed? -SFC: that, that was the intent except that now that Chrome gives us what a minimum might be. We might just match what that is. +SFC: that, that was the intent except that now that Chrome gives us what a minimum might be. We might just match what that is. SYG: I mean, I think there's some wiggle room depending on how recently we shipped it. What we think that usage numbers are like I don't think this is as set in stone but I'll defer to folks doing the actual field research in the FYT here. @@ -1087,11 +1041,11 @@ EAO : +1 for the 1st option SFC: Great. So if we're all okay, with having a minimum limit that will move forward and you know, make that change to the specification. -RPR: Another +1 from DE. +RPR: Another +1 from DE. RPR lists the proposal deferred to the next meeting: -* Prototype pollution mitigation / Symbol.proto for Stage 1 -* Async Contexts for Stage 1 -* Documenting Stage 3 proposals which are not ready to ship -* A procedure for multiple active supporters in committee to achieve consensus +- Prototype pollution mitigation / Symbol.proto for Stage 1 +- Async Contexts for Stage 1 +- Documenting Stage 3 proposals which are not ready to ship +- A procedure for multiple active supporters in committee to achieve consensus diff --git a/meetings/2022-11/nov-29.md b/meetings/2022-11/nov-29.md index b5c423f1..31544d82 100644 --- a/meetings/2022-11/nov-29.md +++ b/meetings/2022-11/nov-29.md @@ -4,8 +4,7 @@ **Remote and in person attendees:** - -``` +```text | Name | Abbreviation | Organization | Location | | -------------------- | -------------- | ------------------ | --------- | | Waldemar Horwat | WH | Google | Remote | @@ -49,8 +48,6 @@ | Istvan Sebestyen | IS | Ecma | Remote | ``` - - ## Intro USA: [standard housekeeping stuff] @@ -59,13 +56,10 @@ USA: So first up we need to approve the last meetings minutes assuming that you' USA: Next up we need to adopt the current agenda. Please let us know if you have any objections to the current agenda. [no objections] Perfect. - ## Secretary's Report Presenter: Istvan Sebestyen (IS) -- [proposal]() - - [slides](https://github.com/tc39/agendas/blob/HEAD/2022/tc39-2022-046.pdf) IS: I will be very careful in order not to go over the 15 minutes. So what happened lately? Yes, so we had the last meeting in September. and I will very quickly show the list of the documents for TC39 and the actual GA documents and then if there were any membership movements since the September meeting and then status of the TC39 meeting participation; and this TC39 standard downloads and access statistics. 
There is one point which I would like to talk about in a little more detail this time: the ISO/IEC JTC1 periodic review of the fast-tracked TC39 standards - because it has already started to come up, and also next year we will have one, and it is rather important for TC39 if we want our standardization work to also be represented in ISO/IEC JTC1. It is very important that we take some action. Then the list of the next TC39, GA and ExeCom meetings. Then I have taken out some of the results from the October 2022 ExeCom meeting which are relevant for us, and then I also have some news related to the upcoming ECMA GA meeting. @@ -78,15 +72,14 @@ IS: Okay. The next point here is, what kind of movements regarding TC39 membersh IS: Okay, now TC39 meeting participation: This is a continuous table. I started with July 1990 and the last entry is the September 2022 meeting. It was a remote-only meeting because we couldn't hold it in Japan due to the Japanese visa situation, etc., so we had to decide to have a remote meeting. Now you can see that after the approval of ES 2022, and this is also normal, participation was lower. So 44 participants: there were zero local participants, and the remote participants came from 20 companies. That is a little bit lower than the July San Francisco meeting, which was already a mixed meeting, where you can see that local participation was much higher. So this is still good participation, and I would not be worried at all about the lower participation we had. By the way, here you can see what we had a year ago: the October 2021 remote participation was 54, so 10 fewer now. So that is it about meeting participation. -IS: Now regarding the ECMA standards downloads. I can already say you know that nothing has changed in terms of the tendency. So the latest status is 16th November and at this point in time we had a download of 75,000 for all Ecma standards and these are below the TC39 download statistics and I have calculated that 56% of all of the downloads are coming from TC39.
So still TC39 is the absolute champion in terms of downloads statistics, and obviously the ECMA 262 is here the champion. JSON has dramatically improved over the years. So now it is second here with 11000 and and this is good and here ECMA-402 is much less and then the ECMA specifications which we have transferred also for fast track to JTC1. ECMA-414: this is rather important on the ISO side and we have to actually renew it next year within the 5 years reconfirmation procedure. So we have to be very careful that this actually gets a reconfirmation next year in ISO because that's the way how we are presenting our self to ISO. Now the next one is here the usually the html access statistics. I would say that here the last three years’ figures are which I would consider that this are true figures and not made by bots. So here the latest one at the moment is 34,000 and and 12th edition 50,000 Etc. So these are here the last three. is is is 31,000. This is what was mentioned earlier we have the TC39 meeting schedule for 2023. So here you can just copied it from the TC39 GitHub Etc. -Just for reading at home: There is a requirement that every five years there is a periodic review of the faster extent that takes place and and this involves us. Two important standards: one is the JSON standard and then here you find the number for iso a number for the JSON and the other one which comes up next year, and this is the ECMA-414 Ecmascript suite. So we have to be very careful that the JTC1 SC22 P member national member bodies approve it. So if you have any chance to influence your National member bodies that they give a positive reconfirmation vote for both the JSON standard, which is up until next march (2023) and then we will have a similar one for the ECMA-414 and reconfirmation of that would be very very important. One point is for the Ecma-404, which is the JSON standard, which is very stable to get in ISO the “stabilized” status. Rex Jaeschke - out SC22 liaison - has pointed out this possibility and I fully agree with him and which means that it will after that it can never be changed in JTC1. ECMA-414, this is just a preparation for next year. it contains automatically all the “undated” (latest) normative references to ECMA-262 and 402. So that's the reason why this exercise is very important, unfortunately we cannot influence that JTC1 here from TC39, you have to do it within your National SC22 Body. As I see it SC22 does not have a working group dealing directly with the ECMA standards and this is a little bit dangerous in my opinion. So we have to make it sure that the fast-tracked TC39 standards get reconfirmed. +Just for reading at home: There is a requirement that every five years there is a periodic review of the faster extent that takes place and and this involves us. Two important standards: one is the JSON standard and then here you find the number for iso a number for the JSON and the other one which comes up next year, and this is the ECMA-414 Ecmascript suite. So we have to be very careful that the JTC1 SC22 P member national member bodies approve it. So if you have any chance to influence your National member bodies that they give a positive reconfirmation vote for both the JSON standard, which is up until next march (2023) and then we will have a similar one for the ECMA-414 and reconfirmation of that would be very very important. One point is for the Ecma-404, which is the JSON standard, which is very stable to get in ISO the “stabilized” status. 
Rex Jaeschke - out SC22 liaison - has pointed out this possibility and I fully agree with him and which means that it will after that it can never be changed in JTC1. ECMA-414, this is just a preparation for next year. it contains automatically all the “undated” (latest) normative references to ECMA-262 and 402. So that's the reason why this exercise is very important, unfortunately we cannot influence that JTC1 here from TC39, you have to do it within your National SC22 Body. As I see it SC22 does not have a working group dealing directly with the ECMA standards and this is a little bit dangerous in my opinion. So we have to make it sure that the fast-tracked TC39 standards get reconfirmed. IS: now here the GA venues and dates They have not changed. The ExeCom meeting venues and dates have also not changed. and I think now I am just over my time, so please just read my remaining slides. - ## Ecma262 Update Presenter: Kevin Gibbons (KG) @@ -103,7 +96,6 @@ KG: Yeah; this refers to a couple of things. One of them is that a few of the al WH: Thank you. - ## Ecma 402 Status Update Presenter: Ujjwal Sharma (USA) @@ -112,7 +104,7 @@ Presenter: Ujjwal Sharma (USA) USA: Hello. and welcome to the Ecma 402 status update. There is a number of normative changes that I'll quickly go over. not to take too much period of time. first up we just know we talked about a new version of Unicode this has deeper implications for Intl, specifically there's new numbering systems that have been added. so Andre made up a request based on ICU 72 and has been approved by the TG2, but it adds these new numbering systems to the spec at the moment. We're working on a long term solution to periodically do this automatically so that the lists are updated because since this is uncontroversial and needs to be done periodically anyway. But at the moment this is an individual, normative pull request. -USA: next one we have 715 this pull request is by RCA. It updates the fractional seconds digits in date-time format in preparation for Temporal. So at the moment date-time format accepts only zero till three in order to format sub second values when formatting. Temporal allows greater precision, so this PR would allow the formatter to accept values from 4 till 9 fractional digits, which is the end of the limit when it comes to Temporal. But at the moment it behaves like those additional digits were all set to zero. This is also been approved by TG2. +USA: next one we have 715 this pull request is by RCA. It updates the fractional seconds digits in date-time format in preparation for Temporal. So at the moment date-time format accepts only zero till three in order to format sub second values when formatting. Temporal allows greater precision, so this PR would allow the formatter to accept values from 4 till 9 fractional digits, which is the end of the limit when it comes to Temporal. But at the moment it behaves like those additional digits were all set to zero. This is also been approved by TG2. USA: Next we have. a PR by ABL this is canonicalizing GMT to UTC. at the moment UTC. / GMT is canonicalized to UTC in the 402 spec. After the new edition in the TZ data now one of the other possible values is GMT. So we're essentially expanding that behavior to both of them since they're semantically equivalent. This is also been approved by the TG2. @@ -124,8 +116,8 @@ FYT: Yes, I have to point out PR 715 have never reached consensus is clearly sta USA: All right. I’ll take it that you objected to this particular PR. -FYT: Yes. 
\ - \ +FYT: Yes. \ +\ USA: All right. Okay. is there anything else on the queue? [no] All right, so that's all in the queue. I take it that 715 does not have consensus within this group. So I would like to ask for consensus on the rest of the PRs: 714, 724, 729. Any objections to those? DE: the explanations you gave all will make sense to me and I support. support consensus on these changes. @@ -140,11 +132,8 @@ USA: Oh, yeah. well this would. this would be presented to TG2 to but I take it DE: I like the way that you've been giving summaries explaining why the PRs make sense? and so it'll be great if you can give that next meeting. - - ?: you know the ordering of it goes to TG2 TG2 and then it goes to TG1. I like that better. -


USA: Okay. Okay, then I guess I would like to ask for consensus on two of the PRs 714 and 24. so 714 and seven. Oh. example 724 so these have already gone through TG2. and so asking for do we do we have any supporters for those specifically those two? I'd Daniel does your your previous support - @@ -155,37 +144,26 @@ FYT: I support 714 and 724. RPR: We have two two active supporters. Are there any objectors for those? No. okay, then those two PRs have consensus then. Thank you very much. I think you SFC as well on the the queue. Right. Does that conclude the 402 status update? - ### Conclusion/Decision - - -* Consensus on 714 and 724 -* No consensus on 729 at this point - +- Consensus on 714 and 724 +- No consensus on 729 at this point ## ECMA-404 status update Presenter: Chip Morningstar (CM) -- [proposal]() - -- [slides]() - CM: The status of ECMA-404 remains very boring, which is exactly how we like it. - ## Test262 status update -Presenter: Ujjwal Sharma (USA) +Presenter: Ujjwal Sharma (USA) USA: Hello. the test 262 maintainers unfortunately weren't able to attend this call, but they have sent us a summary. So I'll quickly read that: - - -* The ESMeta team approached us about integrating their tools with test262's continuous integration. (As a reminder, ESMeta is the ECMAScript interpreter generated directly from the specification.) We had a productive discussion and identified some next steps towards running new tests with the ESMeta interpreter in CI. We also identified some difficulties around integrating the text of Stage 3 proposals into ESMeta. -* There's an RFC (our first, as a trial of the new process) about adding some facilities to test262 for improving the experience of writing asynchronous tests. We'd love to have some feedback from the perspective of implementers maintaining a test262 runner, and from (potential) test writers. -* There are now tests for several more stage 3 proposals: change Array by copy, Symbols as WeakMap keys, RegExp duplicate named capture groups. +- The ESMeta team approached us about integrating their tools with test262's continuous integration. (As a reminder, ESMeta is the ECMAScript interpreter generated directly from the specification.) We had a productive discussion and identified some next steps towards running new tests with the ESMeta interpreter in CI. We also identified some difficulties around integrating the text of Stage 3 proposals into ESMeta. +- There's an RFC (our first, as a trial of the new process) about adding some facilities to test262 for improving the experience of writing asynchronous tests. We'd love to have some feedback from the perspective of implementers maintaining a test262 runner, and from (potential) test writers. +- There are now tests for several more stage 3 proposals: change Array by copy, Symbols as WeakMap keys, RegExp duplicate named capture groups. SYG: What were you looking for feedback on from implementors? I missed it. @@ -202,27 +180,21 @@ RPR: and then Dan has a comment. DE: I think that governance model for test262 been a bit too conservative. it looks like that RFC is for comments on a small support Library that would be used to write tests that it wouldn't take changes by test all by just getting better by implementers to use. Then I see a lot of negative comments from somebody there. there kind of. on not liking the form of that I think people who write tests should be. 
encouraged to commit tests and also commit support files without this kind of without this kind of gatekeeping and I worry that the RFC process and the governance discussions have led to a kind of impasse here that's not helpful. USA: I'm not sure if I can answer that question. Maybe you can raise this to the maintenance directly in Matrix \ - \ +\ DE: Sorry. I'll follow up with them and get up. \ - \ +\ USA: Thank you. That's all. Right - ## Updates on Code of Conduct committee Presenter: Firstname Lastname (FLE) -- [proposal]() - -- [slides]() - RPR: next we have JHD. with updates from the code of conduct committee. JHD are you online? Maybe not. Is there anyone? else from the code of conduct committee that would like to give an update? Okay. looks not. -RPR: I will say. related to this topic and something that istvan mentioned in his section. is that we still have the NVC funding requests. So that's non-violent communication training, and that is an active request from TC39. that has been discussed at execom. I think most recently at the well and first in well I said this this dates back about something like 2018. I think the original request came up, but it was discussed more recently this year. The current status is that we have feedback from execon that they wanted us to investigate more about the reasons and like the fundamentals of why this is needed. as well as also like reviewing the CoC as itself see it that could be improved. at the moment, I think that the chairs are looking for who can champion this. CoC side originally it was the inclusion group that were pushing this forwards the most. but I know that obviously not everyone has everyone has the time for that at the moment. So if you would like to be the champion for this and help us to kind of do it do the full due-diligence on this please say. Otherwise, I think that this request has kind of been in a stasis for quite some time now, for years, so if we don't get an active champion by February time, I think that we will probably withdraw this request. so yeah if you would like to help out with NVC funding, please contact the chair group. +RPR: I will say. related to this topic and something that istvan mentioned in his section. is that we still have the NVC funding requests. So that's non-violent communication training, and that is an active request from TC39. that has been discussed at execom. I think most recently at the well and first in well I said this this dates back about something like 2018. I think the original request came up, but it was discussed more recently this year. The current status is that we have feedback from execon that they wanted us to investigate more about the reasons and like the fundamentals of why this is needed. as well as also like reviewing the CoC as itself see it that could be improved. at the moment, I think that the chairs are looking for who can champion this. CoC side originally it was the inclusion group that were pushing this forwards the most. but I know that obviously not everyone has everyone has the time for that at the moment. So if you would like to be the champion for this and help us to kind of do it do the full due-diligence on this please say. Otherwise, I think that this request has kind of been in a stasis for quite some time now, for years, so if we don't get an active champion by February time, I think that we will probably withdraw this request. so yeah if you would like to help out with NVC funding, please contact the chair group. RPR: Okay. that is the COC update. 
- ## Speccing liveness of template objects Presenter: Shu-yu Guo (SYG) @@ -262,8 +234,8 @@ SYG: Yes. and this directly defines that. like right now we don't really talk ab MAH: If the parse node is unreachable - SYG: but what is unreachable mean? That's just a note, right. \ - \ -MAH: Yeah, if the program is never going to execute that template literal tag, the program is never going to be able to observe the Frozen array. +\ +MAH: Yeah, if the program is never going to execute that template literal tag, the program is never going to be able to observe the Frozen array. SYG: That's right. But if for example you're implementation implementation does something like bytecode flushing and it blows away the compiled byte code on memory pressure because it can reparse the source code that suddenly affects that implementation choice on certain interpretations of what that note means can affect. the lifetime of these template objects. @@ -271,7 +243,7 @@ DE: So I’m on the queue next. I have to agree with MAH actually thinking about SYG: so why is that clear to you if it gets re-parsed ?because it talks about parse nodes not the parse nodes corresponding to a site. -DE: in some way. It would be a Fool's errand to start tracking all sorts of different spec internal constructs. because there are lots of them and we could think about liveness for lots of them. But it's a question of whether there's an execution path that can lead to what should return the same frozen array. and it clearly should cross the operation of you know, clearing out the byte code and then reparsing it. because it's the same it's the same considered the same parse node. +DE: in some way. It would be a Fool's errand to start tracking all sorts of different spec internal constructs. because there are lots of them and we could think about liveness for lots of them. But it's a question of whether there's an execution path that can lead to what should return the same frozen array. and it clearly should cross the operation of you know, clearing out the byte code and then reparsing it. because it's the same it's the same considered the same parse node. SYG: Where in the spec does it do you draw that conclusion, I guess like I agree. That's yeah, but that was that's exactly the confusion @@ -281,23 +253,23 @@ SYG: I don't know if I agree right now a strict reading of the current spec as i DE: Yeah, I think it's an editorial decision whether we make this change or not. The note is clearly incorrect because it is observable via via WeakRefs but I think the current spec implies the same the same semantics -SYG: Okay. it's clear to you WH. \ - \ +SYG: Okay. it's clear to you WH. \ +\ WH: I disagree with the DE’s conclusion that this change is not observable. Consider what happens if you use template objects as keys in a WeakMap — depending on what we decide here, these keys may or may not cease to be live, which means that the values of that WeakMap may or may not be garbage collected. DE: Could you clarify what you? mean? I'm not sure if any of us are disagreeing about what the semantics actually should be. WH: What I'm saying is that this is not a transparent change. Whether we garbage collect parse nodes or not is observable. \ - \ +\ DE: can you walk through that a little more concretely? WH: Yes. Let's say you build yourself a weak map with template objects as keys. And then use weak references to see if any of the bindings in the weak map ever get garbage collected. If they did then you know that a template object has gone away. 
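A minimal sketch of the experiment WH describes, assuming a trivial `tag` function that simply returns its template object (illustrative only; the timing of collection is engine-dependent):

```js
// If template objects can be garbage collected, that is observable through
// WeakRef (or through template objects used as WeakMap keys), so the choice
// of liveness semantics here is not purely editorial.
function tag(strings) {
  return strings; // the frozen template object for this call site
}

function makeRef() {
  // A fresh WeakRef on each call, but the same template object for this
  // particular template literal site on every evaluation.
  return new WeakRef(tag`is this collectible?`);
}

const ref = makeRef();
// ...much later, after the engine may have run garbage collection:
console.log(ref.deref() === undefined
  ? "template object was collected"
  : "template object is still reachable");
```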
-DE: I don't think we're disagreeing about whether the template objects go away or like when they should go away. but just about whether the current spec text implies the same thing. At least that's what MAH and SYG and I were talking about. can you elaborate on what you mean? +DE: I don't think we're disagreeing about whether the template objects go away or like when they should go away. but just about whether the current spec text implies the same thing. At least that's what MAH and SYG and I were talking about. can you elaborate on what you mean? WH: Yes, and I disagree with the conclusion you made that this is just an editorial change — whether we do this change or not is not observable. I produced a counterexample to that conclusion. - \ +\ DE: Hmm I don't understand yet. Sorry. SYG. I guess this was addressed you. Do you have any thoughts here? SYG: I think we already disagree DE. I disagree with you, DE and MAH about a strict reading is the collection of parse nodes that we the intended collection of partners that we all agree on Is that covered by the strict reading of its back and I already disagree because I don't really see. that for reasons I said I already understand. I still don't really understand why why you think it is. covered by the strict reading so I'm I mean like I think this is normative which is why I put it on here. @@ -310,7 +282,7 @@ MAH: Yeah. I just want to clarify there is already a note Note 5 in the liveness SYG: That talks about language values. Parse nodes are not language values. -MAH: But it's an internal concept. That is not observed anywhere, and I think it's a level of indirection to the template object. To answer WH, yes, you can observe that the template objects might be collected through WeakRef or finalization registry. So, yes, we do need to remove that current note because it's not correct anymore. However I don't think we need to do anything further than clarifying when those can be collected because that parseNode is never evaluated anymore. +MAH: But it's an internal concept. That is not observed anywhere, and I think it's a level of indirection to the template object. To answer WH, yes, you can observe that the template objects might be collected through WeakRef or finalization registry. So, yes, we do need to remove that current note because it's not correct anymore. However I don't think we need to do anything further than clarifying when those can be collected because that parseNode is never evaluated anymore. SYG: My understanding of your disagreement MAH and where I disagree, is that on a strict reading of the liveness section it talks about objects that can be referenced to weak refs. parseNodes are not language values and cannot be referenced to weak refs. It is very natural to read a generalization of the liveness section to cover parse notes. Which are strictly speaking is not what it says currently is my contention. therefore it's a normative change that we need to extend it to cover parse nodes in this scoped way. Does that make sense or you disagree with that characterization? @@ -325,11 +297,11 @@ SYG: What? it is observable. DE: I want to see if we could draw a - like I think MAH and I agree with you on what the semantics should be and we're just disagreeing editorially how to State it. So I think here in in plenary what we need to draw strong consensus on is what the semantics should be and then we can work out the editorial details offline about how how to merge that into the specification Is that is that a fair Yeah happy. 
way to go forward? SYG: Yeah, I guess \ - \ +\ MAH: Happy to work on this offline. BFS: I was I think on the same vein. I was just gonna I was just gonna sort of Wonder aloud if this would if it would help anything to just clearly state that the the guarantee is supposed to be that every time a template literal a specific template literal is executed the the array the static array that is passed to the tag function must be exactly the same one as every other time. That's that's the intended behavior and maybe we could maybe it's enough to just say what that observable Behavior must be and you don't have to get too much into the details of exactly how that is made to work. Just a suggestion. \ - \ +\ YSV: So my question is, I tried to test this behavior on Firefox with what I could understand was the STR [steps to reproduce] from the bug using the lit framework with the repeat directive and calling that multiple times under different circumstances, but I'm not 100% sure that I got the STR correctly. I'm wondering does V8 have - I didn't notice it in the bug - does V8 have a test related to this that other implementations could use to verify their behavior. SYG: Not a reliable reproducible one because it like as far as I understand It depends on memory pressure. and the triggers for that area different from engine to engine. @@ -342,16 +314,14 @@ YSV: Um, there are a couple of diffs on the V8 that demonstrate the behavior, bu SYG: Not that I'm aware of for non-V8 runtimes. \ - YSV: Okay. So for now I guess the status on our side is that we haven't been able to reproduce behavior in our engine. and I'm checking with our GC folks if they think that this might also exist as a misunderstanding within Firefox because that might be interesting in terms of if other implementers misunderstood the specification here as well. \ - SYG: Yeah. I think check. your life check the lifetime of the keys to this actual map. YSV: I’ll pass that on. BFS: I think YSV covered everything. I was just concerned. whether this is anything that is known to happen and other engines or you just happened to discover that this happened in v8 and it don't know if it happens anywhere else. Maybe I was just gonna suggest more communication to other engines might be needed than just fixing the spec. That's all. \ - \ +\ SYG: Well, I mean I hope like right now is the communication to other engines. JRL: Earlier you mentioned that the confusion came from bytecode flushing and trying to determine if that kept the template object alive. I'm not sure how the proposed change actually fixes that because we don't mention that a particular site maps to a parse node (or maybe we do and I'm not aware of it). @@ -366,11 +336,11 @@ USA: We have two minutes left and two items in the queue. WH: I agree with SYG’s intent, but the formalism here is broken, specifically there are issues about what “a valid future execution” actually means. I can come up with examples where a parse node keeps itself alive, or a parse node keeps other objects alive which then keep the parse node alive. The property I would want with garbage collection is that if there are cycles of objects which keep themselves alive but you can't get to the cycle then you can collect all of them. The “valid future execution” kind of definition doesn't currently work if template objects are part of such a cycle. -SYG: I think I see where you're getting at. Um, this is why we ended up with a non-maximal set of things for the current definition of liveness. and to yeah, Okay. 
Yeah, but I don't quite see I think I see where you intend to cycle to be. But there seems to be resistance to having formalism here more formalism here. anyway, but I want to I guess okay, so let's clear the queue and then I want to have some. next steps. +SYG: I think I see where you're getting at. Um, this is why we ended up with a non-maximal set of things for the current definition of liveness. and to yeah, Okay. Yeah, but I don't quite see I think I see where you intend to cycle to be. But there seems to be resistance to having formalism here more formalism here. anyway, but I want to I guess okay, so let's clear the queue and then I want to have some. next steps. JHX: Okay. so, could we have a note to describe the high high level? intention of the templates array objects, I mean it's it's a frozen array and from the developer. perspective I'm not sure. if they have other use case but it seems the only use case of it is to use it as a key. to cache the cache for the results of the tag function. So I think I think if we can have a note to make it a much clearer it might help as my helpful for the incrementer too. not only implementors but also developers to understand what it's for and what what the desired behavior and also are I mention this because actually I don't see many tags using this use it as you use the cache. that the for example the the Dedent proposal also We also have a topic about the dedent. cache problem and actually all the all the dedent implementations I have seen never use the cache. Maybe it's not important important in the dedent case because it do not it do not have a big performances but I really hope the spec could have a note to clear. say what what it's for. -SYG: Thank you for your thoughts. I take it there is disagreement what it is collectible designed the for. So the current the disagreement both is increments and is whether it is editorial or normative, but I guess that particular question is moved because we agree on what the semantics are supposed to be but we the disagreement with MAH and DE and myself and WH is whether it falls out from the current definition and we will work offline to resolve that and find a wording. I guess. Does that sound accurate? +SYG: Thank you for your thoughts. I take it there is disagreement what it is collectible designed the for. So the current the disagreement both is increments and is whether it is editorial or normative, but I guess that particular question is moved because we agree on what the semantics are supposed to be but we the disagreement with MAH and DE and myself and WH is whether it falls out from the current definition and we will work offline to resolve that and find a wording. I guess. Does that sound accurate? WH: Yeah, I wonder if, rather than trying to make parse nodes be live or not live, there is instead a way to change the way that we access those template objects such that GC falls out of traditional object GC semantics. @@ -380,14 +350,10 @@ WH: I'm hand-waving a lot here, and this might not work, but suppose each parse SYG: Yeah, I like what that direction's going. I might kill two birds with one stone to clear up Justin's. thing earlier too on like what exactly is the lifetime of parse nodes is it you know is is reparsing allowed that kind of thing. So Okay. That's enough concrete next steps for me. Thank you everybody. 
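A minimal sketch of the guarantee the discussion converged on, as BFS phrased it above (assuming a trivial `tag` function; illustrative, not from the presentation):

```js
// Intended behavior: every evaluation of the same tagged-template site passes
// the tag function the exact same frozen strings array, so callers can use it
// as a stable cache key (for example in a WeakMap) for as long as the site
// might still be evaluated.
function tag(strings) {
  return strings;
}

function f() {
  return tag`hello ${"x"} world`;
}

console.log(f() === f());          // true: same site, same template object
console.log(Object.isFrozen(f())); // true: the template object is frozen
```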
- ### Conclusion/Decision - - -* Normative semantics have agreement (can’t collect template objects if the code which produced them might get evaluated) -* Discussion of how to specify normative semantics to continue offline - +- Normative semantics have agreement (can’t collect template objects if the code which produced them might get evaluated) +- Discussion of how to specify normative semantics to continue offline ## Array grouping WebCompat issue @@ -439,12 +405,12 @@ JRL: Yah, any possible renaming. it just kind of like a smiley face or a frown f YSV: there arguments in favor of the static methods. Do you want to hear those first? -JRL: Sure, Sorry. I didn't realize, is there something else in tcq? \ - \ -EAO: If we go with static methods we end up doing the same thing that we're already doing with Object.fromEntries and Array.from, so there is consistency there that we would be matching. \ - \ -RPY: I did actually yes. so I think because and object helpers don't live on the Prototype whereas array ones do I think that's a slight difference. so help us for a raise typically live on the array prototype, right like \ - \ +JRL: Sure, Sorry. I didn't realize, is there something else in tcq? \ +\ +EAO: If we go with static methods we end up doing the same thing that we're already doing with Object.fromEntries and Array.from, so there is consistency there that we would be matching. \ +\ +RPY: I did actually yes. so I think because and object helpers don't live on the Prototype whereas array ones do I think that's a slight difference. so help us for a raise typically live on the array prototype, right like \ +\ EAO: But those prototype helper methods modify that instance, while here we're creating new instances. SYG: I was swayed when we discussed this internally to the consistency thing for the same reason that was previously.. I'm in favor of the static methods for the same reason that was just mentioned. @@ -453,19 +419,19 @@ YSV: I prefer static methods to groupByToMap on the prototype. I think it's actu JRL: I guess, can we do a temperature. Check with three options and I can come back. next time with either a rename or formalize spec for the static methods? -USA: So you mean the temperature check on naming not naming whether or not we choose renaming or the static method. method. Unfortunately that's that's too wide perhaps you could limit it to something more specific, okay? Could we get a temperature check on, just the static methods, \ - \ +USA: So you mean the temperature check on naming not naming whether or not we choose renaming or the static method. method. Unfortunately that's that's too wide perhaps you could limit it to something more specific, okay? Could we get a temperature check on, just the static methods, \ +\ JRL: All right? So I suppose we will do a temperature check on static methods versus instance. Okay. [discussion of how temperature check should work] -JRL: I mean it's there's only Option 2 and 3 now. I think we’ve ruled out Option 1 to begin with. So there's only 2 and 3, another rename or static method. I think the way that we phrase this will give us the same result anyway. If you're positive on using static methods, then I would come back with a definition for static methods. if you're unconvinced by static methods I would come back next time with a rename. Or if you're indifferent then I would choose what happens, I guess. Whatever has more positive results or negative results. 
So the question, the temperature check question is, are we happy with a static method definition? \ - \ +JRL: I mean it's there's only Option 2 and 3 now. I think we’ve ruled out Option 1 to begin with. So there's only 2 and 3, another rename or static method. I think the way that we phrase this will give us the same result anyway. If you're positive on using static methods, then I would come back with a definition for static methods. if you're unconvinced by static methods I would come back next time with a rename. Or if you're indifferent then I would choose what happens, I guess. Whatever has more positive results or negative results. So the question, the temperature check question is, are we happy with a static method definition? \ +\ Strong Positive: 3 \ Positive: 11 \ Indifferent: 7 \ Unconvinced: 5 \ - \ +\ JRL: Because we only have a 15-minute time box, I’m going to call it here. It looks like we're mostly positive on using a static method and only a couple people who would like to see a rename. YSV: I think also given that we aren't 100% one way or the other, it may make sense for us to give this some time to bake ith the community. So that they also can give input, and let it socialize for a period of time before going ahead with the decision, @@ -475,7 +441,7 @@ JRL: Do you think we should go back to Stage 2 since I'm going to be vastly chan YSV: So, stage 2 to stage 3 the advancement means that we agree with the shape of the proposal and the problem space, that is solving problems based on is valid. It's the solution shape, that is changing. I still think that this is a strong proposal so I'm not opposed to the staying in stage 3, but I can move into Stage 2 given what the stage two to three transition means is also valid. JRL: I was so specifically if this would allow time to bake with the community. If we signal that going back to stage two, that we have to rethink the solution because the naming doesn't work out. Well, maybe that would - I don't know. I can't think very well, it’s too early. Maybe that would allow us more time to bake instead of stage 3, essentially their next meeting or the following to see if this would be implemented already in browsers. And if we could Advance the stage for I guess if we go back to stage two than just means more time that we'd have to wait before if this actually lands in browsers. \ - \ +\ USA: You. have a queue but you're on time. I request you to be quick. if we have Daniel. DE: I, think we should in general avoid demoting, things to stages, especially if it's something so minor as a naming issue, as long as we're clear in documenting that this isn't ready to ship until we have we allowed this time to bake as YSV said. @@ -489,33 +455,28 @@ SYG: Okay. JRL: Especially because I haven’t specced the static Methods at all. I don't know how they'll actually work or what the semantics would be. The confusion about whether this was array.groupBy or object.groupBy was because I haven't through through it fully - but I can just come back next meeting with a formalized spec, but not actually ask for advancements until we get community feedback. SYG: Okay, I'll wait for Bradford's saying, I guess I was the massively was - like is the spirit of the behavior normatively changing? Because now the more things are in play. \ - \ +\ YSV: Yeah, my thinking is just thinking a little bit more about the stage to think. First, we don't know yet for sure that the API is changing. We want to give it time with the community to sort of see how people react. 
It might be that they really push us towards a prototype method in spite of what we internally think is best there might be a large discussion there. Secondly, this came up as implementer feedback due to web compatibility concerns and if we need to test this then that needs to be done. Probably as an implementation. We probably won't be able to touch that outside of this. Unless we do, for example, a big data query. and even that it's not entirely clear of will get exactly the information that we need in order to make good decisions. So I think it's, I think what might make sense is like have a back pocket proposal which is the static methods that if we come to the Inclusion that yes, this is the corrected shape of the proposal. Then we advance that to stage three and move ahead with that rather than doing a demotion right now, okay? JRL: Okay. that's fine to me. JHX: Yeah, I think if we do renaming, I think there's no need to go back to stage two, But if we change to the static method, I support go back to Stage 2, because we've not only it's actually not, not only. instant methods, for example, we may have Record groupBy so yes \ - BSH: Just briefly one of the in favor of static methods is if is a static method, then the argument can be any iterable, not just an array, which seems like a more generally useful structure so that might be feature creep for this proposal. I realize, but it could be with this -JRL: Because iterables could be infinite, they’re a bit different. Here you're only taking a fixed amount, the behavior will collates all keys. I would say like if you have a span of 1, 2,1,1 those all 1s get grouped into the same key. But in an iterable, if you have 1, 2, 1,1 you have 3 separate groups, [1], [2], [1, 1]. The behavior for iterator groupBy has to return itself an iterator which does not have all keys collated together, only contiguous runs collated. +JRL: Because iterables could be infinite, they’re a bit different. Here you're only taking a fixed amount, the behavior will collates all keys. I would say like if you have a span of 1, 2,1,1 those all 1s get grouped into the same key. But in an iterable, if you have 1, 2, 1,1 you have 3 separate groups, [1], [2], [1, 1]. The behavior for iterator groupBy has to return itself an iterator which does not have all keys collated together, only contiguous runs collated. BSH: Okay, thanks. USA: All right. That concludes this topic. I would like to reaffirm that this proposal stays stays at stage 3. - \ +\ JRL: Yeah. so I would like to remain at stage 3. I'm going to come back next meeting with a formalized spec for static methods and we can agree on that point if we want to approach, if we want to switch to the static method approach, we could either demote to stage two and let it bake with the community at state stage 3. - ### Conclusion/Decision - - -* Proposal stays at stage 3 -* Tentatively going for static methods on Object/Map - +- Proposal stays at stage 3 +- Tentatively going for static methods on Object/Map ## Should we set a lower bound on the resolution of timers @@ -525,24 +486,24 @@ Presenter: Shu-yu Guo (SYG) SYG: OK. This is a pedantic thing that I don't think has any real world impact one way or another currently, but it'd be good to to get clarified in the spec. So RGN here filed an issue a while back currently. `Atomics.wait` is one of those things where we hand wave some stuff about what time actually is and the way that it does the blocking. 
It is a blocking call that suspends the thread of execution until it is notified. This cannot be called on the main thread because it blocks execution. And the way it does this, it can have a timeout that goes to this AO called SuspendAgent, and it just says like, wait for that time. The issue is that we don't really put any normative guidance around what exactly that ‘wait for that time’ means. Does it mean a compliant implementation must wait exactly that time? Does it mean it can wait at a coarser granularity of time? We don't say. So I think we should say something about that. Specifically, RGN brings up the point of fractional milliseconds, and you observe different things on different implementations. As he shows in this. example here. So, what should we do? -SYG: My preference here - So, the choices here, I think are: one, we say something like when the spec says wait for n milliseconds implementations can be considered compliant and wait for any finite length of time. That is greater than N milliseconds. Meaning it can coarsen. the timer resolution as much as it wants. This is basically what HTML does. So when you do a set timeout for example it says wait that amount of time and then there's another step that says then wait and implementation defined amount of time. So if for whatever reason, the host or the implementation, Besides, that needs to course and timers do to Enter security. implementation. constraints or anything. It can choose to do. So and still be considered compliant. or we can say in the JavaScript spec that compliant implementations must have some granularity. some resolution Like if we say Ms you cannot question Beyond milliseconds. Beyond whole milliseconds or something. I don't think I think that's the thing that we can say but that is I contend that it's not a thing we should say because it binds Implementations hands too much both current and future. So, That's basically it. My preference is +1 from DLM. What's the last? The question. What exactly it means and I think my preference is something like this, it basically means that if someone calls atomics that weight X for any finite X, the implementation can choose any possibly different actual point of the time and still be considered compliant. so long as the weights at least X. The. have done on the queue. \ - \ -DLM: We have discussed this internally SpiderMonkey team. and we explicitly support option. One. Six. up, \ - \ -MAH: Yeah, how is that? I didn't really go through this. How, how is the time observed. Is it just a simple Date look up? +SYG: My preference here - So, the choices here, I think are: one, we say something like when the spec says wait for n milliseconds implementations can be considered compliant and wait for any finite length of time. That is greater than N milliseconds. Meaning it can coarsen. the timer resolution as much as it wants. This is basically what HTML does. So when you do a set timeout for example it says wait that amount of time and then there's another step that says then wait and implementation defined amount of time. So if for whatever reason, the host or the implementation, Besides, that needs to course and timers do to Enter security. implementation. constraints or anything. It can choose to do. So and still be considered compliant. or we can say in the JavaScript spec that compliant implementations must have some granularity. some resolution Like if we say Ms you cannot question Beyond milliseconds. Beyond whole milliseconds or something. 
I don't think I think that's the thing that we can say but that is I contend that it's not a thing we should say because it binds Implementations hands too much both current and future. So, That's basically it. My preference is +1 from DLM. What's the last? The question. What exactly it means and I think my preference is something like this, it basically means that if someone calls atomics that weight X for any finite X, the implementation can choose any possibly different actual point of the time and still be considered compliant. so long as the weights at least X. The. have done on the queue. \ +\ +DLM: We have discussed this internally SpiderMonkey team. and we explicitly support option. One. Six. up, \ +\ +MAH: Yeah, how is that? I didn't really go through this. How, how is the time observed. Is it just a simple Date look up? SYG: In the current spec? MAH: How would a program observe the time that was actually spent waiting? As far as I know, the only way to do that with Ecma 262 defined operations is through Date. SYG: With 262, Yes. the host can of course, choose to inject other things, like monotonic clocks or something. \ - \ +\ MM: I think I'm just confused. number two, what I thought I understood you to say. was that the number two suggestion, which is would be that you would imply that you can't before sin Beyond milliseconds but course but But what it would mean to course in Beyond Ms, would simply be that the wake up happened sometime later. sometime more later than you asked for. and it's already the case that the the You know that? there's there's no there's no upper bound on the time that anything in the language takes So in the absence of an upper bound on, when you can see that an atomic woke up, how can it mean anything to constrain and implementation from further coarsening?: \ - \ +\ SYG: So, I agree but I think so that that is exactly what I seek to clear up pedantically. I think because the current language is so imprecise. It just says, like wait for n ms. One could naively read that to constrain implementations in this unrealistic way, but I agree in general with you. \ - \ -MM: Okay, good. \ - \ +\ +MM: Okay, good. \ +\ USA: That's all. of the queue. SYG: Cool. Thanks. All right. Sounds like we have consensus for option one. and, so, I guess that means - Is this an issue or a PR? just an issue. So then I need to write a PR and then but we have consensus and that just needs editorial review. @@ -551,13 +512,9 @@ DE (on queue): Looks good. SYG: All right. that's it. - ### Conclusion/Decision - - -* Consensus to allow any finite amount of waiting which is longer than the specified time - +- Consensus to allow any finite amount of waiting which is longer than the specified time ## IPR Clarification for past commits @@ -577,35 +534,31 @@ DE: SYG. Do you express? concern in the admin and business repo? Do you have any SYG: Let me page it back in. I think my concerns were, I don't want any delegate to be in the business of tracking down. exact employment dates of past employees of any company. -DE: Well if you don't want to do that, that's okay. But I think, as a committee we have to put in this effort because the past employment dates imply, whether the contribution was licensed under the IPR are agreements. so, you know there's some current googlers and some former googlers on the IPR exceptions list. So someone will have to track down whether their contributions were under the agreement. Did you have any idea how we could avoid that? that? \ - \ -SYG: I do not. 
\ - \ -DE: Okay, So what is it like is it is that is that supposed to be be public info? You know. really editors shouldn't merge Patches from people who they aren't sure. our licensing. Their things According to I, the IPR paintings. So what I was what I was vice chair, I was checking for this stuff all the time and then I expressed to other people that they should check for this stuff. Now, we generated in exceptions list over time. because it wasn't being checked for. So I want to I want to go back and fix the situation. Like someone. so one of these items here is, maybe we should make lighter weight processes for noting the status of people. but in the w3c there's all these Bots and tools that the W3C produces to enforce exactly this thing. Just because we don't have that infrastructure in place fully doesn't mean that the motivation for it doesn't apply to us. \ - \ +DE: Well if you don't want to do that, that's okay. But I think, as a committee we have to put in this effort because the past employment dates imply, whether the contribution was licensed under the IPR are agreements. so, you know there's some current googlers and some former googlers on the IPR exceptions list. So someone will have to track down whether their contributions were under the agreement. Did you have any idea how we could avoid that? that? \ +\ +SYG: I do not. \ +\ +DE: Okay, So what is it like is it is that is that supposed to be be public info? You know. really editors shouldn't merge Patches from people who they aren't sure. our licensing. Their things According to I, the IPR paintings. So what I was what I was vice chair, I was checking for this stuff all the time and then I expressed to other people that they should check for this stuff. Now, we generated in exceptions list over time. because it wasn't being checked for. So I want to I want to go back and fix the situation. Like someone. so one of these items here is, maybe we should make lighter weight processes for noting the status of people. but in the w3c there's all these Bots and tools that the W3C produces to enforce exactly this thing. Just because we don't have that infrastructure in place fully doesn't mean that the motivation for it doesn't apply to us. \ +\ SYG: Well, I'm confused. My concern is about I thought we were talking about past contributors trying to attribute and trying to see if IPR cover them for crew for emerging current contributions. That is not in question, right? -DE: The hole we have for current contributions around proposal repos where we're not really checking that these only come from Members or people who signed the forms. For past contributors, many patches were merged where we haven't yet traced, what the, what The agreement status is. But those are equally important, I think. you know, these are recent past contributions. They're like in the past five years \ - \ +DE: The hole we have for current contributions around proposal repos where we're not really checking that these only come from Members or people who signed the forms. For past contributors, many patches were merged where we haven't yet traced, what the, what The agreement status is. But those are equally important, I think. you know, these are recent past contributions. They're like in the past five years \ +\ USA: One thing that I wanted to note was that was quickly scanning through the list. 
I found out that a number of names on the list I could recognize as node.js contributors, if you like this idea I could post on their internal GitHub Discussions list. I guess is the is the thing, but get a movement DE: Yes it would be really great if you did encourage those people. to go and sign the form. then we can remove them from the exceptions list. Also, I believe that a number of the people on the exceptions list are employed or were employed by member organizations. And so if you can take the time to look and kind of note those people, then we can you know, including in the, in the issue. That's linked from the the Agenda item. then we can put them in the appropriate category either delegate or or Emeritus, We might want to improve processes for this. And so that's kind of a work item here listed. That would be really helpful. Arguably nobody should be responsible for this, like whose job is it? But well, we have to make sure it gets done. -SFC: It looks like I'm next on the queue. I'll just note that the Unicode Consortium we went through a fairly large revamp of all of our IP policies involving specs and especially our libraries this summer. The thing that we largely landed on was just using the Apache CLA and using the CLA assistant for all of all contributions to Unicode repositories. It's been a fairly smooth roll overall. Just speaking from, you know, some other experience working with the standards body, that's what they landed on. So I don't know if that's relevant or not but I thought I'd point that out since their topic is here. \ - \ +SFC: It looks like I'm next on the queue. I'll just note that the Unicode Consortium we went through a fairly large revamp of all of our IP policies involving specs and especially our libraries this summer. The thing that we largely landed on was just using the Apache CLA and using the CLA assistant for all of all contributions to Unicode repositories. It's been a fairly smooth roll overall. Just speaking from, you know, some other experience working with the standards body, that's what they landed on. So I don't know if that's relevant or not but I thought I'd point that out since their topic is here. \ +\ DE: Yeah, I think it's an interesting idea. Yes, CLAs are very complicated. Yeah, if someone wants to work on instituting a CLA then let's be in touch. but I guess they don't see an exact need for one right now. Please get in touch about some parts of this topic or offline with me, like on Matrix. If you're interested in this topic, I would really appreciate your help. - ### Conclusion/Decision - ## Motivating use cases for Module Harmony proposals Presenter: Kris Kowal (`KKL)` -- [proposal]() - -- [slides]() +- slides KKL: All right. so today I wanted to give a brief update on module Harmony and it's motivating use cases. This is largely the same presentation I gave last time I spoke except transposed to emphasize the motivating use cases and then tracing them back to the layers and other proposals regarding modules. By the end of this presentation I'm hoping that you will have an a better appreciation for the coherence of all of the proposals that are involved in this and and be better prepared to understand the the four proposals touching on modules presented this week, so this is highly abbreviated and emphasizes the use cases. @@ -615,7 +568,7 @@ KKL: Hot module replacement, like bundling, benefits from a lot of the same the KKL: That's that's a good place to start. Let's go on. 
We can also do non-JavaScript modules as long as they're host-provided, like JSON and Wasm but this puts hosts in a position where they get to gatekeep what languages are supported, for a coherent with the language ecosystem. So what if we wanted to do non-host-defined or non-JavaScript modules and some of the examples of this would be JSON when the host doesn't provide it (which presumably won't be long for any of them), but there are holdouts. Wasm when it's not host defined. and then the biggest win of having a solution for non host defined non-JavaScript modules also provides us an opportunity to solve the assets and modules in the ecosystem and without adding other things that to the language. we that we can't And then of course. anticipate today the largely and then also I specifically say scoped experiments because I think it's also important to note that these kind what what kinds of modules and are involved in a module graph can be very application-specific and need not be generalized to the host. -KKL: And yes in order to do this you need a minimal virtual module source protocol, as opposed to a maximal on. I'm striking a distinction here between what's easy and what's difficult. Easy that unlocks most use cases and difficult one that unlocks everything up to and including virtualization of JavaScript itself. +KKL: And yes in order to do this you need a minimal virtual module source protocol, as opposed to a maximal on. I'm striking a distinction here between what's easy and what's difficult. Easy that unlocks most use cases and difficult one that unlocks everything up to and including virtualization of JavaScript itself. KKL: CommonJS: one of the motivating use cases for module proposals to improve the participation of CommonJS and these and module and there are a number of ways to do this and I'm expecting them to evolve in our heuristics to evolve, and in fact for those heuristics the necessarily be application specific in some cases. I do not believe that CommonJS participation in esm generalizes well enough that it will ever make it to 262. I don't wish to try. @@ -633,14 +586,11 @@ SYG: Seems like this is just an overview. So thank you for that KKL. Unsurprisin RPR: All right. nothing more in the queue. so, we shall advance. - ## Is ECMA402 allowed to extend ECMA262 prototypes? Presenter: Richard Gibson (RGN) -- [proposal]() - -- [slides]() +- slides RGN: So, what we are looking at is an issue that came up in the context of Temporal. Just a little bit of background for it. There are a number of properties on the prototypes of various Temporal classes such as PlainDateTime and ZonedDateTime and YearMonth that are necessary for the ISO 8601 calendar: things like `year`, `month`, `monthCode`, `daysInYear`, and so on. Temporal also defines non-ISO 8601 calendars in ECMA-402, some of which have aspects to them that lack analogs in the 8601 calendar. And in particular, some of those calendars have the concept of eras. The most familiar example to people is probably going to be the Gregorian calendar, where you might have learned about BC and AD or BCE and CE. You know, how are years counted before year one? And because such calendars are required for an ECMA-402 implementation, the current state of the Temporal proposal is that property accessors for that data are defined in 402 rather than 262. An issue was raised regarding whether or not that is acceptable. 
In general, ECMA-402 extensions take the form of a subgraph of properties and objects that are accessible from the Intl object, or of methods that are explicitly called out into 262 as being overridden in 402 such as toLocaleString. So, even though hosts in general are allowed this kind of arbitrary extension, it's not something that currently exists inside of our specification cluster itself. And this being unusual, an issue was raised in Temporal, it was discussed a little bit in TG2 and a decision was made to bring it to TG1 plenary, which is why I'm here today. @@ -686,7 +636,7 @@ MM: So whether the 262 spec itself currently allows this in some sense, we're th WH: I see this kind of thing as just factoring the spec into parts which are internationalization-related and parts which form the core of the language, and sometimes those parts interact in a way where it is useful for Ecma 402 to be able to stick properties onto objects defined in Ecma 262. I see nothing wrong with that. It's just another abstraction mechanism, and we can coordinate in the committee to make sure that our two TGs are not working against each other. So I see nothing wrong with allowing this. -DE: Yeah, I guess I want to agree, sort of, ad MF was saying that the important thing isn't what we can or can't do, but we want to do. In a lot of ways I just sort of disagree with the way that ES6 was was specified in terms of what host and implementations can do, with some ideas that implicitly you could extend the syntax and everyone would kind of have to do that for certain cases. I think we move more towards a model of a complete description of where you have an embedding API. That's the whole state, and I'm happy with that Evolution path. even within that we can still decide how much we expose embedding API. We want to encourage adding to existing APIs, like in web APIs where IDL permits ways of adding to existing APIs, but this is mostly used to add to globals or maybe sometimes to add to specific other things. So yeah we could decide what kind of balance we want. in multiple ways. +DE: Yeah, I guess I want to agree, sort of, ad MF was saying that the important thing isn't what we can or can't do, but we want to do. In a lot of ways I just sort of disagree with the way that ES6 was was specified in terms of what host and implementations can do, with some ideas that implicitly you could extend the syntax and everyone would kind of have to do that for certain cases. I think we move more towards a model of a complete description of where you have an embedding API. That's the whole state, and I'm happy with that Evolution path. even within that we can still decide how much we expose embedding API. We want to encourage adding to existing APIs, like in web APIs where IDL permits ways of adding to existing APIs, but this is mostly used to add to globals or maybe sometimes to add to specific other things. So yeah we could decide what kind of balance we want. in multiple ways. PDL: Hello. just to clarify my understanding, because I think this there's a misleading thing here about implementations versus hosts vs. Whatever. Because to my mind and extension is not an extension of a host or an extension of its implementation, within extension of a specification. So, if an implementation decides to implement 262, that's great, and if it also implements the extension 402, for internationalization, and that's also great, but that the 402 is an extension to the specification. So the real question is: where is it written down? 
Is it written down in the core specification that everybody has to implement? Or is it written down in an extension that's also subject to the quote, unquote terms and conditions of this committee, right and it is part of this committee, and it is an extension to the specification. So, it's literally a question of, where do we write it down? And in that case, and that's why I'm actually contrary on a strong yes. Because I don't want a 262 specification or an implementation that intends to implement 262 to be obliged to implement a whole bunch of properties on built-in globals, because it knows it will never need because it decided not to implement 402. I'm imagining here and embedded like Moddable: why would it ever want to implement the getters, Even if they do nothing, and if they do nothing, what would they do? Return undefined probably, I don't know. Why would we want to force them to implement that by forcing it to be specified in 262 when we have a quite logical place is 402, which is also under the purview of this committee. So to me this is you know, I was even asking when we came up when or when we hit this I was even asking should we even put this onto the agenda? Because to me the answer seemed to logical or so obvious. I guess I was wrong on that one, but I'm still of the same opinion that the answer should be obvious. @@ -718,25 +668,19 @@ RGN: Very much. Remaining TCQ items: - - -* (HAX) It's possible an impl only support some calendars. - +- (HAX) It's possible an impl only support some calendars. ### Conclusion/Decision - - -* Consensus for the "narrow yes" solution. - +- Consensus for the "narrow yes" solution. ## Intl NumberFormat v3 Presenter: Shane F. Carr (SFC) -- [proposal]() +- proposal -- [slides]() +- slides SFC: I'm not going to give a full detailed overview of the proposal. I'm just going to give an update on the parts that have changed. So, since the last proposal we have, made the following change to the roofing, you numb to the use grouping option, based on feedback from Kevin Gibbons, among others. We converged on special casing, the strings, true and false, and then throwing an exception on anything that is not one of these explicit choices that's shown here in the table. That PR landed, and that's one change that has been made. @@ -760,23 +704,19 @@ SFC: Yes. a good question. The removal of the ordering was already presented in SYG: I see. Okay, that's that sounds good to me. I just want to make sure what the motivation was. - ### Conclusion/Decision - - -* Consensus for the presented PRs - +- Consensus for the presented PRs ## eraDisplay option for Intl.DateTimeFormat Presenter: Shane F. Carr (SFC) -- [proposal]() +- proposal -- [slides]() +- slides -SFC: Okay. So in intl-related proposal. What's the problem? So, the problem that we're trying to solve is this. What I showed here on the screen, this is code, I'd ran this code in Chrome and Firefox and other browsers do the same thing. I can put into completely different dates and I get the same output. Oh my goodness, there's something broken here. What's broken? What's broken? Is that the date on the top is, you know, is A positive date and the one on the bottom is a negative date. So it's like 2021 years before Year One in the Gregorian calendar, this is in the Gregorian calendar output. Yet ecma 402 does not currently permit implementations to add that. The bottom one is actually BC or BCE. So the eraDisplay fixes that. So how does it fix it? It adds a new option to Intl.DateTImeFormat. 
with three choices. First, we have always to always show the era, in this case, being DC or AD. we have never, which is always hide the era. I don't know why you'd want to do that before. But perhaps, if there's enough context Elsewhere on the page, you might want to hide the era. And then the third one is auto, which is to show the era if and only if it is different from the reference era - the reference era being defined in the specification as the era of the current dates. Temporal that now or whatever. So, that's the environment reference era. So currently of course, you know, you just saw that the default is basically never, because we don't show the era. So the change with this proposal is to have the default behavior be auto. So it's a change to the current behavior as well. In addition to giving an option that you can fiddle around with and change to your pleasing. So what makes this proposal interesting? +SFC: Okay. So in intl-related proposal. What's the problem? So, the problem that we're trying to solve is this. What I showed here on the screen, this is code, I'd ran this code in Chrome and Firefox and other browsers do the same thing. I can put into completely different dates and I get the same output. Oh my goodness, there's something broken here. What's broken? What's broken? Is that the date on the top is, you know, is A positive date and the one on the bottom is a negative date. So it's like 2021 years before Year One in the Gregorian calendar, this is in the Gregorian calendar output. Yet ecma 402 does not currently permit implementations to add that. The bottom one is actually BC or BCE. So the eraDisplay fixes that. So how does it fix it? It adds a new option to Intl.DateTImeFormat. with three choices. First, we have always to always show the era, in this case, being DC or AD. we have never, which is always hide the era. I don't know why you'd want to do that before. But perhaps, if there's enough context Elsewhere on the page, you might want to hide the era. And then the third one is auto, which is to show the era if and only if it is different from the reference era - the reference era being defined in the specification as the era of the current dates. Temporal that now or whatever. So, that's the environment reference era. So currently of course, you know, you just saw that the default is basically never, because we don't show the era. So the change with this proposal is to have the default behavior be auto. So it's a change to the current behavior as well. In addition to giving an option that you can fiddle around with and change to your pleasing. So what makes this proposal interesting? SFC: It seems like a fairly obvious proposal but it's a little bit interesting because it has runtime pattern selection. So to do run what I mean by runtime pattern selection is that since all date-time format needs to be able to choose based on the input date which pattern to use to do the internationalisation of the date. In this case, is there an era field or not? And the error could, you know show up before and after these localized patterns. And unlike things like our cycle, we actually have to be able to choose the pattern at runtime, like, we can't choose it in the constructor, using the format method. There's some precedents for doing this. 
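Before getting to those precedents, a minimal sketch of the option just described (option name and values per the proposal; output is illustrative):

```js
// Illustrative sketch; under the proposal the default would be 'auto'.
const fmt = new Intl.DateTimeFormat('en', { eraDisplay: 'auto' });
fmt.format(new Date(2021, 0, 1));  // era omitted: same era as the current date
fmt.format(new Date(-2021, 0, 1)); // era shown (BC/BCE): differs from the reference era
// eraDisplay: 'always' always shows the era; 'never' always hides it.
```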
DateFormatRange, which this committee approved for stage for a couple of years ago now, is the first proposal that did this where it actually the way that we implemented it was that in sold a time format gains an additional internal slots one for the like default pattern and then add additional internal slot for the extra pattern for range formatting. Temporal is is doing this again to supports formatting of the different Temporal types, like PlainDate and PlainDateTIme and ZonedDateTime those are all getting their own internal slot. So our fault, we're following the same pattern set forth in those in those examples by adding an additional internal slot. So here's what the suspect looks like a little bit, the stuff toward the bottom is a added. So we add a new pattern era field that contains the era-specific pattern. And then this is the code for how we selected at runtime. The spec text for how we selected at runtime shown here. @@ -784,30 +724,26 @@ SFC: So, the status of the proposal. So the stage 2 entry requirements. Are you SFC: So yes, I'm asking now for, I'd like to ask for TG1 consensus on moving forward with this proposal at stage 2. -RPR: There is no one on the queue. No one's asking me any questions today. I like when people ask me questions, obviously, as we do at any stage of constant, which were also asking, anyone in support, support. We? have a plus one from USA. Thank you. +RPR: There is no one on the queue. No one's asking me any questions today. I like when people ask me questions, obviously, as we do at any stage of constant, which were also asking, anyone in support, support. We? have a plus one from USA. Thank you. -SFC: I also want to ask for stage three reviewers stage, three reviewers you know? I think maybe this proposal can go for stage 3 in a couple meetings from now. So I'm also looking for maybe stage three reviewers if you're interested. In. learning more about 204and the Ecma ecma 402, you can be a stage three. Reviewer would be nice to get a couple people to vocally. sign up for that. Luke. now, objections. +SFC: I also want to ask for stage three reviewers stage, three reviewers you know? I think maybe this proposal can go for stage 3 in a couple meetings from now. So I'm also looking for maybe stage three reviewers if you're interested. In. learning more about 204and the Ecma ecma 402, you can be a stage three. Reviewer would be nice to get a couple people to vocally. sign up for that. Luke. now, objections. RPR: So I think we can we can say that this has stage two. and then, Joe question of stage three reviewers did anyone anyone put their hand up? Oh, here we go. EAO has said, they can review. - ### Conclusion/Decision - - -* Advancement to stage 2 -* Stage 3 Reviewers: - * EAO - * DLM - +- Advancement to stage 2 +- Stage 3 Reviewers: + - EAO + - DLM ## Resizable buffers bug fixes (#104, #106, #108) and transfer future proofing Presenter: Shu-yu Guo (SYG) -- [proposal]() +- proposal -- [slides]() +- slides SYG: So table of contents for this agenda item is there are a few bugs that I want to quickly go through. That are bugs, so they are normative to get consensus on those bug fixes and then a design question about the transfer method, I like to spend most of the 30-minute time slot on. So there are three bugs. that are that need to be fixed in the suspect pointed out by constellation, who is a who works for Apple JavaScriptCore to my understanding. So thank you very much to Apple folks for sussing out some remaining spec bugs here. 
@@ -819,18 +755,18 @@ SYG: All right. Fix number. two. Is the screen updated to fix number to locate a (queue is empty) -SYG: And the final one, which I merged already because it was super wrong, before getting consensus here, which I hope is uncontroversial. So there are two arrays in the spec that assigns values from one typedarray into another TypedArray. This is my typedarray: I got set. and, this just reads the modified copy of that TypedArray and initializes TypedArray from the library. So there is type the right that set which kind of sets the values, which splices the values of one type of array into another type of the right, and there's initialize decoration type array, which is maybe make a TypedArray and you pass another TypedArray to its constructor and ask for values of that into the new time period, both of these arrays are similar and and they were incorrect in the draft spec on, similar ways along the two bullet points. I list basically one is that if the source type through a that you're copying from is out of bounds, they should be treated like they were detached. Plus throw instead of setting the length,???. That I honestly don't remember what exactly happened here. And there is a more mechanical bug in set TypedArray the way from TypedArray where because the source type is in fact now resizable, it should not be reading as you're going to be resizable, it could be length tracking of resizable, buffer is should not be reading the array length field directly. Instead, the issue is that you calling an AO that computes the length if its length, which is what this PR fixes. Okay. I see nothing on the queue. So I hope that is also uncontroversial. +SYG: And the final one, which I merged already because it was super wrong, before getting consensus here, which I hope is uncontroversial. So there are two arrays in the spec that assigns values from one typedarray into another TypedArray. This is my typedarray: I got set. and, this just reads the modified copy of that TypedArray and initializes TypedArray from the library. So there is type the right that set which kind of sets the values, which splices the values of one type of array into another type of the right, and there's initialize decoration type array, which is maybe make a TypedArray and you pass another TypedArray to its constructor and ask for values of that into the new time period, both of these arrays are similar and and they were incorrect in the draft spec on, similar ways along the two bullet points. I list basically one is that if the source type through a that you're copying from is out of bounds, they should be treated like they were detached. Plus throw instead of setting the length,???. That I honestly don't remember what exactly happened here. And there is a more mechanical bug in set TypedArray the way from TypedArray where because the source type is in fact now resizable, it should not be reading as you're going to be resizable, it could be length tracking of resizable, buffer is should not be reading the array length field directly. Instead, the issue is that you calling an AO that computes the length if its length, which is what this PR fixes. Okay. I see nothing on the queue. So I hope that is also uncontroversial. SYG: Which were direct implementation feedback from j.c. So again, thank you very much to JSC now. DLM: So I just wanted to say let those bug fixes look very reasonable, we explicitly support them. -SYG: Of course, thank you. All. 
right, now into the meat of this agenda item that I want to talk about. So it was brought up recently. now that we're getting ready to ship in Chrome. trying to tidy up some loose ends. Namely: HTML integration, and how this could be used in other web specs like streams. And one of the questions that came up is how this interacts with the transfer function. The transfer method is kind of orthogonal to the rest of the proposal. the proposal at its core is about resizing buffers and allowing typed arrays to track these responsible beverage in the transfer method was something that that kind of fitted in the same space that went in and I put it in the same proposal and what it does is it gives you an API to transfer your buffers. It detaches the source buffer, and it returns you a new buffer of the same contents or basically same contents with a length that you specify and transfer. And this can be implemented as a realloc, or zero-copy. And the idea of the transfer originally as originally motivated was basically realloc when needed, or zero-copy when they could. This proposal is originally proposed and got to stage 3 with the transfer semantics currently our that when you transfer an array buffer, transport does not preserve resize ability. The original use case that I posited, was that you would have resizable buffer, and what you want to do is you would transfer it after you'd like finish your work load, that no longer needs to resize it. You will transfer it to a new length or to the final length. and that would fix it into a new fixed length of our career. In a way that the implementation can optimize as a zero cost move, and then you can free up. The virtual memory space that you allocated ahead of time for the It's come up from the HTML folks, and the streams folks that would have a more natural use of transfer is basically an API to do what you currently do with transfer and structuredCloning, and then postMessage, and they're the most natural thing is not to do these fixed semantics where transfer always produces a fixed length buffer. But you would have transfer preserve the resizability and return a new buffer with the same responsibilities as the source buffer. The reason I bring this up now is that now that we're getting ready to ship, if we ship, the current behavior as specified, that kind of closes that. That’s not future-proof if It turns out there the majority use case in the future should be to preserve resizability. So we should make this decision now before we ship transfer. And, the API design questions here are (1) how to talk transfer what kind of array buffer to return? How do we tell it to return a resizable one or how to cook Ware reports, we turn a fixed length 1 and (2) should transfer preserve and restore the behavior? that we should give up its receiver by default and if so How do you override that default behavior? One way of thinking. of is you can mirror the constructor and admin options bag that specifies the next byte length which tells you, whether the result array buffer ought to be resizable or not, but this kind of has interactions with (2) about what the default which are the building should be what the behavior be. Should it preserve the resizability? If you take the default option, meaning when the options bag is not passed to mean that it should get its value from the receiver, preserve resizability. Then there's really no way to tell it exactly that you should produce a fixed length of something. 
So, if this preservation semantics, is in fact, that most natural one and judging from the new use cases that come up, and kind of convinced me that most people think the preservation semantics is the most natural one. Then we can't have this behavior, which means that the current specified behavior is not future-proof. I think there are few choices moving forward. (A) is to stick with the current design of transfer, ie always return fixed-length buffers, when we extend transferring the future to produce resizable buffers as a follow-on proposal. This explicit options bag must always be passed. It does not preserve resizability. We have another option, where we make transfer preserve resizability, but this means is that we cannot transfer from your size into fixed length. and we need a new method. Perhaps, trying to follow proposal. We have another option option where there's a, you can consider transfer to have a special overload, if it's not worry, if neither. parameters, not the options back, nor the initial length one or two new lines are cast but then you consider that a special overloading and you preserve your behavior, and otherwise you produce a fixed length or resizable buffer depending on whether the options bag was present, if there's at least one parameter present. That's feels a little bit weird, I guess. But I think these are the options going forward. But before we dive into that and I want to see your folks’ opinions on this. I'm going to just preempt this saying that since transfer is barely orthogonal, what I'm proposing. is the first things you get consensus on is to split out transfer from the resizable buffers proposal and not ship it with resizable buffers, demote the transfer thing to stage 2. And depending on the result of the discussion, here, I come back at a future meeting. with a separate program proposal for just transferred to try to advance the stage 3. Does anyone have concerns about that? I know, moddable was to the first implementor to do resizable buffers. +SYG: Of course, thank you. All. right, now into the meat of this agenda item that I want to talk about. So it was brought up recently. now that we're getting ready to ship in Chrome. trying to tidy up some loose ends. Namely: HTML integration, and how this could be used in other web specs like streams. And one of the questions that came up is how this interacts with the transfer function. The transfer method is kind of orthogonal to the rest of the proposal. the proposal at its core is about resizing buffers and allowing typed arrays to track these responsible beverage in the transfer method was something that that kind of fitted in the same space that went in and I put it in the same proposal and what it does is it gives you an API to transfer your buffers. It detaches the source buffer, and it returns you a new buffer of the same contents or basically same contents with a length that you specify and transfer. And this can be implemented as a realloc, or zero-copy. And the idea of the transfer originally as originally motivated was basically realloc when needed, or zero-copy when they could. This proposal is originally proposed and got to stage 3 with the transfer semantics currently our that when you transfer an array buffer, transport does not preserve resize ability. The original use case that I posited, was that you would have resizable buffer, and what you want to do is you would transfer it after you'd like finish your work load, that no longer needs to resize it. 
You would transfer it to a new length, or to its final length, and that would fix it into a new fixed-length ArrayBuffer in a way that the implementation can optimize as a zero-cost move, and then you can free up the virtual memory space that you allocated ahead of time. It's come up from the HTML folks and the streams folks that a more natural use of transfer is basically as an API to do what you currently do with transfer lists in structuredClone and postMessage, and there the most natural thing is not the fixed semantics where transfer always produces a fixed-length buffer, but to have transfer preserve resizability and return a new buffer with the same resizability as the source buffer. The reason I bring this up now is that we're getting ready to ship, and if we ship the current behavior as specified, that kind of closes the door. That's not future-proof if it turns out the majority use case in the future should be to preserve resizability, so we should make this decision now before we ship transfer. The API design questions here are (1) how to tell transfer what kind of ArrayBuffer to return - how do we tell it to return a resizable one, and how do we tell it to return a fixed-length one - and (2) should transfer preserve the resizability of its receiver by default, and if so, how do you override that default behavior? One way of thinking of it is that you can mirror the constructor and admit an options bag that specifies the max byte length, which tells you whether the result ArrayBuffer ought to be resizable or not, but this kind of interacts with (2), about what the default behavior should be. Should it preserve the resizability? If you take the absence of the options bag to mean that the result should get its resizability from the receiver, then there's really no way to say explicitly that you want a fixed-length result. So, if this preservation semantics is in fact the most natural one - and judging from the new use cases that have come up, I'm kind of convinced that most people think the preservation semantics is the most natural one - then we can't have this behavior, which means that the currently specified behavior is not future-proof. I think there are a few choices moving forward. (A) is to stick with the current design of transfer, i.e. always return fixed-length buffers, and when we extend transfer in the future to produce resizable buffers as a follow-on proposal, an explicit options bag must always be passed; it does not preserve resizability by default. Another option is that we make transfer preserve resizability, but this means we cannot transfer from resizable into fixed-length, and we would need a new method, perhaps in a follow-on proposal. Another option is that you can consider transfer to have a special overload: if neither parameter is passed - not the options bag, nor the new length - then you treat that as a special overload and preserve the receiver's resizability, and otherwise you produce a fixed-length or resizable buffer depending on whether the options bag was present, if at least one parameter is present. That feels a little bit weird, I guess. But I think these are the options going forward. Before we dive into that, I want to hear your opinions on this.
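For reference, a minimal sketch of the two behaviors being weighed (method and option names follow the in-progress resizable ArrayBuffer proposal and may change):

```js
// A resizable buffer, per the proposal:
const rab = new ArrayBuffer(8, { maxByteLength: 64 });
rab.resize(16);

// As currently specified, transfer() detaches the source and returns a
// fixed-length buffer of the requested length:
const fixed = rab.transfer(16);
fixed.resizable; // false under the current draft
rab.byteLength;  // 0, because the source buffer is now detached

// The open question is whether transfer() should instead preserve
// resizability by default (returning another resizable buffer), which is
// what the structuredClone/postMessage use cases suggest.
```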
I'm going to just preempt this saying that since transfer is barely orthogonal, what I'm proposing. is the first things you get consensus on is to split out transfer from the resizable buffers proposal and not ship it with resizable buffers, demote the transfer thing to stage 2. And depending on the result of the discussion, here, I come back at a future meeting. with a separate program proposal for just transferred to try to advance the stage 3. Does anyone have concerns about that? I know, moddable was to the first implementor to do resizable buffers. RPR: At the moment there no one on the queue. ABO: Yeah. so, I was wondering whether the idea was that that transfer would work the same as structuredClone, when transferring an array buffer. So if there, if we had the options to make it resizable, to set the possible sizes and so on, like is the idea to also change the HTML spec to add those options to structuredClone and postMessage? \ - \ +\ SYG: Not exactly. I think the only sensible behavior for structuredClone serialization and deserialization is to preserve the resizable buffer and indeed that is some folks’ mental model that it should mirror structuredClone and therefore transfer should also by default preserve resizability. What structuredClone cannot do is to say, I want to serialize this array buffer but with a new length, which is part of what transfer gives you the ability to give you a new line array buffer so you can like shrink it. but you don't have to copy it in the implementation because it gives you that information. Does that make sense? ABO: Yeah thanks. @@ -881,22 +817,18 @@ MLS: and we just did a simple copy. We didn't copy-on-write. SYG: Okay. think we just just like, get a realloc and depends on what realloc wants to be. Okay. Yeah. okay, and I invite Matthew and other folks who expressed opinions here. I'll try to help make a after binary. Make it make a new proposal. Repose a split this up, you can collaborate there. - ### Conclusion/Decision - - -* Proposal split, .transfer goes to stage 2 -* No objections to any normative bugs - +- Proposal split, .transfer goes to stage 2 +- No objections to any normative bugs ## Intl MessageResource for Stage 1 Presenter: Eemeli Aro (EAO) -- [proposal]() +- proposal -- [slides]() +- slides EAO: So yeah, effectively this is is what I'm looking to. do is split the existing in pulled out, messageformat stage one, proposal into two separate stage one proposals. for clarity in this context. The. meaning of a couple of terms ought to be clarified, because of course, we use the same words for many things. So what I'm saying, message here, that really means it's a message that's meant for human consumption, rather than a message going between computer systems of various descriptions. And when a resource is in this context, it's a collection of related messages which might have internal hierarchy to it as well. Now, the idea here is that separate the concerns that we have around single message formatting versus the formatting a whole resource a set of related messages at the same time and also to reflect the fact that as this foundational work is ongoing in the Unicode Consortium. Their resource work has been separated from the signal message work, and it makes sense for these two also match in this sort of structural way in what we are doing in global JavaScript. 
There is a difference in the advancement of language where these are, which these are so the single message, message format to specification is at the level where ICU72 includes a technical preview for the ICU4J. And there is a existing polyfill for the JavaScript implementation for the same. A message. Part of the whole a specification, but the resource is much more under development at this time. And also because these are effectively separate changes, it would be nice for these to be atomic. So, you know, we don't end up with a huge one thing when we could have one thing and then a second thing building on top of that. Now what does this actually mean? I'm now going through some slides with code. Please feel free to look at the slides. afterwards, but if this proposal of spitting is accepted, then effectively was left in the in pulled out message for my proposal is this sort of a structure where we can build a single instance of an Intl.MessageFormat around a single message, and then resolve that and then work with it at that level. The code for interacting with this API looks as you see on the screen, roughly with therefore the source this is another format to syntax. I'll show a little bit more of that later. And what sort of a message, when resolved and for method, does that look like and and really, how do you read the end quite often? How do you get a string out of that with the formatted values? And now, the proposal that I'm asking for here is that this parse resource would effectively, which was or already in what was earlier presented and accepted for stage 1, as a part of one big a proposal is to have this ability of having a static method parseResource which is able to take a string representation of an entire resource of messages and to build from this was is actually a JavaScript map object that might have hierarchical levels within it ending up with MessageFormat instances within. The way that you would end up using this sort of interface, is that you have the source such as the two messages arrive and depart they're just as examples and when you pass this source to the the forest, Resource method, you would end up with a map effectively and from this, you can get the actual messages that you have in there and then resolve and format them. Now, a very valid question at this point to ask is why do we need a new format for this in the first place? And unfortunately, there are actual proper answers to this the main of the main three of them are mentioned here. One is that we really need a format that establishes a good way and a solid formal way of communicating between developers and translators, who explicitly are not developers themselves. We need something like comments and metadata that we can attach to these messages and pass them on to the translators who need that context to do their work. And actually, if you start looking at the specifications, it's interesting to note that really almost none of the existing specifications for configuration file formats, for instance, really do not define how comments and metadata attach to specific messages or nodes of or values. In any case here explicitly, we want to format where this is well defined. Secondly we want that format supports organizing messages to some level. 
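As a rough sketch of the parseResource shape described above (the API is stage 1, and both the method names and the resource syntax shown here are illustrative):

```js
// Hypothetical resource source with two messages, `arrive` and `depart`;
// the actual resource syntax is still being designed upstream.
const source = `
arrive = {Arriving at {$place}}
depart = {Departing from {$place}}
`;

// parseResource would return a Map whose values are Intl.MessageFormat
// instances (method names here are illustrative):
const resource = Intl.MessageFormat.parseResource(source, 'en');
const arrive = resource.get('arrive');
arrive.format({ place: 'Helsinki' }); // e.g. "Arriving at Helsinki"
```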
And fairly, given that we are specifically needing to to have a format for message format to messages, which have a certain kind of a structure and a shape that is a little bit different from what strings in general or messages in existing other formats, or look like in particular that they end up often it's being multi-line. And having an interesting shape. but that's beyond the discussion. @@ -906,7 +838,7 @@ EAO: So the syntax that the very much work in progress syntax for what does some EAO: And then there's a bunch of links to where this is progressing. and that's it for my presentation. And yes. I am asking for stage 1 advancement for the MessageResource proposal here. -SFC: Yeah, thank you EAO for the presentation. I definitely support having two proposals for this, for reasons we've discussed previously. I think that it's going to be a lot better to focus the main MessageFormat stage 1 proposal on message strings, and it also corresponds to the output of the MessageFormat working group. So I support at the splitting, the proposals. One question, I wanted to bring up to the group because I thought, maybe this group would have some thoughts on it. is regarding whether a TC39 is the right body to for the bundle side of the proposal. I think that's you know, like, for example, you know, one way that, you know, bundled message models have been described to me in the past is like it's sort of like a CSS format, where it's like if it's a file format. And maybe I'm maybe I'm outdated in that perception, but you know, is TC39 the right sort body? Iis ecma402 to the right place to put the definition of the of that syntax in the parser for that syntax? you know and does this fit better in some other place? like you know in the in tag or something like that? So, yeah. But yeah, I provide support the proposal so I definitely supported going to stage 1 because it gives us room to explore these questions. +SFC: Yeah, thank you EAO for the presentation. I definitely support having two proposals for this, for reasons we've discussed previously. I think that it's going to be a lot better to focus the main MessageFormat stage 1 proposal on message strings, and it also corresponds to the output of the MessageFormat working group. So I support at the splitting, the proposals. One question, I wanted to bring up to the group because I thought, maybe this group would have some thoughts on it. is regarding whether a TC39 is the right body to for the bundle side of the proposal. I think that's you know, like, for example, you know, one way that, you know, bundled message models have been described to me in the past is like it's sort of like a CSS format, where it's like if it's a file format. And maybe I'm maybe I'm outdated in that perception, but you know, is TC39 the right sort body? Iis ecma402 to the right place to put the definition of the of that syntax in the parser for that syntax? you know and does this fit better in some other place? like you know in the in tag or something like that? So, yeah. But yeah, I provide support the proposal so I definitely supported going to stage 1 because it gives us room to explore these questions. KG: I'm not necessarily opposed to this going to stage 1, but I know that the sort of upstream proposal is itself only stage 1, and is not likely to advance in the near future. 
And I don't see much motivation to be worrying about the exact surface of what goes in what proposal right now given that all stage 1 represents is the identification of a problem which in this case is sorting out some method for handling multiple language resources. And like that is all we have committed to, by getting MessageFormat to stage 1, and this seems like - if we had advanced MessageFormat to stage 2 with a particular API in mind, then a follow-on proposal might then make sense. But we do not have anything like a commitment to an API shape for MessageFormat it right now. So I'm not sure it makes sense to be trying to add follow on proposals, at this stage. Like I said, I'm not opposed, I just think that this seems a little bit premature, given the very early stage of all of this. @@ -942,10 +874,7 @@ EAO: Excellent, thanks. ### Conclusion/Decision - - -* Stage 1 for MessageResource - +- Stage 1 for MessageResource ## Temporal status overview and normative changes @@ -993,7 +922,7 @@ RKG: Yeah, if we want to not worry about time zones syncing we can make use of t YSV: Sounds like we're good to move on. -USA: I just wanted to talk a bit about the time calendars. I think it would be a mistake to disregard PlainTime calendars. In order to speed up work and not get distracted we decided to split this out of the current proposal and defer this until later. But well, first of all, what we talk about here, when we talk about time calendars, in the general calendaring spaces as time scales. So, these do exist even outside of fun examples we talked about (Mars time and stuff) - there's atomic time, there's mean solar time, GM1 and so on. So it is okay not supporting them at the moment. But I'd be against removing future the pathways we could add them in the future or add support for time scales in the future. +USA: I just wanted to talk a bit about the time calendars. I think it would be a mistake to disregard PlainTime calendars. In order to speed up work and not get distracted we decided to split this out of the current proposal and defer this until later. But well, first of all, what we talk about here, when we talk about time calendars, in the general calendaring spaces as time scales. So, these do exist even outside of fun examples we talked about (Mars time and stuff) - there's atomic time, there's mean solar time, GM1 and so on. So it is okay not supporting them at the moment. But I'd be against removing future the pathways we could add them in the future or add support for time scales in the future. PFC: Okay, that's good data. @@ -1013,17 +942,17 @@ DE: Yeah, it's fair to you to disagree potentially, with the concepts that that PFC: Well, I see the ready-to-ship agenda item has been bumped to the next meeting. Sorry, I didn't know about that. -DE: Anyway very happy about all this progress. This is great. +DE: Anyway very happy about all this progress. This is great. SYG: This may not be directly related. This might go into the ready to ship this question as well, but V8's experience, I want to back up FYT here. I think it's not exaggerating to say that we have not had a good time implementing this proposal. And I think it's not really the fault of the proposal pushing against the limits of the staging process and how well it works. For smaller more bite-sized things, at stage 3, we have much higher chances of getting to a point of stability on stage 3 entrance. For something of this size we just can't do that. You need to have a back(?) 
will be established with you actually try to implement it. To get the things, the bugs, the issues that will result in this high volume of normative changes that seem to keep going. So I would appreciate that we as a committee do some post mortem here eventually. How do we approach proposals of this size in the future? I think the current working mode has not been the most productive that it could be. FYT: Yes, I agree with the point. I think I mentioned earlier the issue with this proposal. One of the reasons is because of the size, which I totally agree with, but I think the other issue is actually... I want to bring this up because I know in the future there will be people making proposals this way. I do think a lot of issues in retrospect are because it was originally based on a JavaScript polyfill, as prototype. In that approach, there's a lot of things that cannot be surfaced. When you write a spec text, which is the ???, right? I'm not against working via a polyfill, but I do want to point out that with that approach there are issues very difficult to spot if that is the reference from beginning. So I would encourage people who want to try to do some things you propose, to be careful of that. I mean, honestly, it's not a wrong thing to do. I'm just saying there are things you cannot do that way. -USA: FYT I'm afraid what we're talking about here might be a cyclical problem. You might be correct that a lot of problems might have stemmed from the fact that the source of truth for some time was a JavaScript proof of concept, but at the same time that approach was chosen by the champions group because of the size. Because it was such a large amount of spec work to be done, without really many pathways to verify that. Maybe in the future with better tooling we wouldn't need that, but at the moment it was the only way to write such a huge proposal. +USA: FYT I'm afraid what we're talking about here might be a cyclical problem. You might be correct that a lot of problems might have stemmed from the fact that the source of truth for some time was a JavaScript proof of concept, but at the same time that approach was chosen by the champions group because of the size. Because it was such a large amount of spec work to be done, without really many pathways to verify that. Maybe in the future with better tooling we wouldn't need that, but at the moment it was the only way to write such a huge proposal. PFC: I think we've been going for 40 minutes. I can't see the queue. How many things are left on it? -YSV: We have one item left on the queue, but that may be better handled separately, as we're already getting into the topic of the post-mortem on a large proposal, like records and tuples. If it's all right, I'd be happy to allow you to move on with your slides. +YSV: We have one item left on the queue, but that may be better handled separately, as we're already getting into the topic of the post-mortem on a large proposal, like records and tuples. If it's all right, I'd be happy to allow you to move on with your slides. PFC: Okay, let's do that. (Slide 19) I'll quickly present the normative PRs that we have for this time, and ask for questions on those and then ask for consensus. (Slide 20) All right. We have a change to how we handle non-numeric inputs to APIs that expect numbers. 
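A minimal sketch of what this change amounts to (illustrative; the details follow):

```js
// Under the previous draft, non-numeric inputs silently became 0, so e.g.
// new Temporal.Duration(NaN) produced a zero-length duration. After this
// change they throw, aligning with Web IDL's double and [EnforceRange]
// long long conversions:
new Temporal.Duration(NaN);       // RangeError
// undefined still means "use the default value":
new Temporal.Duration(undefined); // a zero-length duration
```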
So the problem that we are trying to solve, you can see here, where you try to construct a Temporal.Duration, or Temporal.PlainTime, or any number of other APIs that expect numbers here, with things like NaN, a string, a regular expression, or the Temporal object. All these would silently convert to zeros, so that you would get a zero-length duration or midnight time. That seems like it's not good for programmers. What we've done in this PR is to align the conversion better with Web IDL conversions for doubles and for [EnforceRange] long long types, which are two of the four types that Web IDL recommends new APIs use. I did an informal poll in Matrix about a change like this, a few months ago and there seemed to be enough positive support. I know that previously for APIs on Array.prototype we've chosen not to do this, but at the same time we've been happy to do this for new APIs that are not on an existing prototype in the language. So that's what this change would do: it would align with Web IDL conversions by throwing RangeError for all these non-numeric inputs, where a number is expected. And then treating arguments with a default value by first assigning the default value to cover up any undefined that you pass it. @@ -1057,7 +986,7 @@ SYG: It seems like a reasonable reason to me, thanks. SFC: I think PFC went into fairly good detail on this. If I had to summarize, preventing foot guns based on educator feedback is something that we've previously discussed in other proposals at stage 3, about valid feedback for addressing in stage 3. Implementer feedback is definitely one, and then educator feedback because we don't get that feedback before proposals reach stage 3, and preventing foot guns is totally in the ballpark. And we we definitely took a scalpel change, as PFC just just said, we could have done this in a much bigger way. We we took a scalpel in order to cut out the foot gun, and I think that's totally in scope for a stage 3 change. -YSV: Just one quick note on the time we are getting into the last five minutes on this topic. And we have one item on the queue from DE. Please go ahead. +YSV: Just one quick note on the time we are getting into the last five minutes on this topic. And we have one item on the queue from DE. Please go ahead. DE: Overall, I think the changes that are happening in that are proposed today are well-motivated. I'm in favor of them. It is unfortunate that changes of this size are still occurring, but I think the roadmap that PFC set out at the beginning of the presentation shows that hopefully these are drawing to a conclusion. I was worried for a while that the changes were just kind of an endless flow, But it seems like these are well scoped. About what kinds of changes make sense. Regarding being at stage 3, I do think it's regrettable to have educator feedback only coming at stage 3. In the future I hope that we can encourage more educator feedback in earlier stages and I think the work that the work that RCA is is doing in terms of building a more active educator group should hopefully bring this to happen. As far as which things procedurally make sense to be in scope to happen at this stage: remember that the committee can come to consensus on normative changes for even the final specification, for things that are at stage 4. So the committee can agree on a semantic change for something that's at stage 4 as well. But it is held to a high bar. The default is that we wouldn't add anything. 
I do think that we should remember that stage 3 does mean that we all made our best efforts to fully design this and it's really just because of the magnitude of this proposal that the things worked out a little bit differently. I think it makes sense to keep things stage 3 and not propose a demotion as we were discussing with with groupBy, because in the past such demotions have have just inhibited progress, and not recognized that things are still incrementally moving forward, and increasingly solid and complete. So, I'm happy with this state of progress, even if it's complicated to conceptualize and talk about. @@ -1069,21 +998,17 @@ PFC: Yeah, thanks. I understand. I think that stage 3 is the compromise between YSV: Do we have any explicit statements of support other than DE and DLM? Anyone objects to the changes that have just been presented getting consensus? It sounds like consensus. - ### Conclusion/Decision - - -* No objections to any normative changes (and explicit support for DE and DLM) - +- No objections to any normative changes (and explicit support for DE and DLM) ## Is ECMA402 allowed to extend ECMA262 prototypes? - What does "narrow yes" mean? Presenter: Shane F Carr (SFC) -- [proposal]() +- proposal -- [slides]() +- slides SFC: What does "narrow yes" actually mean? We should write that down in the notes. Okay. Our understanding of it is that we as a committee commit to not make such extensions without a note in 262, or other text into 262, describing that 402 is specifying what they are. @@ -1093,14 +1018,11 @@ SYG: Is it going to be exactly like toLocaleString because toLocaleString is, is SFC: Yeah, that that allegedly understanding is want to make sure that that was what we agree on. a point of order. Can we have a queue item? -FYT: So so my question is that with that conclusion, does it means that the era and eraYear need to be defined or not need to be defined in the first 14 chapter of Temporal? I just try to understand that If. +FYT: So so my question is that with that conclusion, does it means that the era and eraYear need to be defined or not need to be defined in the first 14 chapter of Temporal? I just try to understand that If. RGN: We did not ask that question and we did not answer that question. - ### Conclusion/Decision - - -* We as a committee do not allow such extensions by specifications under our control except where indicated in the text of ECMA-262. -* The form has not been defined yet (note vs spec text) +- We as a committee do not allow such extensions by specifications under our control except where indicated in the text of ECMA-262. +- The form has not been defined yet (note vs spec text) diff --git a/meetings/2022-11/nov-30.md b/meetings/2022-11/nov-30.md index ed048653..415fce14 100644 --- a/meetings/2022-11/nov-30.md +++ b/meetings/2022-11/nov-30.md @@ -4,8 +4,7 @@ **Remote attendees:** - -``` +```text | Name | Abbreviation | Organization | Location | | -------------------- | -------------- | ------------------ | --------- | | Frank Yung-Fong Tang | FYT | Google | Remote | @@ -46,15 +45,13 @@ | Istvan Sebestyen | IS | Ecma | Remote | ``` - - ## Intl Enumeration for Stage 4 Presenter: Frank Yung-Fong Tang (FYT) -- [proposal]() +- proposal -- [slides]() +- slides FYT: Okay. hi everyone. My name is Frank, I work for Google on V8 internationalisation and also last year also spend a lot of time there are working on Temporal and today, I'm going to talk about two different proposals. The first ones are asking for stage four advancement. 
This one is called Intl Enumeration API for stage 4. So the charter of this API is Is to let Intl, which already exists for about 10 years to able to return to the caller the list of supported values of certain option that's already pre-existing. this in ecma402 API. including calendar, collators, currency, numbering systems time zone and unit. @@ -84,19 +81,17 @@ FYT: So if there are no other questions or feedback as well like to formally ask BT: Right. Frank is asking for stage four, and do we have any any objections? I hear a lot of explicit support. All right, I hear no objection. So I think that is stage 4 granted Congratulations. - ### Conclusion/Resolution -* Stage 4 - +- Stage 4 ## Intl Locale Info stage 3 update Presenter: FYT -- [proposal]() +- proposal -- [slides]() +- slides FYT: So the next was Intl Locale Info API. Originally when tomb, I think probably one have month ago when I put on agenda. This particular API. I was thinking about asking for state or advancement but about a month ago I think we find a tissue so we are not asking for stage 1 of the assessment today. but we do need your advice abouthow to resolve Issue together. @@ -106,7 +101,7 @@ FYT: So again, the history, we advanced to stage 1. in to September 2020 and Jan FYT: Basically, in this API would be adding seven getters to the Intl Locale object. here is one of the issue we try to get your help from that. Currently those things are getters and we - let me talk about the recent change. So one of the recent change, which I think we got resolved, we handed over while is when we try to return what the particular locale when (?) localization is co our billable for that locale. We return a list, an array. We used to say that array is stored in the order of the preference usage of that collation in the Locale. And I think Andre has pointed out, They're actually currently do not have such information the CLDR data, we actually available to us or do not have the preference. There are order, but the order is not guaranteed to be. preference order with a locale. So after a lot of discussion at the the committee we reached agreement, that we make a change to. I think we can come back here last night, about advice for General picture And I think the agreement is that in that particular case it will sorted in alphabetical order for that. So, we make that change and that got resolved in pull request 63, that's one of the reason original widest Gap kind of extend for quite a long time. -FYT: But then very recently we find another issue which I think is a blocking issue, the issue is this. so we currently that's seven function, we currently implement as a getter. The problem is that the return value are array, or object of object, okay? And so every time it gets called, we create this object. but it got pointed out that actually we don't cache that object, everything go time. those are getter, there are not function. function. And it will create new object return, right? Those shouldn't be changed. So the issues that got filed, you can look at issues 62 the issues that we believe that maybe some issue there. I'm not 100% sure how important is that issue. I feel that could be important. So I think I don't want to rush it. Seems like they are two solutions, one is I'm not sure that to are good. One solution is that instead of of a getter we change to a function. So he every time I will create an object and return, a different object the other solution will just freeze it. 
So every time we create this object with freeze it so nobody can change it. but that things like not enough, right? So you you're still real current different objects, would freeze it. And, another part of that is, could be maybe internally with your cache is always always return, the same thing and never got change because it’s frozen, but that is something I really don’t want to do because that means the engine have to be able to remember that. already created thing in a cache, which will waste memory for something that really you don't need to use it, right? So I really try to avoid that thing. I don't have a good solution, So I do want tc39 to give advice first. Is that that issue 62 reasonable. to address, or is a getter which returns a new object okay? Right, there's the first issue I want to ask advice. +FYT: But then very recently we find another issue which I think is a blocking issue, the issue is this. so we currently that's seven function, we currently implement as a getter. The problem is that the return value are array, or object of object, okay? And so every time it gets called, we create this object. but it got pointed out that actually we don't cache that object, everything go time. those are getter, there are not function. function. And it will create new object return, right? Those shouldn't be changed. So the issues that got filed, you can look at issues 62 the issues that we believe that maybe some issue there. I'm not 100% sure how important is that issue. I feel that could be important. So I think I don't want to rush it. Seems like they are two solutions, one is I'm not sure that to are good. One solution is that instead of of a getter we change to a function. So he every time I will create an object and return, a different object the other solution will just freeze it. So every time we create this object with freeze it so nobody can change it. but that things like not enough, right? So you you're still real current different objects, would freeze it. And, another part of that is, could be maybe internally with your cache is always always return, the same thing and never got change because it’s frozen, but that is something I really don’t want to do because that means the engine have to be able to remember that. already created thing in a cache, which will waste memory for something that really you don't need to use it, right? So I really try to avoid that thing. I don't have a good solution, So I do want tc39 to give advice first. Is that that issue 62 reasonable. to address, or is a getter which returns a new object okay? Right, there's the first issue I want to ask advice. FYT: The second thing is that if that's not a stat of a, we need to change it. Which route is better? Is that we talk about TG2 and the conclusions that we should bring in here to ask what we feel is better to have TC39 to give us a guidance. That is so so there are two options so far. maybe there are other option, but two option is we know about is, first, to change all all the getter to just function to every time. caught it returned to create the object returned it. Although, every time you're going to say anything, but you don't need to remember it and the user cannot expect That will be exactly same object, you can expect content not to change, but not the same object. The second thing is that you freeze it and return, which I have a pull request but I don't really think that is self solve the issue. I suspect that will be a showstopper for going to stage 4 because that mandates days. 
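A rough sketch of the issue and the two options on the table (property and method names follow the Intl Locale Info proposal; illustrative only):

```js
const locale = new Intl.Locale('en-US');

// As currently proposed, these are getters that allocate a fresh object or
// array on every access:
locale.weekInfo === locale.weekInfo; // false, a new object each time

// Option 1: keep the getter but freeze (and possibly cache) the result.
// Option 2: make it a method, so the per-call allocation is explicit:
locale.getWeekInfo();                // hypothetical method form
```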
Maybe a surgical move for this thing in a limited way, but I want to ask for feedback. @@ -142,7 +137,7 @@ FYT: All right, that's my mistake. Have you found a stoic? I didn't realize that BT: Dan has another item on this topic as well. -DE: Yeah. When I went to speak to both of those questions that Matthew raised. On the first one, why use a getter instead of just having a property that's secretly lazy. I think such an implementation would work better in some engines than others, at least historically. I think people can correct me if I'm wrong. SpiderMonkey I think had to make a decision for such secretly lazy properties, and V8 kind of didn't tend to do that. That particular thing. At least not in cases like this. so, we want something that's going to be efficiently implementable across engines. So that's a reason to avoid expecting that engines will make a property secretly lazy. About the lazy allocation of these properties. I think there's two different kinds of memory usage that we should analyze a little bit differently from each other. One kind is the memory usage of like ten words of storage or so that are initially, all undefined for these. for these things, it seems really important to me to lazily allocate the actual arrays that holds the information, because that's a whole lot of allocations. But on the other hand, oversizing the allocated object a bit to hold these initially null pointers, I think that's a lot cheaper and I think that that is the kind of thing that we could afford. So, I'm not sure if we have to be, you know, completely optimized with respect to that. So finally, Frank mentioned that getter should only return Primitives and not objects And this is a - I'm not convinced that we should adopt this invariant in general. I think in you know we have a pretty small standard libraries right now. So there aren't a lot of gators that we can look at for reference but in the web platform, there's lots of usage of getters that return objects. and I think that's a - I think that's a useful pattern +DE: Yeah. When I went to speak to both of those questions that Matthew raised. On the first one, why use a getter instead of just having a property that's secretly lazy. I think such an implementation would work better in some engines than others, at least historically. I think people can correct me if I'm wrong. SpiderMonkey I think had to make a decision for such secretly lazy properties, and V8 kind of didn't tend to do that. That particular thing. At least not in cases like this. so, we want something that's going to be efficiently implementable across engines. So that's a reason to avoid expecting that engines will make a property secretly lazy. About the lazy allocation of these properties. I think there's two different kinds of memory usage that we should analyze a little bit differently from each other. One kind is the memory usage of like ten words of storage or so that are initially, all undefined for these. for these things, it seems really important to me to lazily allocate the actual arrays that holds the information, because that's a whole lot of allocations. But on the other hand, oversizing the allocated object a bit to hold these initially null pointers, I think that's a lot cheaper and I think that that is the kind of thing that we could afford. So, I'm not sure if we have to be, you know, completely optimized with respect to that. 
So finally, Frank mentioned that getters should only return primitives and not objects, and I'm not convinced that we should adopt that invariant in general. We have a pretty small standard library right now, so there aren't a lot of getters we can look at for reference, but in the web platform there is lots of usage of getters that return objects, and I think that's a useful pattern. FYT: I do want to clarify, I'm not suggesting that. I just think that currently I can't see — at least I cannot find — any precedent to point to for doing that, and I think it has some additional issues, for example the one that just got raised, compared to a getter that returns a primitive type. I'm not saying we can't do it; I just think it's something where there's a lack of precedent to follow. @@ -156,14 +151,8 @@ MAH: I would rather keep simple spec steps and maybe have editors notes clarifyi DE: Yeah, I don't think that getters are especially complex. I think they're a great pattern to use and they make sense here. - - ?: sorry, I didn't see the queue. Anyone still on the queue? - - - - BT: Yeah, there's a couple of folks on the queue still. We've got EAO. EAO: Yeah. Just wanted to point out that if we had Records and Tuples that would be a great use case for this, but as we don't, it rather sounds like we'd avoid a lot of difficulty here if we just use something like getWeekInfo() as a function. So it's really a method rather than a getter. @@ -186,7 +175,7 @@ PFC (from queue): no need to speak strong preference for function. BT: And that exhausts the queue. -FYT: Okay. So let me propose this way. So I'll proposing to the everybody seems like a lot of people express function and then they are. I think Daniel mentioned both are ok, so I go to change it, trunk, Adder to function, for all those maps that would anyone object me to do that? Okay. thank you for your device. That's exactly what I need from you guys. I’ll go make a pull request. I'll ask couple people to review it but basically you're just changing from getter to function now. +FYT: Okay. So let me propose this way. It seems like a lot of people expressed a preference for a function, and I think Daniel mentioned both are okay, so I'm going to change the getters to functions for all of those. Would anyone object to me doing that? Okay, thank you for your advice — that's exactly what I needed from you. I'll go make a pull request and ask a couple of people to review it, but basically it's just changing from getter to function now. FYT: Still, I want to go through the stage 3 activity. Currently — around March 2022, which is about half a year ago — Chrome 99, Edge, Opera, and Safari all shipped it. Of course, now that we've changed it a little from getter to function, they will have to pick up that change. Mozilla has a bug open and we haven't seen clearly what is holding them back, so I'd like to hear from Mozilla whether there is anything else blocking and whether we need to help them in order to get there; I would like to figure that out. MDN has been edited, and test262 already has tests for the feature. Here are some of the test262 results, and the MDN table showing that basically just Firefox hasn't launched it yet. I had to put this together myself — there's no single place that shows it, so I kind of had to Photoshop it together.
I think all different colors together but they are there to have, will have passed but that you can click on the test 262 to see that I think it will be nice if I extend more of a testing there. @@ -216,14 +205,10 @@ YSV: I think you can find a clarification about his concern about issue 30 in th BT: All right. Thank you, YSV. Thank you. Awesome. Thank you, Frank. - ### Conclusion/Resolution - - -* Getters to become functions -* DLM to follow about with Andre offline about remaining blockers for Mozilla - +- Getters to become functions +- DLM to follow about with Andre offline about remaining blockers for Mozilla ## Records and Tuples @@ -255,7 +240,7 @@ RRD: Yeah, YSV, I wanted to give you a short answer. so it's mostly multiple asp PDL: So we started talking about this a couple years back before Robin introduced the problem. The problems I was encountering was that I was really looking for a structured primitive. So, something that I can request as a parameter to a function that I know that no matter where it goes after that, the function would never mutate. Because we were doing a lot of copying as part of passing things, into a function as parameter because there was never a guarantee, that the Callback that we're calling with that data, wouldn't alter that data. So you know, hey, I was writing in a library. I was getting a client call and I was returning data in a callback and I wouldn't have no guarantee that that wouldn't be altered. So I would have to do a copy every time because my big data tree shouldn't be at risk from a user of my library altering, it. So that was the problem that I was trying to solve but and that goes further because it's the same problem as returning multiple data points from a getter. Yeah, I can do that by combining it in a string, right? But now I have to parse that string again, right? So to me it was the structured primitive aspect that was relevant. and structured primitive when you think about it leads to consequences, So immutability is a consequence of structured primitives. All our Primitives are immutable, right? It leads to accurate triple equals because all our primitives work with accurate triple equals. And, you know, it doesn't matter whether it's deep or not, because we don't have any Peak restructured, growing tips, So those were sort of the core goals. to avoid. action at a distance. An easy way to avoid actions of this. That was the immediate need that I was trying to fill a Kentucky. -BT: To Robin's point, I just want to break in real quick. Sorry. Yeah. I To make sure. that ACE you’re watching your time box here. And if you feel like that if this discussion is going to come up later in the presentation, then let's definitely prefer to go through the presentation. So feel free to aggressively postpone the discussion items until we're through the slides. +BT: To Robin's point, I just want to break in real quick. Sorry. Yeah. I To make sure. that ACE you’re watching your time box here. And if you feel like that if this discussion is going to come up later in the presentation, then let's definitely prefer to go through the presentation. So feel free to aggressively postpone the discussion items until we're through the slides. ACE: Yeah, yeah, let's do that. Thanks BT. @@ -269,7 +254,7 @@ ACE: so, When? we talk about equality, these are kind of some of the attributes ACE: And then, do they have side effects? 
So you know if this if you have an equality operation that's going to trigger proxy hooks or getters then that's an equality that could potentially have side effects. -ACE: and then also does an equality operation preserve encapsulation, So if this equals operation was saying that these two things weren't equal not because they're two different objects and it's doing referential comparison. But if it's somehow, seeing that the private field is a 1 and then a 2 and for that reason, we're seeing these things are not equal then this equality operation doesn't preserve encapsulation. and then, is it terminating. So again, if it's going to trigger getters or proxy hooks or things like that, then potentially it's going to be trying to we know, it's comparing iterators. Like maybe it's going to compare So in this infinite. so it would never terminate. +ACE: and then also does an equality operation preserve encapsulation, So if this equals operation was saying that these two things weren't equal not because they're two different objects and it's doing referential comparison. But if it's somehow, seeing that the private field is a 1 and then a 2 and for that reason, we're seeing these things are not equal then this equality operation doesn't preserve encapsulation. and then, is it terminating. So again, if it's going to trigger getters or proxy hooks or things like that, then potentially it's going to be trying to we know, it's comparing iterators. Like maybe it's going to compare So in this infinite. so it would never terminate. ACE: and then, when we, look at what we would imagine, like a if you want, can I come in background two maps and sets, if you want your maps and set to be kind of well behaved? and what we would imagine what you'd want from the kind of the default way, that a map or a set would work, is that you would want an operation that it is consistent over time? that is symmetric doesn't have side preserves encapsulation and then terminates. then that's kind of why that when we being designing the quality in this proposal, we've really focused on an operation that has those attributes. And then like, in addition then overload the existing ones because those things aligned with those operators, So it kind of gives you those benefits. I mentioned earlier of working with the existing, Ecosystem. system uses those. operators. One way, I kind of like to think that this proposal is that there's kind of two ways you can go. You could either say because of how immutable these things are. Now that gives us this great opportunity. to also Define their equality in this way. All we're kind of defining something that has this consistent high quality. and then because of that, they need to be immutable to the (?) @@ -295,7 +280,7 @@ ACE: so, we're kind of in this place of week. bro where that while we were kind ACE: so, yeah, so that's the slides and now hopefully I've left time that we can actually chat about this proposal. -MLS: JSC has the same complexity concerns as the other browser engines. +MLS: JSC has the same complexity concerns as the other browser engines. BT: All right. Thank you. @@ -341,25 +326,19 @@ DE: WH, I completely agree with you here. The idea is that this design is becaus BT: I think thank you, DE. I want to point out that the time box is running relatively short here, based on the length of the queue is growing. I think, concision is important, but also I think the champions would probably get more value out of a breadth-first exploration of the topics then depth first. 
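To ground ACE's list of attributes, here is a deliberately naive deep equality — not anything from the proposal — showing how an equality that walks ordinary objects can run arbitrary user code via getters or proxy traps, and how it cannot see private state:

```js
function naiveDeepEqual(a, b) {
  if (a === b) return true;
  if (typeof a !== 'object' || typeof b !== 'object' || a === null || b === null) return false;
  const keys = Object.keys(a);
  if (keys.length !== Object.keys(b).length) return false;
  // Reading properties here can invoke getters or proxy traps – i.e. arbitrary user code.
  return keys.every(k => naiveDeepEqual(a[k], b[k]));
}

const noisy = {
  get x() {
    console.log('side effect during comparison!');
    return 1;
  },
};
naiveDeepEqual(noisy, { x: 1 }); // logs, then returns true – the comparison executed user code

class Box { #secret; constructor(s) { this.#secret = s; } }
naiveDeepEqual(new Box(1), new Box(2)); // true – private fields are invisible to this kind of equality
```

A `===` defined only over deeply immutable values sidesteps all of this: it cannot trigger side effects, cannot fail to terminate, and has nothing hidden to either leak or silently ignore.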
I am willing to help but I think probably the champions are in a better place to time box to individual topics. So yes, just to say, feel free to move the discussion along to another topic and I can advance the queue when you when you think it's appropriate to do. So and we can Circle back. Also if there's time. - - ?: Maybe Shu if you can make your point quickly because it's answering to them. - - - - SYG: Right there. is in response to DE saying, deep immutability and deep equality are linked I believe the link is one that there's an implication for one way which is if you really want to give you equality then yes you do want deep immutability but. I don't see the link to the other way. That's all. DE: I agree. -MAH: This was originally. motivated by something you lie. I said for equality there's really two different places where equality matters one is in mapping sets where we could imagine ways. of making that work with if slightly expanded API for mapping sets. However as ACE pointed at in presentation, that requires the use of the creator of the map and set it's added to to be able to adapt their implementation, so it wouldn't be compatible with existing. libraries. if we want triple equal to work that's a slightly different problem. And there it's more of the creator of the value that needs to do something to make triple equal work, not the consumer, and I again it goes back to to the goal here. What do we want? Do we want seamless ===, do we want seamless usage in maps and sets for these values. and, and I guess I just wanted to tell you that a concern, That's what we need to solve here is that we won't solve or not any (?) cancel that problem. which does require immutability maybe there is a smaller proposal. +MAH: This was originally. motivated by something you lie. I said for equality there's really two different places where equality matters one is in mapping sets where we could imagine ways. of making that work with if slightly expanded API for mapping sets. However as ACE pointed at in presentation, that requires the use of the creator of the map and set it's added to to be able to adapt their implementation, so it wouldn't be compatible with existing. libraries. if we want triple equal to work that's a slightly different problem. And there it's more of the creator of the value that needs to do something to make triple equal work, not the consumer, and I again it goes back to to the goal here. What do we want? Do we want seamless ===, do we want seamless usage in maps and sets for these values. and, and I guess I just wanted to tell you that a concern, That's what we need to solve here is that we won't solve or not any (?) cancel that problem. which does require immutability maybe there is a smaller proposal. ACE: Yeah. I know you're on the queue Robin. I would really like to come to Shane Shan has been on the queue for a really long time. RRD: Yeah, I know. Just wanted to add something here, === also matters for libraries as ACE made in the presentation and the analysis that is important to us. -SFC: Structured map keys are a really big use case that come up a lot in internationalization, especially dealing with things like language identifiers, and many other structured identifiers, that can serve as cache keys or data look up, various things like that comes up. You know, very, very frequently. In terms of triple equals like, it's better than having to call an equals function, but there are less-ergonomic workarounds. 
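To ground SFC's structured-map-key use case: today a composite key such as a language identifier plus region has to be flattened into a string before it can be used as a `Map` key, whereas a Tuple would compare by content. The `#[...]` lines below use the proposal's syntax as presented and do not run in current engines.

```js
// A stand-in for some expensive locale-keyed lookup.
function loadFormatterData(parts) {
  return { locale: parts.join('-') };
}

// Today: flatten the structured key into a string so Map lookups work.
const cache = new Map();
const key = ['de', 'CH', 'gregory'];
cache.set(JSON.stringify(key), loadFormatterData(key));
cache.get(JSON.stringify(['de', 'CH', 'gregory'])); // hit, but only via ad-hoc serialization

// With Records and Tuples (illustrative, proposal syntax):
// cache.set(#['de', 'CH', 'gregory'], loadFormatterData(key));
// cache.get(#['de', 'CH', 'gregory']); // hit, because equal tuples act as the same key
```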
But I think that Structured Map is a very big thing that this proposal uniquely solves. and, you know, the other one is, you know. is immutability. And in terms of the aspect of you know, you have data that you need to reach that you need To return out. And you don't want to have clients have to be able to mutate that because you want to be able to share that among many clients of a particular class. Like, I think is another aspect that this proposal uniquely solves. I definitely think that the direction that this proposal has gone since it's been at stage 2 I support the shape of the proposal at stage 2 but if we're, but if we must go back to the fundamentals, I think that these are sort of the biggest things. At least from my perspective that make me really the most excited about it. +SFC: Structured map keys are a really big use case that come up a lot in internationalization, especially dealing with things like language identifiers, and many other structured identifiers, that can serve as cache keys or data look up, various things like that comes up. You know, very, very frequently. In terms of triple equals like, it's better than having to call an equals function, but there are less-ergonomic workarounds. But I think that Structured Map is a very big thing that this proposal uniquely solves. and, you know, the other one is, you know. is immutability. And in terms of the aspect of you know, you have data that you need to reach that you need To return out. And you don't want to have clients have to be able to mutate that because you want to be able to share that among many clients of a particular class. Like, I think is another aspect that this proposal uniquely solves. I definitely think that the direction that this proposal has gone since it's been at stage 2 I support the shape of the proposal at stage 2 but if we're, but if we must go back to the fundamentals, I think that these are sort of the biggest things. At least from my perspective that make me really the most excited about it. BT: All right. thank you, Shane. In the interest of breadth, would you mind if we went to romulo First? @@ -379,7 +358,7 @@ MLS: Well, I will quickly respond for JSC value types is the main driver for a c SYG: That is also V8’s position. I'm asking is if comes if the ready implementers excluded, say that value types is the value, would that be hard blocks from you all? -MS: Well, if it's in the standard, we should implement it. We know we can do it. It's just a lot of work and we're not sure it's worth it. It is a question of ROI. +MS: Well, if it's in the standard, we should implement it. We know we can do it. It's just a lot of work and we're not sure it's worth it. It is a question of ROI. SYG: None of us as implementers are sure. It's going to pay off. Like, we're kind of skeptical that it is and you know, it's up to us I think to really give a go or no go on the stage 3 here. If value types are the direction. So I think we should should ask that question internally. and get some get a more definitive answer? @@ -403,11 +382,9 @@ Remaining queue items: 3 (SFC) (reply to DE) A Map that runs JS during lookup raises questions about the underlying map impl (HashMap vs BTreeMap) that value types can avoid - ### Conclusion/Resolution -* Implementers and champions to further discuss tradeoffs - +- Implementers and champions to further discuss tradeoffs ## Module and ModuleSource Constructors @@ -469,7 +446,7 @@ LCA. what do you suggest? Yeah. no more than 10 to 15 certainly. RPR: All right, We'll finish. 
this at quarter to the hour. so CP. This is please manage to discussion and figure out if you want to ask for stage 2. and and do. So at with at least two minutes remaining, thanks, so that so, can't guarantee I was giving guidance. that it's up to you to manage the remaining time box, you can tell us which items in the queue you want to address but if you're going to ask for stage 2, you will need to do it at least by 13:42. Is that clear? -CP: Yeah. Okay. So we talked about Source, strings, Some replies there. There's only three more new topics. The second argument discussion, we can talk about that. It’s one for you, one from Yulia as well about the modules or something extreme Exposé. So security +CP: Yeah. Okay. So we talked about Source, strings, Some replies there. There's only three more new topics. The second argument discussion, we can talk about that. It’s one for you, one from Yulia as well about the modules or something extreme Exposé. So security DE: Why don't you go ahead and call on somebody? @@ -489,9 +466,9 @@ CP: Correct. SYG? SYG: My topic now. Okay, what? So I want to better understand the motivation, I can suspect the motivation for the hooks is virtualization. I'm wondering if there's a multi-part question but the first part is do the hooks solve other use cases and problems other than virtualization. -CP: Yeah. fact that the me one is now because they shouldn't like to think about one colors. You want to load a humongous amount of code in one, one single file. you probably will be using module declarations or module expressions there and you have to connect the dots between the end. So the developer experience is the same as if they were separate modules that are linked it. They have dependencies between them. there is not a rewrite of the original source of the actual module, you get the actual source delivering one, one single file and then you have to have a way to connect during the linkage process these different instances that you already have. And the only way you have to do that is by having a hook that allow you to, for each of the instances that you might create, be able to resolve the source that instance will provide. +CP: Yeah. fact that the me one is now because they shouldn't like to think about one colors. You want to load a humongous amount of code in one, one single file. you probably will be using module declarations or module expressions there and you have to connect the dots between the end. So the developer experience is the same as if they were separate modules that are linked it. They have dependencies between them. there is not a rewrite of the original source of the actual module, you get the actual source delivering one, one single file and then you have to have a way to connect during the linkage process these different instances that you already have. And the only way you have to do that is by having a hook that allow you to, for each of the instances that you might create, be able to resolve the source that instance will provide. -SYG: so, so I guess that that then that helps them to the second part of my question is my concern. is ultimately, about performance but it's really about about so, I have performance concerns here because one, it's exposing something to user code, or letting user code hook into a place where it cannot hook into before. 
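A heavily hedged sketch of the per-instance hook CP describes for the single-file bundling case — the constructor names and the `importHook` signature here are illustrative, not the proposal's settled design, and nothing below runs today:

```js
// One file carries several module sources; an import hook lets the author
// wire instances together at link time without rewriting the sources.
const depSource = new ModuleSource(`export const x = 1;`);
const mainSource = new ModuleSource(`import { x } from "dep"; export const y = x + 1;`);

const dep = new Module(depSource);
const main = new Module(mainSource, {
  // Called during linking for each specifier the source imports.
  importHook(specifier) {
    if (specifier === 'dep') return dep;
    throw new Error(`unknown specifier: ${specifier}`);
  },
});

const ns = await import(main); // evaluates dep, then main
console.log(ns.y); // 2
```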
And that just, without looking really deeply at how this is implemented, not just in V8, but because this has such a host involvement in the web engine as well, I feel like calling arbitrary JS here, it's probably going to be a different performance characteristic than the default use case, where there is no hook. and that gives me concern is that if the use cases here are designed around DX, that puts us in a hard place of recommending something for people to use that could be drastically slower. And then that makes us not want to recommend that for people to use. I don't know if you have any thoughts on how we can reconcile that, like I don't want a situation where, you know, one of the advantages of bundling now is that, funnily enough, it sidesteps the ESM loading process sometimes and that makes it like much faster. If you re introduce the ESM loading process here and let the user code hook in that, you know, people might not use that at all, because it would just be much slower. But I'm also concerned about things like, I'm going to say, like, cosmetic Frameworks but I don't know exactly what I mean by that but it's something like 44. where the behavior they're trying to get with import hooks, like Auto appending suffixes or something like that, like, it's not worth the performance cost. and I feel like the hooks could incentivize this the wrong way in producing in folks. Making Frameworks that do these cosmetic effects that are might be nice for the X but drastically altered. Improve the performance. +SYG: so, so I guess that that then that helps them to the second part of my question is my concern. is ultimately, about performance but it's really about about so, I have performance concerns here because one, it's exposing something to user code, or letting user code hook into a place where it cannot hook into before. And that just, without looking really deeply at how this is implemented, not just in V8, but because this has such a host involvement in the web engine as well, I feel like calling arbitrary JS here, it's probably going to be a different performance characteristic than the default use case, where there is no hook. and that gives me concern is that if the use cases here are designed around DX, that puts us in a hard place of recommending something for people to use that could be drastically slower. And then that makes us not want to recommend that for people to use. I don't know if you have any thoughts on how we can reconcile that, like I don't want a situation where, you know, one of the advantages of bundling now is that, funnily enough, it sidesteps the ESM loading process sometimes and that makes it like much faster. If you re introduce the ESM loading process here and let the user code hook in that, you know, people might not use that at all, because it would just be much slower. But I'm also concerned about things like, I'm going to say, like, cosmetic Frameworks but I don't know exactly what I mean by that but it's something like 44. where the behavior they're trying to get with import hooks, like Auto appending suffixes or something like that, like, it's not worth the performance cost. and I feel like the hooks could incentivize this the wrong way in producing in folks. Making Frameworks that do these cosmetic effects that are might be nice for the X but drastically altered. Improve the performance. CP: Yeah, quick comments. it's on that, The first one is that. the chorus effect is based on the spec refactor from NRO. 
introduced with the memorization process and so on for specify the second comment is is more interesting. I believe, which is that you only you don't have a way – I think we talked about that before – We don't have a way to go back to this resolution through hooks, you ever go into the default Behavior. So if you enter one of the sub-trees of the module graph, if you enter into the default behavior of browsers, and engines in general, you have no way to have a hook in that subtree. So that eliminates the possibility of today having a particular module graph. That will in the future have extra step that goes into user land. Executing code on a hook. So it's only when you choose to use the hook that you get the hook to be evaluated and the performance of it is described by The refactor from NRO. So it's going to be called only one per specifier. but if you choose to have one dependency that have the default Behavior. Then at that point, point, that subtree is automatically out of the calls to the hook. @@ -511,11 +488,8 @@ USA: I'm sorry, folks, we're out of time again. So we'll have to move this discu CP: Yeah. just to test the water. I think there's some pushback from KG and SYG, we will follow up. - ![alt_text](images/image57.png "image_tooltip") - - ## String.dedent for Stage 3 Presenter: Justin Ridgewell (JRL) @@ -524,7 +498,7 @@ Presenter: Justin Ridgewell (JRL) - [slides](https://docs.google.com/presentation/d/1zq5uG-ckUxOlOdxP5X1lSfwKAzgyyTyonQgVjezQ5KE/) -JRL: All right. So this is String.dedent for stage 3, to recap essentially String.dedent allows you to write pretty source code and receive pretty output code. In this case here we have a Content block from lines, four through seven, it is indented inline with our source code. It looks and feels like this is an actual part of the source code. But when we output it through console.log, we don't have any of that leading indentation, it has been removed so that the output looks like it was written specifically for an output text while being pretty as source code. The only noticeable changes that we made at stage 2 was the decision to treat escape characters that cook into whitespace, we're no longer treating those as indentation characters for dedenting. So now instead of removing this `\x20`, which is an escaped space character, it will leave that character in the output. So our cooked output in this case would contain two spaces, the escaped Space character which cooked into a real space character followed by a real space space character. And this also affects the new lines. so if you have an escaped newline as we do in this case, that will not be treated as a literal line that we need to dedent, it'll just be treated as the same continuation of a line that is currently on. So the changes here, you can see the cooked changes here. it. Now it mirrors what the source text actually is. The raw output remains the same, it's only the cooked strings that have changed. Essentially, it's just instead of cooking and then de-denting, we dedent and then cook it. +JRL: All right. So this is String.dedent for stage 3, to recap essentially String.dedent allows you to write pretty source code and receive pretty output code. In this case here we have a Content block from lines, four through seven, it is indented inline with our source code. It looks and feels like this is an actual part of the source code. 
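For readers without the slides, a small usage sketch of what is being described (proposal-stage API; the output shown follows the semantics as presented):

```js
const name = 'world';
const html = String.dedent`
  <section>
    <h1>Hello ${name}</h1>
  </section>
`;
console.log(html);
// <section>
//   <h1>Hello world</h1>
// </section>
```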
But when we output it through console.log, we don't have any of that leading indentation, it has been removed so that the output looks like it was written specifically for an output text while being pretty as source code. The only noticeable changes that we made at stage 2 was the decision to treat escape characters that cook into whitespace, we're no longer treating those as indentation characters for dedenting. So now instead of removing this `\x20`, which is an escaped space character, it will leave that character in the output. So our cooked output in this case would contain two spaces, the escaped Space character which cooked into a real space character followed by a real space space character. And this also affects the new lines. so if you have an escaped newline as we do in this case, that will not be treated as a literal line that we need to dedent, it'll just be treated as the same continuation of a line that is currently on. So the changes here, you can see the cooked changes here. it. Now it mirrors what the source text actually is. The raw output remains the same, it's only the cooked strings that have changed. Essentially, it's just instead of cooking and then de-denting, we dedent and then cook it. JRL: so, if we to recap just what the the common indentation rules are without going through all the individual terminology for it. we find the most common leading indentation that matches on every single line that contains text. So, in this case, lines four through seven contain actual text. Line 3 is an empty line, It's ignored. Line 8 is a whitespace-only line. It is actually turned into an empty line in the output. So line 17. So the first non-whitespace character, which happens on line 4. Also the template expression, which occurs on line six stops the leading indentation at the dollar sign. The escape character, which we just discussed, on line 7, also stops leading indentation. and so the leading indentation is examples just four spaces and any of those three lines would have stopped the common indentation. and any other indentation excessive of that, so line 5 here, continues to have that extra whitespace.Template expressions are not dedented. So even though third here contains white space, that white space will never get removed because it's a part of the expression and not the literal static text of the template expression. And as I explained the cooking of an escape character never affects dedenting. So we have this this output @@ -538,15 +512,15 @@ JRL: So the issue here, as described in the issue thread. Babel, when you're doi KG: I don't think we should worry that much about Babel’s loose mode transform for ES6 features. That's going to see less and less usage over time and this API will exist forever. And, I don't know, not getting the thing in the cache because you were using a loose mode seems like the sort of thing that you should expect when you are using loose mode. It's like - you have opted into getting the wrong semantics. This is like an edge case that you should not be surprised to run into if you are still in the position of needing to use Babel for template strings. I'm just not that worried about it. -USA: Next up is SYG \ - \ -SYG: That's exactly it, I don't understand why we care about something that breaks semantics in the name. \ - \ +USA: Next up is SYG \ +\ +SYG: That's exactly it, I don't understand why we care about something that breaks semantics in the name. \ +\ JRL: because it's widely used. 
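To spell out why the caching semantics matter for tags like lit-html: such tags key their per-template work off the identity of the strings array, which native template literals keep stable (and frozen) across evaluations of the same call site. A rough sketch, with a toy tag standing in for real template preparation:

```js
const cache = new WeakMap();

function tag(strings, ...values) {
  let compiled = cache.get(strings);
  if (!compiled) {
    compiled = { parts: strings.raw.slice() }; // stand-in for expensive preparation
    cache.set(strings, compiled);
  }
  return compiled.parts.join('{}') + ' / ' + values.join(',');
}

function render(x) {
  // Native semantics: this call site passes the *same* frozen strings array every time,
  // so after the first call the cache always hits.
  return tag`value: ${x}!`;
}
render(1);
render(2); // cache hit

// A loose transpilation that does roughly tag(["value: ", "!"], x) builds a *new*,
// unfrozen array per call, so every render misses the cache and redoes the work.
```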
I mean, it's as we just stated in the last proposal, ESM is not used extensively in production, and so people are still transforming with Babel constantly. Unfortunately, loose mode is extremely popular. So if we do propose a breaking change here where we're not preserving the expected caching semantics means people will unexpectedly get the wrong result constantly. if they're using something like lit-html. They're going to be constantly wiping away the DOM tree and then reinitializing, it gets the brand-new template strings every time. If they're, I don't have good examples of other template tag template strings, at the moment, but essentially it's considerably more expensive for the user to do this. I don't think we should not preserve caching semantics, if we want to change the behavior here, I'd much rather we throw in this case. SYG: a stupid question. Why can't the lose transform Not be loose in this case. - \ +\ JRL: it would require the dev to understand what is actually happening. If it's transparently working for them, but it's working badly, they may not notice the issue, so they'll just get incorrect caching semantics, and they're going to be doing a lot more expensive work without understanding why it's happening. SYG: No, I mean, why can't the Babel loose transform use frozen arrays. @@ -572,30 +546,24 @@ SYG: why would caching trigger the extremely expensive case, wouldn't it just be JRL: Caching is the current behavior that Kevin just mentioned but it's the incorrect Behavior because it gives you this incorrect output, a surprising result, but at least the surprising result will show you that you have a bug. SYG: That's not that's not the extreme just clarifying that that incorrect result is like something that incorrect string is displayed. Not that it's extremely slow. and, apparently, \ - \ -JRL: yes, that is the current is KG is suggesting that we we change this so that it's not caching for immutable array. which means you will get the correct result but it'll be extremely expensive for you,because whatever tag you’re wrapping will perform its initialization logic for a new template strings array. Because it can’t find a cached result that’s already done that work. +\ +JRL: yes, that is the current is KG is suggesting that we we change this so that it's not caching for immutable array. which means you will get the correct result but it'll be extremely expensive for you,because whatever tag you’re wrapping will perform its initialization logic for a new template strings array. Because it can’t find a cached result that’s already done that work. SYG: I am so confused. Okay, I thought what was happening was? it what KG said was? ideally, he would prefer, we only freeze Frozen arrays, it's not frequently cache Frozen arrays. Yes. arrays. Yes. But He would prefer. After that, in order first is only cache Frozen. second is cache, everything. The current behavior and last is throw prevent. Yes. Yes. Is that correct? -KG: That is my preference ordering. \ - \ +KG: That is my preference ordering. \ +\ SYG: So that sounds like the compromises. cache everything which is the current behavior. JRL: I'm okay. with that. USA: Next up, we have two more items in the queue but note that you don't have much time left. - - ??: Okay. - - - - USA: next up, there's WH. - \ +\ WH: I agree that we don't want the outcome of caching disappearing and the user not being aware of caching disappearing. JRL: Okay. 
so I think that rules out option one here, where we’re not caching an unfrozen array, which I agree with. I think that is the incorrect choice to make here. @@ -614,13 +582,12 @@ KG: Yeah. I marginally preferred not throwing but I don't really have that stron JRL: Okay. I'm happy leaving it as is, where we have the caching behavior in all cases. -BSH: I would not be happy with that, that's why I'm on the queue, but it's really bad. Do you whirring liquid or vendors? \ - \ +BSH: I would not be happy with that, that's why I'm on the queue, but it's really bad. Do you whirring liquid or vendors? \ +\ USA: You were on the queue but we don't have time left. Unfortunately, JRL: okay, let's take this to the issue and I'll bring this back at the next meeting. So, this is issue #75 on the proposal repo. - ## Set Methods Presenter: Kevin Gibbons (KG) @@ -671,8 +638,8 @@ SYG: I see given that, then I still fully support stage 3 because that's explici KG: Sure. -USA: Next up, we have WH. \ - \ +USA: Next up, we have WH. \ +\ WH: For intersection, you said that you construct the result first and then you sort it according to insertion order in the receiver. What happens if somebody has deleted some result entries from the receiver by the time you do the sort? KG: Excellent question. In this case you keep the relative order but move them to the end. So the assumption is that you're doing a stable sort that treats keys missing in the receiver as basically mapping to infinity. @@ -681,55 +648,54 @@ WH: Okay, thank you. USA: YSV is next in the queue. -YSV: Yeah, I was just checking our notes. We did notice the question for implementers but didn't have a chance to get to it. So, I need to double check that with my team, but beyond that, for the shape of the proposal, as it is now, we support this. \ - \ +YSV: Yeah, I was just checking our notes. We did notice the question for implementers but didn't have a chance to get to it. So, I need to double check that with my team, but beyond that, for the shape of the proposal, as it is now, we support this. \ +\ KG: Great. Okay. Well, I'd like to formally ask for stage 3 with the understanding that there is this open question about whether the specified semantics are actually implementable, that you will only find out when implementations go to implement. - \ +\ USA: That sounds like stage 3. - ### Conclusion/Resolution -* Stage 3 - -* During stage 3, need to ascertain if resulting order of intersection as currently specced is possible by implementers, and for champions to keep work in sync with each other on this matter +- Stage 3 +- During stage 3, need to ascertain if resulting order of intersection as currently specced is possible by implementers, and for champions to keep work in sync with each other on this matter ## String.isWellFormed \ -Presenter: Michael Ficarra (MF) \ - \ + +Presenter: Michael Ficarra (MF) \ +\ + - [proposal](https://github.com/tc39/proposal-is-usv-string) - [slides](https://docs.google.com/presentation/d/1YXHuZ46ZwzR2zZs1V2FdT1oEGH13b6E6bpfX0w9i1EA) \ - MF: Okay. so this is the well-formed Unicode strings proposal. I say update in the slide title, but this is looking for stage 3. This is the whole proposal. So as a reminder, the goal was to determine if a string is well formed. This had a lot of use cases, Everywhere you need to interact with anything that will have alternatives string encoding or you know requires well-formed strings. All sorts of things like file system interfaces, network interfaces, etc. 
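Circling back to KG's answer about intersection ordering a little earlier: as specced at that time, the result follows insertion order in the receiver, with keys deleted from the receiver mid-operation stably sorted to the end. A rough userland approximation of the ordering, ignoring the spec's internal set-record details:

```js
function intersectionLikeReceiverOrder(receiver, other) {
  // Result ordered by the receiver's insertion order.
  return new Set([...receiver].filter(v => other.has(v)));
}

const receiver = new Set(['a', 'b', 'c', 'd']);
const other = new Set(['d', 'b', 'x']);
console.log(intersectionLikeReceiverOrder(receiver, other)); // Set(2) { 'b', 'd' } – not { 'd', 'b' }
```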
So the proposal, as presented last time, was just the first method there: isWellFormed. And during that presentation, I presented as well an open PR to add toWellFormed, which everyone seems to like so we incorporated that as well into the proposal. So toWellFormed takes a string that is not well-formed. Oh and to remind people what a well-formed string is: a well-formed string does not have lone surrogates, including out of order surrogates. So all surrogate pairs are in the correct order. So, going back to what toWellFormed does. It takes a string and, if it is not well-formed, replaces any of those lone or out of order surrogates with a replacement character U+FFFD. This is a very common operation. This is the same operation used within the HTML spec and very many other places This is the recommended character that is defined for this purpose. So that's the whole proposal. It has had stage two reviewers. I think it was JHD and JRL. I have to look into the – \ - \ -JRL: Yep, I approved. \ - \ +\ +JRL: Yep, I approved. \ +\ MF: Okay. so that's the whole proposal and I would like stage 3. \ - \ +\ USA: Is JHD on the queue? JHD: I want to say I strongly support it. I've already implemented polyfills, and I wrote the PR for test262 tests, so I'm extremely confident in the spec text. USA: Next up we have DLM. \ - \ -DLM: Yes. SpiderMonkey team. Also strongly support society makes a lot of sense to be included in the action. \ - \ -USA: Next, we have KG, who says he also supports the proposal. Right. \ - \ -MF: All right. Well, thank you everyone for the explicit support and it sounds like we have no objections. \ - \ +\ +DLM: Yes. SpiderMonkey team. Also strongly support society makes a lot of sense to be included in the action. \ +\ +USA: Next, we have KG, who says he also supports the proposal. Right. \ +\ +MF: All right. Well, thank you everyone for the explicit support and it sounds like we have no objections. \ +\ USA: Yeah. Congratulations on stage 3. MF: All right. Thank you. \ - \ -### Conclusion/Resolution +\ -* Stage 3 +### Conclusion/Resolution +- Stage 3 ## Import Reflection @@ -739,7 +705,7 @@ Presenter: Luca Casonato (LCA) - [slides](https://docs.google.com/presentation/d/1TjS7tXSffAUsSwPEN6AWE4a4Ax4-4ssQKvEitJdoxJo/) -LCA: Okay. Yeah. Yeah. yeah. So I'm LCA I'm going to be giving update on the import reflection proposal that's currently at stage 2. GB may join us here. Yes. So what is import reflection? So import reflection is a new syntax or a new feature for JavaScript that we propose to add to JavaScript. that allows you to import a reified representation of a compiled source of a module, if a host provides such representation. This allows you to import, for example, with webassembly modules, you could import the underlying compiled but unlinked and uninstantiated webassembly module to be instantiated later, for JavaScript you could import a module Source object like was discussed in the previous presentation with CP. This is also supported in dynamic import with through an options bag on the dynamic import with a currently proposed a ‘reflect’ key that takes a module string. But we may be open to changing that if there's feedback. the primary motivation here is to allow importing webassembly modules into ecmascript without actually instantiating the modules as part of the module graph. So the current best approach we have is to not use the ESM module system at all and to instead instead fetch webassembly, for example, with fetch(). 
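Looking back at MF's two string methods for a moment, a quick sketch of the intended behaviour (proposal-stage at the time of this meeting):

```js
const ok = 'a😀b';          // 😀 is a correctly paired surrogate pair
const broken = 'a\uD800b';  // lone high surrogate

console.log(ok.isWellFormed());        // true
console.log(broken.isWellFormed());    // false
console.log(broken.toWellFormed());    // 'a\uFFFDb' – lone surrogate replaced with U+FFFD
console.log(ok.toWellFormed() === ok); // true – already well formed, contents unchanged
```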
For example, with Node, you'd have to read it from disk. Or in Dino you'd have to do the same thing. So this is not very portable. It requires special handling on a bunch of different platforms, not all platforms can fetch all protocols. So some platforms need special handling there, it's not the library statically analyzable As you can see, this is a expression and it's a relatively complicated one at that. It can be broken out into into many different. pieces like this fetch could be assigned to a binding, the URL could be assigned to a binding. It's difficult for bundlers to statically analyze this to be able to move the wasm around for example. Especially with this. special casing that some platforms required because they don't support fetch() makes this much harder. It essentially means that a lot of tooling has to hardcode the output from a bunch of wasm tooling in their parsers to be able to understand drill to statically Analyze This. For end users this is very easy to get wrong. If you forget the new URL, for example, with the hooting provider URL, your portability is gone. So this is using new URL() but the browsers, for example, support, import.meta data resolved now. I don't think that's tiptoe. Oh that's a different thing that needs to be. Penalized if he's a Fed. Trapper that could break the whole thing. All in all, not very portable. +LCA: Okay. Yeah. So I'm LCA and I'm going to be giving an update on the import reflection proposal, which is currently at stage 2; GB may join us here. So what is import reflection? Import reflection is a new piece of syntax that we propose to add to JavaScript, which allows you to import a reified representation of the compiled source of a module, if the host provides such a representation. For WebAssembly modules, for example, you could import the underlying compiled but unlinked and uninstantiated WebAssembly module, to be instantiated later; for JavaScript you could import a ModuleSource object like the one discussed in the previous presentation with CP. This is also supported in dynamic import through an options bag, with a currently proposed ‘reflect’ key that takes a module string — but we may be open to changing that if there's feedback. The primary motivation here is to allow importing WebAssembly modules into ECMAScript without actually instantiating them as part of the module graph. The current best approach we have is to not use the ESM module system at all and to instead fetch the WebAssembly, for example with fetch(). With Node you'd have to read it from disk, and in Deno you'd have to do the same thing. So this is not very portable: it requires special handling on a bunch of different platforms, because not all platforms can fetch all protocols. It's also not statically analyzable. As you can see, this is an expression, and a relatively complicated one at that; it can be broken out into many different pieces — the fetch could be assigned to a binding, the URL could be assigned to a binding — so it's difficult for bundlers to statically analyze this and, for example, move the wasm around. The special casing that some platforms require because they don't support fetch() makes this much harder.
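The status-quo pattern being criticized looks roughly like this (the `./lib.wasm` path is illustrative):

```js
// Browser-style: fetch the bytes relative to the current module, then compile.
const response = await fetch(new URL('./lib.wasm', import.meta.url));
const wasmModule = await WebAssembly.compileStreaming(response);

// Node-style: same intent, different code path – read from disk instead.
// import { readFile } from 'node:fs/promises';
// const wasmModule = await WebAssembly.compile(await readFile(new URL('./lib.wasm', import.meta.url)));
```

Each platform needs its own variant, which is what makes the pattern hard for bundlers to analyze and easy for users to get wrong.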
It essentially means that a lot of tooling has to hardcode the output from a bunch of wasm tooling in their parsers to be able to understand drill to statically Analyze This. For end users this is very easy to get wrong. If you forget the new URL, for example, with the hooting provider URL, your portability is gone. So this is using new URL() but the browsers, for example, support, import.meta data resolved now. I don't think that's tiptoe. Oh that's a different thing that needs to be. Penalized if he's a Fed. Trapper that could break the whole thing. All in all, not very portable. LCA: One solution that people came up here is to do the WASM and JS integration directly which essentially allows you to import WebAssembly.Module instances. So instantiated and linked webassembly webassembly modules. But this doesn't solve all use cases. First of all, a lot of webassembly imports. They are specifiers with what we would call them. So if you want to import them portably from within a library, you would need to specify a global import map. So your end users if they import a Imports. The webassembly. library which assembly module that uses raspberries, they have to specify an import map to remap this bear specifiers. Also, if you want to do multiple instantiation, which is relatively common for webassembly because there's a lot of webassembly out there. That is a single essentially single pass, it's like CLI tooling that's very difficult to do here. You can't send webassembly modules between workers if you can't get access to the wasm module Object, which you can't do with the web and GSM articulation. And if you want to express it, if you want to pass memory to the webassembly modules, you cannot do that Using the welding ASM educational. Either you need to do manual. Instantiation are which you need the webassembly that module object for so, with our proposal, this would be solved by allowing you to import the WebAssembly.Module instance directly using static syntax the module, keyword here, would differentiate this import from a regular static import. This is now very easily statically analyzable, tooling can the with a very simple. for. Well, not that simple, But with a single pass parts of the JavaScript tasty can now find all references to webassembly Imports. It can now move them around which is very ergonomic for users. Makes it the tooling much nicer, and it has security benefits as well. Namely that we don't need to do a dynamic fish anymore, which is can make it the CSP policies simpler for importing, webassembly much better. If you have a strict CSP policy, I'm not going to go into too much detail here. you have questions about that. Please, do not like you and I can elaborate elaborate @@ -748,7 +714,7 @@ LCA: Another motivation is allowing JS module reflection, so with the previous p LCA: To clarify the scope of the proposal, the proposal currently adds the module keyword on the import declaration syntax and the reflect option in the dynamic import options bag, and the spec mechanisms to return the right. the module Source object. the reflected representation of the imported modules. when either of these is specified. It does not specifically add the module or Module Constructors. Nor does it add any WebAssembly-specific integration to ECMA-262, those are all host-defined behaviors. Well, the module and module which ones aren't but those are those can be done in a separate proposal. So this proposal by itself, actually does essentially nothing it adds. 
a keyword and an option in an important in this options bag and the actual wasm integration needs to happen in the wasm ESM integration specification. The spectacular spec text for this is now ready. It's built on top of NRO’s excellent ECMA-262 module loading refactor, which changed loading to only use a single post-import hook. I don't know what the pure number is for that view. Yeah. I'm fine. it, it's not quite emerged yet, but I think it's getting pretty close. The, you can find this back here. There's a couple. things I wanted to bring up namely that we're not adding any new host hooks Instead, the module Source object. So that's the reflected representation of a module is a new internal slot on the module record as this can be populated by either ECMA-262, for example, for JavaScript, for built in modules to this could be set to the module source. or it could be set by hosts for things like the webassembly integration. There are also module records that do not have a source representation, for example, JSON, or modules that are host-internal, for example, in Node, the fs module, These, if you would try to import them would throw a TypeError because there is no object in the module Source object internal spot. This is useful for the modules that don't have a representation ever, and it also works for modules that do not have representation yet without breaking existing code if we add a representation in the future, so you could imagine that JSON modules may have a module Source representation in the future. and yeah, the default is to throw unless there's a specific host implementation because this module Source object, slot is empty by default. The reflective Imports are not recursive, so importing a module source does not actually attempt to load any of its dependencies. and as such the ordering of loading and also the order of evaluation can be slightly different between import or from between a regular import statement and an import statement with a reflective import statement. this sort of illustrates that if you import a module with the view of reflective import of a module, this does not load any of the dependencies so you can see here that initially modulate is loaded but what do a Imports? Module B? This is not directly loaded yet because reflective Imports are not recursive, same for B and when you put C see Imports of G so G is recursively loaded by C. Then we import A, which itself was already loaded by the reflective import here, but now we're actually recursively importing it. So we also need to load the dependency of a which is e. so you see he is now loaded refer to with A but A is not loaded again because he was already loaded by the in this one, puts him him in here. and then, D is loaded. and A is loaded recursively by D, but you can see that For example, f is never loaded because we never recursively load to be. if there was await. import be at the bottom here, then F would be loaded. LCA: The idempotence. of imports are unchanged. so, importing. if let me see how to phrase this correctly, if two modules, or if you import a module from a specifier and that specifier resolves to the same module instance internally for two different specifiers. Is that guarantee is also preserved for reflective modules over the module Source object. If you import two modules where that do not resolve, that you're not resolved to the same module instance, they may also not return the same module Source. 
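A sketch of the two proposed forms and the shallow, non-recursive loading just walked through — proposal-stage syntax, with both the `module` keyword and the `reflect` option still open to bikeshedding:

```js
// Static form: asks for the reflected, uninstantiated module object.
// Only './a.js' itself is fetched – its dependencies are not loaded and nothing is evaluated.
import module a from './a.js';

// Dynamic form: the options bag carries the proposed `reflect` key.
const b = await import('./b.js', { reflect: 'module' });

// Opting back in to a normal import later loads the full dependency graph and evaluates it.
const ns = await import('./a.js');
```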
It is possible that for two objects, which resolved to two separate module instances, they do return the same source And one example of this is this #1 here, which is something you can do the browser, you can add a hash to a specifier to create a new specifier and then it instance of the module, but it won't actually load the source again. so here the module Source may be the same. across two different specifies, even though the instance is not the same, But all cases, where we currently guarantee that the instance is the same, the source is also the same. \ - \ +\ LCA: There’s a bunch of layering happening with other proposals. one with compartments layer 0. So that's the for the like data integration returning, the module Source instance, or returning module sources. When you import when you reflectively import JavaScript and being able to instantiate those. using the module Constructor, a layering with the module Expressions because module Expressions, also return module instances from compartments layer 0, Module Harmony layer 0, it's been renamed. which is kind of interesting. It essentially means that a dynamic import. it reflective Dynamic, import of a module expression is equal to the module expression for getting the source of the module expression, which answer makes sense. There's some layering with lazy loading, the main difference being that lazy loading is deep, whereas module reflection is not deep. It's shallow. So there's no recursive load going on. This proposal is not really meant for lazy loading because it does not do recursive load. GB is going to go into this a little bit more in the next presentation. presentation. There's some layering with the export default from proposal. which has been inactive for a little bit but this proposal add Syntax for for exporting the default. From a module. Sort of mirroring from the input statement. current If This Were to land. there would also be an argument made that there should be a reflective default exports as well. Yeah, that's not going to be the case. So this only touches imports and exports. there's the obvious layering with the wasm ESM integration, which I covered earlier. And then, layering with web components. I don't know, is GB here now? Yes, you are. an awesome going to talk about that for a bit. GB: I can mention it briefly. In webassembly, we require all webassembly instantiations to be done explicitly through the imperative webassembly instantiation API, and this is kind of a standard technique where you provide the direct import bindings for the webassembly module. And in this way, webassembly modules, the way that they're used it's often a little bit more like passing function parameters, than exactly aligning with the host import resolution model. So maybe something a little bit more like module expression bundles as the model of WebAssembly linkage. And the webassembly component, module Builds on some of these ideas. I do, actually hope to, in a future meeting, give a little bit more of an in-depth introduction to where some of that work is going, but to briefly just discuss that in the in the scope of the reflection proposal, webassembly components want to be able to get access access to uninstantiated webassembly.module objects, so that they can perform their own linking just like you would for a module expression in a bundle or module declaration in a bundle. And so, by having this integrated into the module system, we would be able to achieve that goal. 
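GB's "imports as function parameters" point relies on the existing imperative WebAssembly JS API: once you have an uninstantiated `WebAssembly.Module` (for example via a reflective import), you can instantiate it repeatedly with caller-supplied imports. A sketch, assuming a module that imports `env.mem`:

```js
async function makeInstance(wasmModule) {
  const memory = new WebAssembly.Memory({ initial: 1 });
  const instance = await WebAssembly.instantiate(wasmModule, {
    env: { mem: memory }, // the caller supplies the imports, like function arguments
  });
  return { exports: instance.exports, memory };
}

// Multiple instantiations of one compiled module, each with its own memory:
// const a = await makeInstance(wasmModule);
// const b = await makeInstance(wasmModule);
```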
when you want to integrate directly interest resolution, which is where we want to get eventually. And that's that's kind Of a long road to get there. But that's a very brief discussion on for now and feel free to bring up anything that. if I haven't. explained it, that clearly. Yeah. That's all are mentioned. Excellent. @@ -769,7 +735,7 @@ NRO: Just to clarify what LCA said earlier, I'm one of the people who are pushin LCA: Yeah, sorry, I should have clarified that it did not be linked but the module resolution Hook is used in that module is the one from you. Oh, stroke. So the modules that would get loaded by that module after load cannot be adjusted. through a customer. of different instruments. Next topic is from YSV. -YSV: it was covered a bit in chat and it's feedback. I brought up before but I'm still unconvinced about import module as a syntax because it you know, Developers. what have they been doing with the import keyword this whole time, other than importing modules. So I think it'll be confusing and I think we want to choose a Dresses. +YSV: it was covered a bit in chat and it's feedback. I brought up before but I'm still unconvinced about import module as a syntax because it you know, Developers. what have they been doing with the import keyword this whole time, other than importing modules. So I think it'll be confusing and I think we want to choose a Dresses. LCA: yeah, that is a valid point, I think. this is probably part of a larger discussion around whether module instances should be called modules or module instances. Because I agree that it is confusing. @@ -789,25 +755,25 @@ DE: I mean, like, I take it. BRN might have been referring to the inverse proble RBN: the specific concern. We had was around things like some of the performance related things that some bundlers like esbuild due to hoist exports, or to hoist Imports that are used internally to which avoids, some of the lookups that have to be performed. So this is one of those modules then becomes an actual module block or module expression, that hoisting becomes unusable -DE: My impression is that this proposal should be very readily statically analyzable. So I have a hard time understanding what the concern is. \ - \ +DE: My impression is that this proposal should be very readily statically analyzable. So I have a hard time understanding what the concern is. \ +\ RBN: just a measure of the complexity that would add to bundlers of bundlers. or a significant percentage of bundlers were signed on to the signed up for this complexity and are aware of it and are fine with it. Then I don't have any specific concerns. It's just to make sure that that's being that away. This is being raised with it that Community as well. \ - \ +\ LCA: So, for time, sake, I think, let's take this to an issue on the way home. \ - \ +\ USA: yeah, :LCA and GB, you have less than one minute. What would you like to do? do? What? would you like to do? \ - \ -LCA: I'd love to hear from. the and about the incident resources of thing. \ - \ +\ +LCA: I'd love to hear from. the and about the incident resources of thing. \ +\ USA: DE could you be really quick? \ - \ +\ DE: Yeah. so I feel strongly that module expressions and reflective modules are fundamentally getting at the same kind of run-time construct. This could be a module instance, or it could be a module source with like a base path attached to it for a relative. module specification. 
Ultimately, these are equivalently expressive, because if we have a module constructor and a source getter(?), then they can be expressed in terms of one another. So I'm skeptical of the idea of using different runtime representations for the two of them. If we want to hedge our bets against the wasm integration never really happening, I think we could say that for the wasm integration the module just can't be imported, and the only thing you could do is get its source. So yeah, I'd like to discuss this more with GB in the committee, but that's my feeling there.

USA: Yeah, unfortunately we're on time. But would you like for us to capture the remaining queue items(?)?

LCA: Yeah, that'd be great.

NRO: I just have an answer to DE. ModuleSource and Module are not equivalent in expressiveness, because you cannot easily build a Module from a ModuleSource, unless you pass a hook in a way that matches the behavior of the host. So you can easily get the ModuleSource from the Module, but the other way is harder.

USA: All right. Yeah, thanks.

DE: Oh, that's a further argument for going with Module for both of these.

USA: Right. I suppose you could continue this offline. Next up, we have GB and YSV for deferred module evaluation.

LCA: If I may just ask for one more item while we're still on import reflection: if anyone has any specific concerns about reflection, or about where the current specification stands for the stage 3 review, it would help if they could bring those points up now, so that we don't hit them in the next meeting. Does anyone want to speak up?

JRL: Would you like me to do that in Matrix chat, or on the issue tracker?

GB: You can take a minute now, if you like, Justin.

DE: So the history here is that I was working with other people on a pro…

JRL: It's going to be blocked either way. We're in a deadlock here.

DE: So I hope those people who objected previously could engage in this discussion.

GB: Thanks, Justin. It helps to hear that. Hopefully we can continue that discussion offline and make sure we've got everyone on the same page at this point, at least in terms of understanding where the question marks are that we need to work through.

CP: It might be interesting to get more people into the module harmony calls, which are biweekly, organized by Chu and YSV. So if you have any interest or any opinions on this, please join us.

GB: Yeah, that would certainly be recommended, because there are a lot of cross-cutting concerns that we're able to bring up in those meetings. So that would be a great venue for the discussion.
JRL: I can start attending it. Thank you.

![alt_text](images/image58.png "image_tooltip")

### Conclusion/Resolution

- List
- of
- things

## Deferred Module Evaluation

Presenter: Guy Bedford (GB)

- [slides](https://docs.google.com/presentation/d/10cn4SfVY20no6JmtWL72JLD6rmJ-dnafIfh8XmmC7mA)

GB: All right: deferred module evaluation. I'm picking this up from earlier today; YSV originally presented this a little while ago, and this is a simplification of YSV's original proposal, to try and get something that we can find agreement on in committee. So, just to reiterate the use case being solved here: module performance is important, and it's something that we're tackling in a number of ways. With all of this module work going on, we mustn't forget that modules' performance is the most important thing at the end of the day for users, and so we must keep focusing on these kinds of use cases. So what is the problem this proposal is looking at? We're looking at large code bases, where there's a lot of module code executing on initialization of a large bundle, and at the situation after you've already applied all the loading optimizations: preloading, waterfall optimizations, bundling where necessary (bundling continues to be an important optimization in module workflows). Once you've done all these things and you've got this optimized module graph, there still remains the synchronous, blocking top-level execution cost of the initialization. So that's the problem that we're looking at.
And we want to try and solve this without being forced to 'async-ify' the entire code base.

GB: So YSV earlier was able to bring some numbers to this from some Firefox use cases. I don't know the exact details of these tests, but in these benchmarks, looking at the performance characteristics, 45% of the time is spent on loading and parsing and 54% of the time is spent just executing that top-level graph initialization, which has to be done synchronously, which has to block the event loop when it happens, and which can't be moved off-thread, that sort of thing. If you have a large graph and you hit this problem, there's not a lot you can do. You basically have one option, which is to start splitting things out into dynamic imports. So you find functions that call things that aren't needed during the initial execution, things that are only used during the running lifetime of the application, maybe a few seconds after the initial page load, and you 'asyncify' those functions so that they lazily import the dependency they were trying to execute on that conditional branch. And when you do that, you then need to 'asyncify' everything up through the parent function stack. So the question is: is that really what we want to be encouraging people to do as their only option? And even further, dynamic import doesn't actually solve the entire problem, because dynamic import still needs network optimization. Just by adding a dynamic import you've now actually created a performance problem, because you now have a waterfall problem which you then need to separately preload, and there's a static analysis difficulty for bundlers if you've got highly dynamic imports. So it's not necessarily the best solution, but it's the only solution. The idea, then, is to have a primitive that can improve that startup performance without sacrificing the API, so you can still write nice modular-looking code. What was specifically brought up last time, around YSV's original proposal, was some of the magic around how bindings and evaluation were being handled: it was actually becoming a new kind of binding primitive, and the original proposal was that you could have this kind of lazy initialization. In this proposal we switch between the with syntax and the reflective syntax we've been using, interchangeably, because we haven't settled on a direction for this proposal, so please don't judge that too harshly. With the original API that YSV proposed you could have the full grammar of normal ES module imports, and then you would add this lazy initialization, which on access would execute those bindings. What we're proposing is a simplification where you only get these lazy namespace objects, or deferred module namespace objects. In the example, we've got a method that's only used rarely, on a dynamic path after the initial initialization of the app: you can turn that import into a lazy import and then access the exports as you would on any namespace. But that access becomes a getter, and the getter becomes the evaluation part. So the import is loaded all the way up; the dependencies are loaded.
All the network stuff is done and it's ready to execute synchronously, but that synchronous execution only happens when you hit the getter. On the deferred module namespace, any getter access will execute the entire module, and it will only execute once; you're then able to call the function. So you're getting it deferred as needed, and the initial initialization of the application can cut out that execution work entirely. On your module graph, what you're getting when you add this is almost a new kind of top-level separation, where the lazy graph is a new sort of top-level graph, as if you were top-level dynamic-importing it. If graphs overlap you can race execution, just like you can with normal modules, and it can work recursively as well, just like recursive dynamic import. Then there are some small issues, for example with top-level await: we can't get everything synchronously ready, because the execution might actually be asynchronous itself. The way we would deal with that is either some kind of special handling where it's not allowed entirely, so that when you do this lazy or deferred import it would throw right away; or, alternatively, we could eagerly(?) handle the asynchronous evaluation down to the direct synchronous subgraph, which is a well-defined concept, and that remaining synchronous subgraph could then be evaluated as the deferred evaluation of the deferred module namespace. And then, just to go over the benefits: I'm not sure how much of the slide I'm supposed to go over. The other thing that was brought up was the potential for some kind of stack injection, or error injection. Because this execution is being done as that getter on the deferred module, it's a new way of running that top-level evaluation, as opposed to dynamic import or static imports being the only ways to do that today. So if, for example, there's an error, that error is potentially going to be cached in the module graph, if we stick with error caching, and that means the stack of that error would come from the place where it was evaluated. So there's some discussion around the fact that this could expose the place where that deferred evaluation is happening, because it belongs to the execution stack and also becomes the calling position, whereas dynamic import would do this asynchronously; here it becomes part of the synchronous evaluation point. So top-level await and this kind of error injection are the main hairy things to work through, but apart from that it seems to be relatively well defined. What we were looking to find out today is what people think about this reframing of the proposal. We could still extend the proposal to some of the more magic things with bindings in the future, but if we can just get agreement on this kind of primitive, and agreement on solving this use case, and discuss it while we're discussing all the module things, then we can make sure that we're handling all the use cases that we need to. So, are there any questions?
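To make the shape GB describes concrete, here is a minimal sketch. The `defer` keyword spelling is purely illustrative (GB notes above that the syntax, a with form versus a reflective form, is not settled), and the module name and its exports are invented.

```js
// Illustrative only: the keyword and exact form are not settled.
import defer * as telemetry from "./telemetry.js";
// At this point "./telemetry.js" and its dependencies have been fetched,
// parsed, and linked, but their top-level code has not run yet.

const button = document.querySelector("#cta");
button.addEventListener("click", () => {
  // The first property access on the deferred namespace runs the module's
  // top-level code synchronously, exactly once; later accesses are
  // ordinary reads. Note the namespace prefix at the call site.
  telemetry.trackClick("cta");
});
```

For contrast, a rough sketch of the only workaround available today, the dynamic-import pattern GB describes, which pushes asynchrony into the call site and everything above it (same invented module):

```js
const button = document.querySelector("#cta");
button.addEventListener("click", async () => {
  // Evaluation is deferred, but the handler is now async, the fetch is a
  // separate scheduling/preload concern, and bundlers see a dynamic edge.
  const telemetry = await import("./telemetry.js");
  telemetry.trackClick("cta");
});
```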
YSV: Yeah, I just want to emphasize that the important change here is that we are…

USA: Before that, RPR has a reply to that.

RPR: I think this changes the proposal. I understand why it's been made and am generally in favor. Anything that pushes this proposal forwards is amazing. The main thing is that it's a small loss of ergonomics, because one of the principles of this proposal is that it's something where you can make a surgical change to the performance characteristics of loading. Ideally it's an optimization that you sprinkle in, and the whole reason is that this is much easier than going through and making all your code's call stack async, which is a very non-ergonomic and rippling workaround. Whereas with this change (the loss of named imports), developers will have more work to do compared to just adding a keyword, because you'll then have to go through all of the usage sites and prefix them with a namespace (`ns.identifier`). So it slightly moves the proposal away from that goal. But it's definitely not a showstopper.

YSV: Yeah, I just want to respond to that, Rob. So, as some of you know, I've actually built this loader for Firefox, to load our client code. Previously we had a handwritten custom loader for JavaScript that behaved completely differently from the ESM system; you can think of it as similar to the CommonJS system, before they realized that in order for it to be specified they could not do synchronous loading. Our old version continued to do synchronous loading, and as we moved to using ESM we actually ended up, for other reasons, being forced to introduce a lazy namespace. So this design now actually reflects the reality in the Firefox codebase. I agree that ideally we wouldn't have to do this, but I also respect that we currently have an invariant in the language where module bindings are always what you import and are never replaced in the way that I was suggesting, so I do accept that. And I will say our developers did complain when we made this change. It wasn't ideal; they didn't like the fact that they needed to use a namespace. But with linting, we were able to get this across pretty easily.

USA: Next up it’s YSV again.

JHD: I think "consensus" may be too strong. I think it's too early to know if we…

GB: We will certainly bring it back with a longer presentation in one of the committee meetings. But thanks, everyone.

DE: I wanted to briefly clarify, on the discussion about top-level await, that I guess that means preflighting it: fetching all the dependencies and running the top-level awaits eagerly.
In the Bloomberg context. I think the the way that some of our applications that would incorporate this work, is that they would use some static analysis to make it cheaper to fetch some of these things or understand which subgraphs have such top-level awaits It at runtime. It's only about evaluating less but also fetching in parsing less is also important. In the web we don't really have a way past fetching less but in tools that optimize things you know, in build systems that use these semantics, our evaluation has been that you can actually parse less at runtime if you calculate a little bit of metadata about how things would fit together. So anyway, that's all just to say. I think this is a great proposal that the changes that you've made now are really good and I support you now I'm optimistic about this eventually moving to stage two. \ +\ +USA: Great. We are on time. I hope you got the conclusion that you needed. \ +\ -* List +### Conclusion/Resolution -* of +- List -* things +- of +- things -## An introduction to the LibJS JavaScript engine +## An introduction to the LibJS JavaScript engine Presenter: Linus Groh (LGH) @@ -939,7 +905,7 @@ LGH: As I mentioned already, we have SerenityOS, which is strongly related to th LGH: All right, so let's quickly address the name because it’s a bit ambiguous, LibJS, JavaScript library, but it's very simple to explain it. So because everything in that SerenityOS operating system is made from scratch we don't need to invent names that stand out or are catchy, like products will try to find a good name for that. So everything has a very descriptive name. We have applications like browser and calculator, and file manager and PDF viewer, and so on. And they are called that both in the code internally and in the user interface. The same is true for libraries. So we have LibGL, LibGfx, LibDNS, LibHTML, [some more listed]. You can guess from all these names, what they do. And all sort of encapsulate one thing. Then we have around 60 of them, and in there is everything you would you need for an operating system. And so very naturally “LibJS” was chosen. It was no real discussion. That only happened later on when we decided to pull out the web engine and everything around it into a cross platform browser project because we no longer wanted to limit to that niche hobby operating system. So that's now called ladybird browser and entails everything from the web engine, CSS parsing, webassembly engine, JavaScript engine, and so on. So both “SerenityOS JS engine” and “Ladybird JS engine” is fine, depends on the context a little bit. -LGH: Let's look at some characteristics. So as I mentioned it’s completely developed from scratch, and that’s not really because we don't think that all external code is not suitable, it's just sort of just a principle because we get really good integration. And once you develop everything from the kernel to the last user space application and like they all share the same standard library. Like you've got the same string type everywhere, the same vector type, and we really don't want to introduce anything from the outside into that because then it falls apart. It's implemented in a way where we started quite late, like two decades behind some engines. Which results in a weird order of implementing stuff. So, at times, we had very modern bleeding edge JavaScript features implemented way before some obscure legacy thing that every other browser has got. One example is the ‘with’ statement. 
Yes, no one wants to use it, but you kind of still need it, but that came later on. The historical timeline is a bit distorted and it's all mixed. One thing we find very important is staying very close to the spec. I'll get into that in a bit, but its source code just looks very similar to what you would see in the ECMAScript specification and the proposals, and that helps a lot with staying correct and also being able to find stuff from the spec in our engine and then see how that integrated into other things. Additionally optimizations and everything not in the specs is marked as such. I mean, we still got parts that are not in the spec at all, garbage collection, for example. Also we have no roadmap because it's all volunteers, it’s just an open source project. Contributors don’t really make any promises about what they will work on or will not work on. So, the easiest way to make something happen is to make it yourself, we usually say, but that doesn't mean that we won't implement something at all. We usually try to stay up to date with all the latest proposals and stuff. But there is no official "yes, we will do this at point x in time". And as I also mentioned earlier, we regularly get recordings of engine development, recorded and published on YouTube, and also once a month, we make a summary video that contains all the changes both on the operating system side and on the web engine side. Some people use that to stay up to date with the project. Here’s one example of how that coding style looks. So, for example, the GetMethod operation, it's just four simple steps but turns into way more code than just four lines because we mark everything with those comments, which also makes it very easy to review, and makes it very easy to check for correctness. You can also see, we don't always use the exact same name. So “func” from the spec gets turned into “function” for example, and we have additional helpers as well. It's not a literal translation, but it tries to stay very close. +LGH: Let's look at some characteristics. So as I mentioned it’s completely developed from scratch, and that’s not really because we don't think that all external code is not suitable, it's just sort of just a principle because we get really good integration. And once you develop everything from the kernel to the last user space application and like they all share the same standard library. Like you've got the same string type everywhere, the same vector type, and we really don't want to introduce anything from the outside into that because then it falls apart. It's implemented in a way where we started quite late, like two decades behind some engines. Which results in a weird order of implementing stuff. So, at times, we had very modern bleeding edge JavaScript features implemented way before some obscure legacy thing that every other browser has got. One example is the ‘with’ statement. Yes, no one wants to use it, but you kind of still need it, but that came later on. The historical timeline is a bit distorted and it's all mixed. One thing we find very important is staying very close to the spec. I'll get into that in a bit, but its source code just looks very similar to what you would see in the ECMAScript specification and the proposals, and that helps a lot with staying correct and also being able to find stuff from the spec in our engine and then see how that integrated into other things. Additionally optimizations and everything not in the specs is marked as such. 
I mean, we still got parts that are not in the spec at all, garbage collection, for example. Also we have no roadmap because it's all volunteers, it’s just an open source project. Contributors don’t really make any promises about what they will work on or will not work on. So, the easiest way to make something happen is to make it yourself, we usually say, but that doesn't mean that we won't implement something at all. We usually try to stay up to date with all the latest proposals and stuff. But there is no official "yes, we will do this at point x in time". And as I also mentioned earlier, we regularly get recordings of engine development, recorded and published on YouTube, and also once a month, we make a summary video that contains all the changes both on the operating system side and on the web engine side. Some people use that to stay up to date with the project. Here’s one example of how that coding style looks. So, for example, the GetMethod operation, it's just four simple steps but turns into way more code than just four lines because we mark everything with those comments, which also makes it very easy to review, and makes it very easy to check for correctness. You can also see, we don't always use the exact same name. So “func” from the spec gets turned into “function” for example, and we have additional helpers as well. It's not a literal translation, but it tries to stay very close. LGH: What's in scope? As I mentioned earlier, it's basically everything. We target the latest specification draft, so we don't even stick to the 2022 or 2023 yearly release. We just use whatever is currently on the main branch, both in 262 and 402. New proposals are usually considered from stage 3 onwards. We don't always have the capacity to make stage two prototypes and then do all the changes from there, but past stage three we are usually very happy to implement them. Annex B as well, it's just part of the core engine. We provide all the hosts hooks that are in the spec, and host-defined slots for any host that wants to embed the engine to customize behavior as they expect. And that is obviously the case in the browser, stuff like how you load modules. Or [[HostDefined]] slots to hold custom data and next to the VM. But it's also important to mention that it’s a pure ECMAScript engine. Concepts that are defined elsewhere, we don't really want to put into the engine. So while some engines will choose to ship WebAssembly as part of JavaScript, in our case it’s a separate library and the web engine provides the JavaScript bindings for WebAssembly, it's not part of LibJS itself. Here's a list of implemented proposals. A bunch of stuff in there like array grouping, change array by copy, import assertions, JSON modules, ShadowRealm, Temporal, Symbols as WeakMap keys. On the Intl side as well, but that's not a lot of stuff I work on. DurationFormat, the enumeration API, and Locale Info, for example. @@ -951,7 +917,7 @@ LGH: So it doesn't really make sense to make a JS engine if no one is using it. LGH: It's not been without problems. So over the years we've had probably three big issues, which I have listed here. The biggest one is initially not using the spec as the main source of truth. In hindsight that sounds very silly. But you know, you got a bunch of people just being excited about the new thing and they jump onto it. They're probably not as careful as they should be. 
So people would go and implement stuff from memory, or implement stuff based on MDN descriptions, or implement stuff based on other browsers’ observed behavior, and you just get a bunch of edge cases that will not work, we had so many inconsistencies. For example, evaluation order of individual steps was not right, or mismatched compared to all the other engines. Also things like duplicate ToString or numberconversions. Yeah, it was a bit of a mess. Our solution to that was just imposing a very strict coding style, which I showed earlier, where we literally take the entire spec, copy it into the source code one line of spec, one or more lines of source code, and then you can very easily compare that. And it also makes sure that the person who implemented that actually looked at the spec instead of just going ahead and doing it from scratch. That's extreme but it solved the problem. Then we also did not get some fundamentals right from the beginning, one example being objects. So you probably are familiar with internal object operations. Which proxies hook into, for example. And unless you specifically provide those entry points it’s very hard to make a fully compliant object implementation, we had one that worked in 90% of the cases and getting the last 10% right was incredibly difficult because you would just get so many edge cases and special handling, it’s turning into a mess of spaghetti code. To fix that, we did it again from scratch, which is unfortunate. And the same with Realms. So early on, we just didn’t have the concept of Realms. Intrinsics lived on the global object, which was fine. It gets you ahead for a bit, but then after a while, especially once you start integrating it into the browser and stuff like iframes, cross realm correctness just falls apart. And then we had an almost 10,000 line change in a single PR, we ripped out all the old stuff and replaced it. Also not testing at a large scale, test262 is not complete coverage, but you can get very far. And so had we run that consistently earlier, we would have noticed all these issues from the beginning. Like “oh, I just implemented this function and I expected it to work. Why are half of the tests still failing?”. -LGH: We have currently 87 individual contributors, counted by commits that focus on the engine itself. If you take everything around it like BigInts, or regex, it’s much more. But it's still only a very small core team. So we have eight people which made over 90% of contributions. But we still to this day, encourage newcomers, it's not as easy as it was in the early days where you could go and Implement if statements because no one has done that yet, all the low-hanging fruits are gone. But we still encourage people to join and try something. And maybe they'll like it and they stick around. One example of that is editorial changes. We try to keep up with those even though it's not necessarily required to do it. But that could be as simple as changing a few words, you can get a good feeling of how the whole process works. Inspired by the graph showing Igalia as being the number two chromium contributor that got shared recently I made one for ourselves, the penguin avatar is myself but we got a few more people there, many of them focus on some specific things. You also might have seen them online on GitHub filing issues in proposal repositories or for test262. It's a great team. I’m very proud of all the work they have done. 
+LGH: We have currently 87 individual contributors, counted by commits that focus on the engine itself. If you take everything around it like BigInts, or regex, it’s much more. But it's still only a very small core team. So we have eight people which made over 90% of contributions. But we still to this day, encourage newcomers, it's not as easy as it was in the early days where you could go and Implement if statements because no one has done that yet, all the low-hanging fruits are gone. But we still encourage people to join and try something. And maybe they'll like it and they stick around. One example of that is editorial changes. We try to keep up with those even though it's not necessarily required to do it. But that could be as simple as changing a few words, you can get a good feeling of how the whole process works. Inspired by the graph showing Igalia as being the number two chromium contributor that got shared recently I made one for ourselves, the penguin avatar is myself but we got a few more people there, many of them focus on some specific things. You also might have seen them online on GitHub filing issues in proposal repositories or for test262. It's a great team. I’m very proud of all the work they have done. LGH: Now you might wonder, how can you try it? So obviously, it's still part of the Serenity operating system, which runs into QEMU. So, we've made it very easy to run, it’s basically just one command to run after installing a few toolchain dependencies. It also runs natively on Linux and macOS. We’re not providing binaries right now because we don't think it's quite ready for that yet. We want people who provide feedback to be at least able to build it. We have integration in esvu and eshost, thanks to Idan who did that a little while ago. Also Ladybird as I showed in these pictures earlier. And as of one or two weeks ago, we have a WebAssembly build of the entire C++ engine. So, you can actually try in the browser right now. Which is thanks to Ali, who also made our WebAssembly engine. That’s it, if anyone has any questions I’m happy to answer them now, but I'm not sure how much time is left. We are already over the time.
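As a footnote to the coding-style discussion above: LibJS itself is C++, but the convention LGH describes, with every spec step pasted in as a comment directly above the code that implements it, is easy to illustrate. The sketch below is a JavaScript rendering of the GetMethod steps, not LibJS source; the function name and error message are only for illustration.

```js
// GetMethod ( V, P ) -- a JavaScript illustration of the "one spec step,
// one comment" style; the real LibJS implementation is C++.
function getMethod(value, propertyKey) {
  // 1. Let func be ? GetV(V, P).
  const func = value[propertyKey]; // a plain property read approximates GetV here

  // 2. If func is either undefined or null, return undefined.
  if (func === undefined || func === null) {
    return undefined;
  }

  // 3. If IsCallable(func) is false, throw a TypeError exception.
  if (typeof func !== "function") {
    throw new TypeError(`${String(propertyKey)} is not callable`);
  }

  // 4. Return func.
  return func;
}
```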