WD-40 Issues thread was accidentally deleted by @keean #49
Seems you completely ignored what I wrote earlier in the thread in response to @sighoya:
No it doesn’t. It obscures intent which can lead to inadvertent resource starvation. Yet I repeat myself again. And Rust is not doing the ARC implementation of the form of RAII which can return the resource from the stack frame scope (i.e. not the “basic use case where the lifetime of an RAII object ends due to scope exit”), it is entirely a static analysis which is what I was discussing with @sighoya. AFAIK Rust can’t handle the multiple strong references case, although I’m arguing against allowing it (or at least discouraging it) anyway. But surely C++ allows it via
If programming was only so simple and one-dimensional as you want to claim here on this issue.
I actually find they often get in my way. It’s usually better to paradigm-shift than pile complexity on top of complexity. |
@shelby3 my use of Weak/Strong is in line with the common usage, for example see weak references in Java and JavaScript. Suggest we try and stick to the standard terminology. The RAII case is less likely to lead to resource starvation because the programmer cannot forget to free the resource, and the resource is freed as soon as the program holds no references to the resource. There is no semantic conflation. If you have the handle you can read from the file. If there is no handle you cannot read from the file. It's simple and clearly correct. The case where you can have access to a handle that is closed is far more confusing to me. Don't forget that by wrapping that handle in a weak reference you can explicitly destroy it at any time. The purpose of a weak reference is to encapsulate something that has a shorter lifetime than the reference. So from my point of view the explicit distinction between strong (the default) and weak references is the important thing. If we have this then we can have RAII file handles, and can use them in all the ways you want. I don't really think there is any conflation here, just a simplification that prevents a whole class of mistakes (unless you explicitly make the reference to the handle weak). |
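For illustration, a minimal Rust sketch of the pattern described above (the file name `data.txt` is hypothetical): a single strong (owning) reference controls the handle's lifetime, other code holds only weak references, and the resource can be destroyed explicitly at any time even while weak references remain:

```rust
use std::fs::File;
use std::rc::{Rc, Weak};

fn main() -> std::io::Result<()> {
    // The single strong (owning) reference controls the handle's lifetime.
    let owner: Rc<File> = Rc::new(File::open("data.txt")?);

    // Other code holds only weak references, which do not keep the file open.
    let observer: Weak<File> = Rc::downgrade(&owner);

    // While the strong reference is alive, the weak one can be upgraded.
    assert!(observer.upgrade().is_some());

    // Dropping the last strong reference closes the file immediately (RAII),
    // even though a weak reference is still in scope.
    drop(owner);

    // The weak reference now observes that the resource is gone.
    assert!(observer.upgrade().is_none());
    Ok(())
}
```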
No I suggest you assimilate what I write and understand that I specifically wrote (much earlier in the discussion) to not conflate it with (the traditional meaning of) strong and weak references.
As I predicted, more blah blah from you repeating your same myopic argument for the Nth time and totally failing to address my holistic point. The last use of the reference to a data structure which somewhere in the object graph references a resource handle (and especially opaquely if you allow cyclic references) doesn’t necessarily correspond to the end of the last access of the resource handle or the resource.
Which in your model does not release the resource. Again that is tied into my holistic analysis, but it has become too complex (requires too much verbiage which I am unwilling to continue) to untangle all your vacuous non-rebuttals.
We can have RAII or even slightly better than RAII with a variant of my desire to ask the programmer to be explicit to make sure intent is not obfuscated or forgotten. And we do not even need ARC but that leads us to something like Rust, which I don’t like. And we already discussed that ARC is incompatible with non-ARC, GC, and I don’t like conflating resource lifetimes with reference lifetimes anyway (for the reasons I had already stated including obfuscating intent and explicit duration of lifetimes, e.g. where delete an item from a
Even with a single strong reference (and even if no weak references) and given implicit RAII (whether it be via ARC or linear types a la Rust) there is still a conflation of resource lifetime semantics with (reference or access to) resource handle lifetime semantics. |
@shelby3 I think you just don't understand some things that are common knowledge in programming and it's frustrating trying to explain them to you. I can understand you have a preference for a particular style of semantics, say GC with explicit destructors, and that is fine. You should also understand that there are other people who prefer other idioms like RAII. Your criticisms that there is something wrong with RAII, or use cases it cannot cope with, are wrong. I think because you don't like it, you have not really studied it or looked at how it solves the problems. I have written programs using RAII, so I know how it helps avoid bugs. I have directly seen the benefits in real projects compared with explicit destructor approaches. So the situation from my point of view is that real world experience contradicts your speculation. |
That is an extremely unfair allegation when in fact I wrote earlier in the discussion that I was making a distinction from the common usage of the terms. Do I need to quote it for you? There is nothing wrong with the logic of my model. And you put your foot in your mouth (and I use that terminology because you were forcefully stating I was wrong) because you did not read carefully what I wrote. And so now you ’backsplain by attempting to claim that I don’t understand when in fact I wrote earlier on that I was using a different model.
What part of the following can’t you read?
What part of RAII on non-ARC requires (something like) Rust do you not understand? Even if I wanted implicit, I would have to use Rust if I don’t want ARC. Why do you conflate everything into a huge inkblot instead of understanding the orthogonal concerns I explain? Did you entirely forget that I wrote that my new ALP idea does not use ARC?
My preference for the programmer to express explicit intent was not about explicitly calling a finalizer in every case. You completely failed to follow the discussion from start to finish. I was advocating an explicit indication when something like RAII was going to be used in each instance where the reference is initially assigned.
I used ARC also for resource destruction in C++. But I also did not have a highly concurrent program. And I wasn’t doing anything significant that would have significantly stressed resource starvation if my usage was suboptimal. But the points I made against implicit RAII were not that it can’t be made to work and were not a claim that it isn’t convenient and doesn’t prevent bugs compared to an unchecked manual cleanup (and I never advocated completely unchecked manual cleanup!), which just goes to show you have not even bothered to really understand what I wrote. My points are about the obfuscation in code it creates and how that can lead to suboptimal outcomes and unreadable code in some cases. Surely you can fix bugs and mangle your code to make it all work, but you may make it even more unreadable. My point was an attempt to think about how to make it even better and how to solve the problem of ARC being incompatible with non-ARC, GC (as we agreed merging the two would bifurcate into a What Color is Your Function) and without having to take on the complexity of Rust’s borrow checker. While hopefully also gaining more transparency in the code. You completely mischaracterize what I wrote after deleting the entire thread where I wrote all of that. |
@shelby3 I have not said that your model is wrong, I am sure it's probably correct, just your use of non-standard terminology makes it too much effort to decode. I think your criticisms of RAII amount to a personal preference, and you don't seem to appreciate the real world benefits that I, and some others, have been trying to explain to you, which are rooted in practical experience not speculation.
Well you can do RAII in C++ so that's not quite right, but I get what you are trying to say, and I agree with you. I don't think you have understood what I am trying to say, because I have not disputed this. |
Have I ever stated I don’t appreciate the benefits of RAII as compared to manual unchecked explicit release of resources? No! If you think I did, you simply failed to read carefully.
You did not even understand what I wrote! Hahaha. I was stating that I would have to punt to something like Rust only if I wanted to achieve something like RAII in a non-ARC scenario. To the extent that C++ is not using ARC, it is employing move semantics like Rust.
I don’t think I have imputed that you disputed that claim — rather that you do not seem to incorporate in your understanding of what I am writing about, that that claim is one of the factors in my analysis. There is a logical distinction. Discussion with me can include some mind-bending orthogonal concerns, and it is difficult to keep up apparently. I think I know what Bill Gates felt like talking to his interviewer as I cited before. Maybe the blame could be put on me for not having a fully fleshed-out replacement for Rust or ARC and having it all written up in a tidy blog. But I never thought that discussion had to be only for the stage where ideas were fully formed and hatched. |
@shelby3 - Can you please just write a code example that actually truly genuinely shows your point that resource starvation is a likely outcome? Your existing example was disproved before you even posted it (whether or not you mentioned something to someone else later) because Rust lets you do what you said you couldn't do, and the resource is cleanable (if not clean because of Rust's drop-at-block-boundary semantics).

Like @keean I have spent a lot of time with RAII-based languages (C++ and Rust) and a lot of time with GC-based languages with

When it comes to correctness in handling non-memory resources, empirically my observation is that it is far easier to do it in C++ and Rust. It's a constant problem with my Scala programs. It's basically never a problem in my Rust programs. (In C++ I had so many other problems that I can't recall how bad it was, just that various other things were worse.)

When it comes to explicit vs. implicit for things like this, I don't have a language that catches every case and insists that I be explicit about it, but I do have experience with parallel constructs (e.g. be-explicit-with-all-types, be-explicit-with-all-exceptions-thrown, be-explicit-with-all-returns) and in every case being explicit is a drawback. The reason is that attention is finite, and it wastes your attention on the obvious stuff that is working right every time (or which you can't even tell is working right because you're not a computer and can't trace every logical path in milliseconds like it can). Even saying

In addition to the extensive experience that this is the better way to do things (at least for someone with my cognitive capacities), there is also the critical invariant that @keean keeps mentioning over and over again, which you never have adequately and directly responded to, which is that RAII can prevent use-after-free (and use-before-initialization) errors.

So, write a code example that shows what you mean! So far you've failed every time. Everything is trivially fixable or isn't even the problem you claim it is. If it takes design to get it right, you additionally need to argue that this is harder than the alternative to get your proposal right. (So write two examples if you must in order to show the contrast.) I am certain I am not understanding your objections without code; it seems as though @keean is also not.

Now, if your objection is just, "Rust does this all correctly, but I don't like Rust," then I understand that, but you keep saying things about risking resource starvation when my experience, and the logic of can-use-it-when-you-can-refer-to-it-can't-when-you-don't both argue that this is the way to avoid it. |
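As an aside, a minimal Rust sketch of the drop-at-block-boundary semantics mentioned above (the file names are hypothetical): the handle is released either at the end of an inner block or explicitly with `drop`, well before the enclosing scope ends:

```rust
use std::fs::File;
use std::io::Read;

fn main() -> std::io::Result<()> {
    let contents = {
        // The handle lives only inside this block.
        let mut f = File::open("config.toml")?;
        let mut s = String::new();
        f.read_to_string(&mut s)?;
        s
    }; // `f` is dropped (and the OS handle closed) here, at the block boundary.

    // Alternatively, an explicit early release without introducing a block:
    let f2 = File::open("config.toml")?;
    drop(f2); // handle closed now, well before the end of `main`

    println!("read {} bytes", contents.len());
    Ok(())
}
```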
What part of I don’t want to use Rust which I have stated many times do you fail to assimilate? So you did not disprove anything w.r.t. my stated context. The discussion started with you claiming that Rust’s ability to manage resource lifetimes was a big win for the borrow checker. And I followed by saying I had been thinking about how I don’t want to implicitly conflate resource lifetimes with resource handle access lifetimes (which, @keean, is not the same as saying I want unchecked explicit finalization instead of RAII — come on, keep orthogonal points orthogonal and don’t conflate them). And one of the reasons is because I don’t want to be forced to use Rust’s semantics and borrow checker (given that ARC is incompatible with my GC idea for ALPs). And the other reason is I think perhaps being implicit is obfuscating and can at least lead to unreadable code, which was exemplified in your subsequent example wherein @sighoya did not immediately realize that the function could have thrown an exception leading to a leak without implicit RAII. In other words implicit is not always conveying all the cases of implicit semantics to the programmer. Now whether that is a good or bad thing is a separate debate. But please come down off your “I disproved something” high horse. |
Remember you have pointed out that
This makes sense to me. But it is not a rebuttal to my desire to not want to be forced to use something as tedious (and ostensibly fundamentally unsound type system) as Rust. And punting to ARC will also not meet my ALP zero cost GC goals. Also ARC can not collect cyclic references! I think @keean may have forgotten that, otherwise maybe he would not be pitching ARC as a slam dunk. EDIT: also apparently Rust can’t do unrestrained cyclic references (although some special casing of data structures with cyclic references apparently can be accommodated): https://www.reddit.com/r/rust/comments/6rzim3/can_arenas_be_used_for_cyclic_references_without/
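For reference, a minimal Rust sketch of the reference-counting cycle being alluded to (the `Node` type is invented for illustration): two `Rc` nodes that point at each other keep each other alive, so neither destructor ever runs:

```rust
use std::cell::RefCell;
use std::rc::Rc;

struct Node {
    // Using a strong reference for the back-edge creates a cycle.
    other: RefCell<Option<Rc<Node>>>,
}

fn main() {
    let a = Rc::new(Node { other: RefCell::new(None) });
    let b = Rc::new(Node { other: RefCell::new(Some(Rc::clone(&a))) });
    *a.other.borrow_mut() = Some(Rc::clone(&b)); // a -> b -> a

    println!("strong counts: a = {}, b = {}", Rc::strong_count(&a), Rc::strong_count(&b));
    // When `a` and `b` go out of scope, each node still has one strong reference
    // from the other, so neither Drop runs and the memory (or any resource held
    // inside a Node) is never reclaimed.
}
```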
I was not proposing to be explicit about every
Whether it is useful or not is an orthogonal debate, but at least it invalidates your comparison above.
I have addressed it. You must have forgotten the entire
If you continue to make false allegations like that, then the exchanges between us are going to become more combative. You take something completely out-of-context and then make claims which thusly don’t apply.
Nope. With that claim you implicitly presume or claim Rust is trivial. And by transitive implication you claim that ARC can handle cyclic references because my context has clearly been that Rust is not trivial, so the only alternative to Rust’s lifetimes currently concretely known to work for RAII is ARC. EDIT: and apparently Rust can’t implement unfettered cyclic references well either. So your hubris has been deflated. Hey I was in an amicable mood of discussion. And then you and @keean started to attack me with an attitude of hubris with confident, false claims about how wrong I am, how you disproved me, how I don’t understand terminology, how I don’t understand the benefits (and tradeoffs) of RAII, ARC, Rust, etc... Tsk, tsk. There’s always some tradeoffs in every paradigm choice. We should not declare that all the possible quadrants of the design space have been enumerated because it is difficult to prove a negative. |
Replying to myself:
So it should be easy to make an example that shows how this could lead to resource starvation which would not be detected by RAII (neither Rust nor ARC). Just store the resource handle in a data structure and stop accessing it, meaning you are finished with the resource handle, but continue to hold on to the reference to the data structure and access other members of that object. So neither Rust nor ARC will detect that the semantic resource life has ended before that of the lifetime of the data structure which contains it. The only way around that is to refactor the code to attempt to remove the encapsulation (which has potentially other deleterious implications) or explicitly delete a strong reference (which @keean points out is unsafe because the resource handle object would still be accessible). I mentioned this scenario in my prior example (such as quoted below) and @NodixBlockchain also mentioned it.
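A minimal Rust sketch of the scenario just described (the `Session` type and file name are hypothetical): the handle's last real use is long past, but because it is embedded in a struct that stays live for other reasons, neither Rust's drop rules nor ARC will release it:

```rust
use std::fs::File;
use std::io::Read;

struct Session {
    name: String,
    log: File, // scarce OS resource embedded in a larger object
}

fn main() -> std::io::Result<()> {
    let mut session = Session {
        name: "alice".to_string(),
        log: File::open("session.log")?,
    };

    // Last real use of the file handle...
    let mut buf = String::new();
    session.log.read_to_string(&mut buf)?;

    // ...but the program keeps `session` alive to use its other fields,
    // so neither Rust's drop rules nor ARC can know the handle is semantically
    // dead: it stays open until `session` itself is dropped.
    long_running_phase(&session.name);
    Ok(())
}

fn long_running_phase(name: &str) {
    // stand-in for a long-lived phase that never touches `session.log`
    println!("continuing work for {name}...");
}
```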
@Ichoran what have you disproved? 🤦 EDIT: even just general discussions about memory leaks applies to why ARC can leak resource lifetimes if we conflate them with ARC (or for that matter RAII as implemented in Rust, because the lifetimes checker can’t resolve the last paragraph below which was my point all along): https://www.lucidchart.com/techblog/2017/10/30/the-dangers-of-garbage-collected-languages/
|
So now perhaps you can understand my perspective. Given that I can’t have RAII in Task because:
And given that RAII has semantic vulnerabilities to resource starvation and implicit hiding of resource lifetimes as I have explained, then any solution I come up with could also have some vulnerabilities to resource starvation and not be at a disadvantage to RAII in every case. (to be continued after I sleep with an explanation of my paradigm-shift idea, which isn’t that far from the original idea I was attempting to explain since yesterday) |
@Ichoran To summarise my comments on how we could do better than Rust, I think lifetime erasure is a problem in Rust. I propose a system that uses reference counting as its semantics, and the type system is then used to provide static guarantees over this. Where static behaviour can be proven, then the RC will be optimised out.

The reason for ARC rather than Mark-Sweep memory management is that the semantics with destructors will be the same whether the compiler statically manages the memory or uses runtime/dynamic memory management. This allows using RAII consistently.

The first pervasive simplification would be when there is only one reference (effectively an owning reference). In effect we have three kinds of reference:

Owning references:

Not owning references (better name?):

Notes:
So the idea is, unlike Rust we start with memory safe runtime semantics, and we optimise performance when we can make compile time proofs. So unlike Rust we don't need heuristic rules to keep the expressive power in the language, and static optimisations can be restricted to only those places we can actually prove the optimisation is safe. This also avoids two things I don't like about Rust. There is no safe/unsafe code distinction, just performant and less performant code, and there is no source code distinction between ARC and non-arc code. This really helps with generics because we don't need to provide two different versions of generic functions to cope with ARC and non-ARC data. @shelby3 If you want to use Mark Sweep style memory management, you would have to avoid static destructors to allow the compiler to optimise between runtime/dynamic memory management and static memory management with no change in the semantics. So the alternative architecture would be Mark Sweep with explicit destructor calls for non-memory resources. My hypothesis, which could be wrong, is that enough code will optimise with static memory management that the performance difference between ARC and MS will not be noticeable. I think some people will prefer the RAII approach, and if we can negate the performance penalty of ARC with static management optimisations then that will be an interesting language. I think both of these languages (RAII+ARC and ExplicitDestructor+MS) will be a lot simpler than Rust because we can hide all the lifetimes from the programmer, because we have safe runtime semantics with RC or MS, and then static lifetime management is an optimisation. We can implement increasingly sophisticated lifetime analysers without changing the semantics of the language, something Rust cannot do because it differentiates between dynamically managed resources (ARC) and statically managed resources in the language. |
@shelby3 - You can use

The downside of RAII is basically the opposite of what you're saying. It's not that it has semantic vulnerabilities. It's amazing at avoiding vulnerabilities compared to everything else that anyone has come up with. However, it comes with some awkwardness in that you have to pay attention to when things are accessible and get rid of them as soon as they're not. This is a heavy burden compared to standard GCed memory management where things get lost whenever, and then eventually it's noticed at runtime and cleaned up.

If one doesn't mind the conceptual overhead of having both systems active at the same time, one could have the best of both worlds. A resource could either be another way to declare something, e.g.

Alternatively, one can have general-purpose resource-collecting capability in addition to memory. You'd have to provide some handles to detect when resources are growing short, and adjust the GC's mark-and-sweep algorithm to recognize the other kinds of resources so they could be cleaned up when they grow short without having to do a full collection (though generally the detection is the tough part anyway). Then every resource would act like GC--you'd never really know quite when they'd be cleaned up, but whenever you started to get tight they'd be freed. Sometimes this is good enough. Sometimes it's risky. (E.g. not closing file handles can increase the danger of data loss.)

Regarding how to make RAII fail:
Yes, absolutely. If you hang on to a resource on purpose by sticking it in a struct then, sure, it's not going to get cleaned up because you might use it again. Any resource can be held onto that way--actually finish using it, but don't say so, and require your computer to solve the equivalent of the halting problem in order to determine whether it'll actually get used again. If you have users who are using stuff that is a resource without even knowing that it's a limited resource, and are making this kind of mistake a lot, then yeah, okay, I see that they might need some help. Having the type system yell at them when they create it to make sure they know what they're getting into is perhaps a plus. ("I opened a file? And I can't open every file at the same time, multiple times, if I want to? Golly gee, these computers aren't so flexible as everyone makes them out to be!") If you have very slightly more canny users, they presumably won't do that. They'll learn what things use scarce resources, and use whatever the language provides for them to release them when they need to be. |
@keean - The cognitive overhead in Rust of avoiding Arc cycles is not entirely negligible--data structures where you used to not have to care end up being a mix of Arc and Weak. Do you have a plan for how to avoid that? |
Yes, owning pointers must form a Directed Acyclic Graph, enforced by the type checker. |
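A minimal Rust sketch of that discipline (the `Node` shape is invented for illustration): owning `Rc` edges only point one way, and back-edges are `Weak`, so the owning references form a DAG and everything is freed when the root goes away:

```rust
use std::cell::RefCell;
use std::rc::{Rc, Weak};

struct Node {
    parent: RefCell<Weak<Node>>,      // non-owning back-edge: no cycle of owners
    children: RefCell<Vec<Rc<Node>>>, // owning edges form a tree (a DAG)
}

fn main() {
    let root = Rc::new(Node {
        parent: RefCell::new(Weak::new()),
        children: RefCell::new(vec![]),
    });
    let leaf = Rc::new(Node {
        parent: RefCell::new(Rc::downgrade(&root)),
        children: RefCell::new(vec![]),
    });
    root.children.borrow_mut().push(Rc::clone(&leaf));

    // Owning references only point "downwards", so dropping `root` (and the
    // external `leaf` handle) frees everything; the Weak back-edge never keeps
    // a parent alive.
    println!("leaf has a parent: {}", leaf.parent.borrow().upgrade().is_some());
}
```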
@keean any feedback yet from Github? If we lose that entire issues thread #35, that is somewhat catastrophic to my work on the new PL. There were so many important comment posts that I had linked to from my grammar file which contained various explanations of various issues in the PL design space. I know you have that backup you made 2 years ago. At the moment I still have a copy of that #35 issues thread still open on my web browser. So in theory I could start copying (at least my) hundreds of posts out of the thread (or at least the ones which were added or changed since your backup), but that would be a significant manual undertaking, unless perhaps I figured out how to write some scraper in JavaScript or a browser plugin. Does Github have a paid support level? Should we pay to get the service we need? How much would it cost? I might be willing to chip in to get that issues thread restored. Also it may be the case that if they restore, they do it from a nightly backup so perhaps we may still lose some of the last comment posts. So the sooner I could know, the more chance that the copy open in my browser window will still be there, so I can recover the last comment posts as may be necessary. So what I am saying is can you push harder on Github support for feedback sooner? |
@shelby3 nothing heard back yet. It's probably sensible to make sure you don't lose what you have, but hold off posting it up yet. |
I haven’t read the context yet, but I actually thought this point might be raised so I was thinking about this already just before I drifted off to sleep...
That Rust essentially prevents one from creating willy-nilly cyclic references isn’t necessarily a good thing. It’s a limitation which may impact on degrees-of-freedom. I have not fully explored what that limitation means in practice, but I presume that being able to not restrict cyclic references is preferred in a general purpose PL, if it doesn’t incur some unacceptable tradeoff such as the level of performance priority required for the use case (whereas for example in a high-level language the performance priority is somewhat lowered relative to an increase in the need for degrees-of-freedom, flexibility/ease-of-expression, unobtrusive safety guarantees, etc)?

And ARC doesn’t prevent cyclic references yet can’t collect them. And I’m aware of partial tracing algorithms that attempt to clean up cyclic references, but these don’t guarantee to catch them all without perhaps the extreme of essentially trashing the overall performance. Also for any reentrant or multithreaded code, ARC requires a multithread synchronization primitive on each modification of the reference count. There are other issues with performance which have some workarounds with some designs but various tradeoffs: https://en.wikipedia.org/wiki/Reference_counting#Dealing_with_inefficiency_of_updates

So this seems to point towards Rust as the only solution which is more performant while still admitting some cases of (but AFAIK not truly unrestrained) cyclic references, yet I would prefer to find a solution which is nearly as performant and zero cost abstraction as Rust (including lowest memory footprint), without any of the complexity of the lifetime tracking (annotations, limitations, unsoundness, etc) and which can fully integrate with cyclic references as tracing GC does. I think I may have found another quadrant in the design space. There will be tradeoffs of course — there’s always some caveat. AFAICS, our job is to identify the tradeoffs and make engineering choices. |
In the meantime could you ask the community what Github’s typical capabilities and reaction are to such a support request and what is the best way to go about seeking any remedies which may be available? Are you sure the history is not available via an API? Surely the issues threads are Git versioned, including deletes? Perhaps the community may have a solution for us which doesn’t require action from Github Support, whose webpage for opening a support ticket states they’re currently understaffed due to

[1] Skip to p.265 of Bolton’s book to read the chapter about how China manipulated Trump. Keep in mind that Bolton is a war hawk and his book is written to inflame conservatives. Nevertheless I think it’s possibly also a valid indictment of China’s internal politics. |
@shelby3 When I searched for issues, it seems they normally restore deleted stuff, next day for some people, but they say they are responding more slowly due to the current situation. |
It's in the idea of what I'm doing with the runtime: it can switch from RC to MS while still keeping the RC semantics, which can help with tracking subtle memory bugs and make the required memory pattern more obvious. |
The strong and weak references paradigm is a PITA. Although perhaps it should be an option, I’d hope (at least for a high-level, easy-to-use PL) not to choose a paradigm where it’s the only option, because it’s another deviation from the ideal degrees-of-freedom, being sort of another What Color Is Your Function bifurcation — which at its generative essence also describes Rust’s alternative, where not using weak references ostensibly means an inability to have any cyclic references.
Again AFAICS you’re (perhaps unwittingly due to inadvertent choice of wording and not with tendentious nor combative intent[1]) making false allegations by insinuating that I didn’t recognize the downside of the tradeoffs (e.g. “heavy burden”) incurred to implement RAII and/or that I claimed that RAII has the downside of not improving upon unchecked lifetimes semantics, which I did not claim. Let me quote myself again:
[…]
Yet it does have semantic vulnerabilities which you finally admit below.
I never claimed otherwise, although I wouldn’t go quite so far as the hyperbole[1] “amazing” because it doesn’t prevent naive programmers from “live leaking” (i.e. not just forgetting to deallocate but deallocating too late) and it does lead to opaque implicitness — remember I am trying to design a PL that could possibly be popular, which means, as I have documented, hordes of naive, young, male programmers with less than 5 years of experience (they significantly outnumber the experts).

Does wanting to improve upon RAII have to mean (conflate) in your mind that I think the implicit cleanup of RAII is the worst thing ever invented? I attempt to improve upon RAII because up until now the only ways to achieve it have been the all-or-nothing tsuris of Rust or the limitations/downsides of ARC. Meaning that although I think it was an improvement in some respects over the paradigm the programmer has been offered for example with a MS (aka tracing) GC, the tradeoffs of using it are countervailed to some extent by significant tradeoffs in other facets of the PLs that offer RAII, as @keean, you and I have discussed. And especially so the tradeoffs when we consider the aforementioned demographic I might be targeting, and in general the point I have made about a bifurcation between high-level and low-level PL, taken in the context of my desire to focus first (for now) on the high-level and making it as fun and easy to work with as possible by not conflating it with low-level concerns.

Note there are many orthogonal statements above, so (to all readers) please don’t conflate them into one inkblot. Give me some benefit of understanding please. Our discussions should be about trying to enumerate the ranges of the PL design space and raising all of our understandings and elucidations of the various possibilities which have already and have not yet been explored.
Okay we are making progress towards some consensus of understanding and claims.
Agreed as I understand that Rust forces the programmer to prove certain lifetime invariants. I just want to add and note that I had already mentioned that Rust can also allow a “live resource leak”. My point being that the “heavy burden” paid for RAII as implemented in Rust does not resolve all risk of a “live resource leak”.
AFAICS, merging RAII with a MS (aka tracing) style GC’ed language would incur the same lifetime proving “heavy burden” as Rust (because ARC can’t be merged without creating a What Color Is Your Function bifurcation) unless, as @keean’s post today is leading towards (@keean you took some of my design idea from my mind while I was sleeping but you’re not all the way there), the design instead becomes only a best effort and not a guarantee.
I guess we could teach a GC to have prioritization for cleaning up certain resources when they become starved but this is not research I am aware of and it sounds to me that it would have many pitfalls including throttling throughput. I am not contemplating this design. My design idea essentially parallels what @keean wrote today but combined with my new ALP-style GC which requires no MS (no tracing).
Thank you. Finally we presumably have some level of consensus that I am not completely batshit crazy, don’t “you've failed every time”[sic] and was not writing complete nonsense.
Which BTW is the generative essence of why Rust’s lifetime checker has no prayer of ever being able to analyze every case where something leaks semantically or something is safe but Rust thinks it is unsafe. And remember I wrote in my ranting about Rust that Rust can’t catch all semantic errors. (Not implying you disagreed with that claim).
I believe you at least slightly mischaracterize the vulnerability. The convenience of being implicit (just declare a destructor and fuhgeddaboudit) can lead to not paying careful attention. @keean has for example documented how not-so-naive programmers leak in JavaScript with implicit closures. Implicitness encourages not thinking.
They will make fewer errors, but they can still be snagged. Don’t tell me you never have, because I will not believe you.

[1] It’s understandable that possibly my rants against Rust have presumably caused (perhaps subconsciously) you to push back with more vigor than you would had I not expressed what you probably perceive to be unnecessarily too aggressive, flippant, one-sided, uncivil, discourteous, incendiary, emotionally charged, unprofessional, community-destroying, self-destructive, etc.. |
[…]
@keean I like that you were trying to paradigm-shift in a way that somewhat analogous to the paradigm-shift I’m contemplating, but unfortunately there’s a flaw in your design idea.
The flaw is that contrary to your claim of avoiding a What Color Is Your Function-like bifurcation, because there’s still such a bifurcation in your design.
And I don’t think your idea will be as performant as my paradigm-shift which I will attempt to explain today.
My ALP design (c.f. also) doesn’t employ MS nor tracing. It’s a bump pointer heap (i.e. nearly as efficient to allocate as static stack frames) which is released in entirety with a single assignment to reset the bump pointer in between the processing of each incoming message of the ALP. Although you’re correct that in lieu of the guarantee of RAII via ARC (even Rust’s lifetime model requires ARC because it can’t model everything statically!), some explicitness is required, it will not preclude some of the static analysis you were mentioning for your paradigm-shift idea — it just won’t be a 100% RAII guarantee.

But here’s the kicker which you were missing in your presumptions about what I am formulating: the guaranteed fallback in my ALP idea is still a 100% check and that typically occurs reasonably soon, as the ALP returns to process the next message in the queue (although this will require overhead in addition to the single assignment for resetting the heap, but this will only occur for those not explicitly tagged as RAII, which of course the compiler statically checks). If the programmer is doing something which can cause a long pause, they ought to make sure that either the statically proven RAII was implicitly carried out (i.e. destructor called, which is the reason for some terse explicit keyword) or they should manually finalize.

EDIT: There appears to be another place in the design space.
Your ARC will still be less performant than my bump pointer heap with single assignment reset (no mark, no sweep, no tracing, nothing, aka nearly “zero cost abstraction”). Mine should approach Rust’s performance even more closely than Go does, and without the tsuris “heavy burden”.
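A minimal sketch, in Rust for concreteness, of the per-message bump-pointer idea described above (the `BumpArena` type and sizes are invented for illustration and are not the actual ALP design): allocation is a single offset increment and the whole per-message heap is released by one assignment:

```rust
// Allocation bumps an offset; "collection" resets the offset between messages.
struct BumpArena {
    buf: Vec<u8>,
    next: usize, // bump pointer
}

impl BumpArena {
    fn with_capacity(bytes: usize) -> Self {
        BumpArena { buf: vec![0; bytes], next: 0 }
    }

    // Allocate `n` bytes; cost is one bounds check and one addition.
    fn alloc(&mut self, n: usize) -> Option<&mut [u8]> {
        let start = self.next;
        let end = start.checked_add(n)?;
        if end > self.buf.len() {
            return None; // arena exhausted for this message
        }
        self.next = end;
        Some(&mut self.buf[start..end])
    }

    // "Free" everything allocated while handling one message.
    fn reset(&mut self) {
        self.next = 0; // the single assignment
    }
}

fn handle_message(arena: &mut BumpArena, msg: &[u8]) {
    // All temporaries for this message come from the arena...
    if let Some(scratch) = arena.alloc(msg.len()) {
        scratch.copy_from_slice(msg);
        // ... process `scratch` ...
    }
}

fn main() {
    let mut arena = BumpArena::with_capacity(64 * 1024);
    for msg in [b"ping".as_slice(), b"pong".as_slice()] {
        handle_message(&mut arena, msg);
        arena.reset(); // entire per-message heap released at once
    }
}
```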
Well
I agree
Agreed.
|
Does it have the link for each post? Because I refer to the posts by their URLs, so finding which posts I cited would be implausible without the links. |
The source code of the function will be the same whether it can or cannot be monomorphised statically, so there will be no bifurcation of source. In some cases where we can prove static management of the resource is sufficient we will emit a different function, compared to those where the optimisation cannot be made. This second part is trivially obvious, because if we did not emit different code depending on the use case, we would not be optimising anything, and instead we would just have a language with ARC memory management. |
You will not, but you will.
|
I proposed Pony-esque Also I want to correct my error, if I have ever (especially recently) implied or stated that Scala enforces immutability. Scala’s |
The Death of Hype: What’s Next for Scala — A Solid Platform for Production Usage points out that the speed of Scala’s compiler has doubled in the past 3 years.
Agreed, and Scala’s creator Martin Odersky admits in his recent talk about Scala 3 that his goal has been to create a unification of disparate programming paradigms (which you and I assert are ill-fit, although @Ichoran may disagree per the discussion in the Why Typeclasses #30). Odersky does mention some of the downsides which Scala 3 attempts to fix, but he doesn’t address the essence of too many paradigms in one PL: Odersky admits when he discusses the new exports capability that inheritance is bad and should be avoided. So why keep it in the PL? Probably because Scala has its genesis and thus DNA as a research language.

Note I will come back to the remainder of what you wrote in that post, but first I will respond to the post you made before that as follows. I quote the above here because I want to point out that I think Scala sacrifices the opportunity to be sufficiently opinionated enough to have an optimum balance of limited but sufficiently powerful abstractions for a simple language per your point quoted below.
Agreed. Yet it seems we differ (in the way we conceptualize) on what the correct abstractions should be in order to achieve that simple and thus popular language.
Powerful, mathematical or algebraically generative essence abstractions can be too low-level and too literal. For example my recent epiphany obviating and swiping away Rust’s literal and low-level encoding of a total order on exclusive writability, in exchange for a simpler abstraction by cleverly leveraging that types exist. A tradeoff has to be made because there’s no free lunch. We can choose abstractions which serve the 80/20 needs and arrive at an elegant, popular and simple PL, or we can attempt to be as literal and exhaustive as you ostensibly want to be and end up with another STL mess. Perfection can be the enemy of good.

I’m not claiming that Stepanov’s model of for example iterators is undesirable per se. Perhaps it was the attempt to have the STL perform as well as hand-coded LLL programming that is the culprit of the complexity. I’m skeptical about whether it’s possible to design a PL that is optimal for the extremes of both HLL and LLL programming. Rust, C++ and other PLs have attempted to combine for example HLL generics, closures, etc with LLL optimizations and the result has been painful complexity, unsoundness, and extreme difficulty in staying within the intended paradigm without punting to for example inefficient and leaky ARC. In general for example low-level specializations seem to multifurcate generics into a noncomposable mess analogous to What Color Is Your Function?

However I’m also contemplating that Low Level Language Programming Is Considered Harmful and Antithetical To Optimization at least due to radical paradigm shifts on the long-term trend. Something like V(lang) or Go(lang) is all I should need for non-extreme (i.e. not as performance and space optimized as C++ or Rust) LLL programming in most cases that don’t require the optimization of assembly language or extreme control over avoiding redundant copying such as optimization of result value temporaries, as can be accomplished with C++ move semantics and perhaps also in Rust? V even allows pointer arithmetic in

I might be more enthusiastic about Rust for LLL coding if they hadn’t tried to compete with C++ and instead just made a better C, e.g. the complexity around closures, c.f. also and also. But Rust wants to be a HLL tool also, which thus afaics makes it extremely complex, as is also the case with C++, with the complexity lurking in the apparently unsound type system.

Eric Raymond wrote an unflattering opinion about Stepanov’s STL and C++’s overall complexity:
Also Eric blogged C++ Considered Harmful:
One of the commentators wrote:
So I suspect Linus Torvalds is correct that C will remain the PL for operating systems if one has to routinely punt to
I appreciate your insight where you explained that typed row polymorphism is a response to one of Rich Hickey’s major criticisms of typing. Afaics, Clojure's Epochal concept is essentially just immutability and making immutability more efficient with persistent data structures that only duplicate changed data. Persistent data structures share (between threads) those items in the data structure which haven’t been modified. I suggested that each thread should inform other threads sharing the same data when a change is made to a private copy of the data. I wrote:
I have yet to read a coherent argument in favor, in a benefit vs. drawbacks analysis, of Algebraic Effects. I’m not claiming there isn’t one? Your prior attempts at explanation apparently haven’t stuck yet in my mind.
Reminding readers that Algebraic Effects are the free monad and it’s argued that they’re a duplication of what can instead be achieved with typeclasses in your colleague Oleg’s final tagless encoding. I quote:
So I suppose understanding that the free monad retains control-flow context might be the best hint to their justifiable utility of coding control-flow and interpretation as separate concerns, c.f. also, also, also, also and also.
We discussed that upthread with @Ichoran.
I’m still progressing on my ALP idea, but it’s not yet a holistically defensible design.
Agreed. And we’ve devised some ways to tackle the modularity #39 issue with typeclasses in light of Robert Harper’s criticism, “…the integers can be ordered in precisely one way (the usual ordering), but obviously there are many orderings (say, by divisibility) of interest…” Although I surmise you’re somewhat unsatisfied with violations of the total ordering required to ensure the abstract algebra. Clojure’s, Haskell’s and Go’s mistake of structural instead of nominal matching of functions to (typeclass) interfaces amplifies Rich Hickey’s criticism of an explosion of interfaces and boilerplate with types.
Perfection can be the enemy of good.
Philosophically I agree. I will need to think about implications of this specific design suggestion. |
I responded on the thread Dotty becomes Scala 3:
|
Very interesting: The corresponding blog is here. |
Attempting to paradigm-shift our discussion about resource cleanup and incidentally RAII. In a purposely (re-)designed operating system (OS) I can’t think of any reason for non-memory “resource starvation” in terms of forgetting to timely release the handle to the resource? The entire filesystem could be virtual memory mapped or at least there’s a nearly unbounded number of 64-bit file handles. TCP/IP streams should be limited only by data structures allocated in memory so any resource limitation is synonymous with memory resource starvation.

There can be resource starvation for example reading too infrequently from Internet stream buffers thus causing the TCP/IP connection to go stale or timeout (which arises, as we had discussed in years past, for example in the tradeoff between throughput versus latency in concurrency), or forgetting to release exclusive access to a shared resource such as exclusive write. But the first has nothing to do with forgetting to release access to the resource and the second exists within the broader scope problem of deadlocks and livelocks. For example perhaps the only way to guarantee there will never be file write access deadlocks and livelocks is to allow, for its entire lifetime, file exclusive write access only for an owner application and allow other readers unfettered access to the file. Owner batch file writes could be employed so that readers never access partially written inconsistent data (yet readers would never wait; they would just read the prior version of the data while the batch is not yet completed), although in theory this could create an obscure livelock wherein the reader needs the batch update to interact (even indirectly) with a dependency for the writer’s completion of the batch write.

One could imagine resource starvation due to forgetting to release for example the camera of the mobile device, yet presumably the OS can give the user a manual override for such a buggy application given it is a resource the user interacts directly with. Thus to the extent that AMM (e.g. via a tracing GC such as mark-and-sweep) is acceptable for automated collection of unused memory resources then it should be acceptable for collecting unused access to all types of resources?

Not closing a TCP/IP stream may keep resources on the other side of the connection tied up until the connection times out, although typically one should either be employing long-lived connections or connections set to close automatically after the response. And closing a connection shouldn’t be conflated, as we ostensibly did upthread, with release of the resource handle, where we can’t be sure they can occur simultaneously. I would return to my original upthread point that neither ARC-based RAII nor Rust lifetimes will always ensure timely release due to the potential for insidious cyclical or dead references, although Rust apparently minimizes the potential for cyclical references at the cost of making some dynamic lifetime management very onerous or implausible. Thus the programmer won’t be able to rely on those in all cases without carefully studying the semantics of the source code, which is thus perhaps not arguably better than explicitly coding the close of the connection, with the exception that if there was an explicit lifetime destructor in Rust then the programmer could assert his precise timing (thus correctly conflating lifetime and for example close of a connection) which Rust could presumably check for use-after-free safety.
The resource cleanup drama is a red herring where there’s no starvation due to untimely release of the resource handle. In the (probably rare?) cases where there needs to be timely close of a connection which can be tied to resource handle lifetime, then Rust may offer (with nested blocks, or presumably improved by adding a lifetime destructor) the ability to prove use-after-free safety assumptions about the explicit timing. But just relying on RAII or lifetimes without careful study isn’t going to be safe from resource starvation in said cases where timely release is paramount. @sighoya I never heard back from you again in email after a year. I hope you have survived. |
I wrote:
In addition to the inability of Rust lifetimes to model some semantics correctly, Rust enables one to prove static lifetimes only for a subset of semantics. For this subset it improves performance, but when your program requires semantics that Rust is unable to model then Rust interoperates poorly given the hoops one has to jump through to accomplish said semantics. The Pareto principle applies to performance optimization. Only a small fraction of the source code needs to be fully optimized for maximum performance. It is overkill to apply Rust’s onerous lifetimes to all the source code.

Imagine a blockchain application that groups conflicting transactions based on the UTXO records they will invalidate, replace or write to. Each group can run in a separate concurrent thread and each can employ a queue so that there’s no race condition among conflicting transactions because conflicting transactions don’t ever run in concurrent threads. This invariant is only enforced dynamically at runtime (see the sketch after this post). Ditto a smart contract for an exchange that has to sort bids and asks by trading pairs so that conflicting transactions for each trading pair are queued. The only other way to speed this up would be pipelining of the said sequential queue. I can’t envision a way to structure Rust’s exclusive write access moves and borrowing to make any compile-time check of the said runtime invariant.

One way to structure such code would be to put immutable records in a (e.g. list, tree, etc) data structure with statically checked exclusive write access on the references (i.e. pointers) to immutable records which are the leaves (aka contained elements or items) of said data structure. The owner of the statically checked write access can remove immutable records and queue them as aforementioned. But given that (borrowing) read access (i.e. to immutable records) isn’t exclusive, this doesn’t provide any statically checked guarantee that the owner can’t queue them incoherently in a conflict that would result in a race condition. Alternatively (the records need not be immutable and) the exclusive write could somehow be moved to the thread that will queue and operate on them and their associated transaction. But how does the function accepting the move return immediately so that queuing of other transactions can proceed concurrently while also enabling the said function to move the exclusive write access back to the original caller when the said operation is complete? Thus it seems that Rust’s conflation of lifetimes with exclusive write access complicates matters?

Whereas, Pony’s reference capabilities don’t track lifetimes and thus can be freely passed around independent of tracking the function (stack) call hierarchies involved with lifetimes, which solves my question in the prior paragraph. Lifetime tracking as an efficiency improvement for AMM and to prove RAII lifetimes could still be useful and afaics could be unconflated from access permissions such as exclusive write.

Apparently the reason that Rust requires littering the code with lifetime annotations is because otherwise the lifetime signature of the function could change depending on the lifetimes of the caller. The Rust docs say:
For example:
[…]
Thus it seems it might be possible for the compiler to infer lifetimes if we remove the constraint that a function’s lifetime signature must not change for different callers? https://vlang.io/ says:
Note as of January 15, 2021, Vlang has Go-like coroutines and channels. And it compiles to C. Mainly what I seem to want to add to Vlang (other than perhaps a different syntax, which is not really a big deal to implement a transpiler parser for) is Pony’s concurrency model via reference capabilities typing. This is to be sure we don’t have race condition bugs in concurrent code.

For the neophyte readers, a race condition is where two or more simultaneous threads of execution will overwrite each other’s scratch pads. I presume you all know that modern microprocessors have multiple cores so they can run multiple program slices simultaneously. Another reason that concurrency is introduced into programs is that for example the program is waiting on some resource(s) that is/are being fetched over the Internet so it will store that memory scratch pad for the task that is waiting, and go work on something else in the interim. We want to prevent these concurrent activities from corrupting each other. And not only do we want to think we prevented it, we ideally want the compiler to check to make sure we didn’t have any insidious mistakes.

Note there are multiple ways to address concurrency safety. For example the Clojure programming language instead employs persistent data structures, which are immutable except that one can efficiently create a new copy that is mutated without needing to copy the entire data set. Those data structures efficiently keep track of changes. Immutability avoids the potential for any tasks to share a mutable reference to the same scratch pads. |
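A minimal Rust sketch of the per-group queuing idea from the earlier paragraph (the `Tx` type, keys, and worker count are invented for illustration): transactions that could conflict share a key, hash to the same queue, and therefore run sequentially, while unrelated groups run concurrently; the invariant is enforced only at runtime, as noted above:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};
use std::sync::mpsc;
use std::thread;

// Hypothetical transaction: `key` identifies the UTXO / trading pair it touches.
#[derive(Debug)]
struct Tx {
    key: String,
    payload: u64,
}

fn main() {
    const WORKERS: usize = 4;

    // One queue per worker; transactions that could conflict share a key and
    // therefore always land on the same queue, so they run sequentially, while
    // unrelated transactions run concurrently on other workers.
    let mut senders = Vec::new();
    let mut handles = Vec::new();
    for id in 0..WORKERS {
        let (tx, rx) = mpsc::channel::<Tx>();
        senders.push(tx);
        handles.push(thread::spawn(move || {
            for t in rx {
                // no two conflicting transactions are ever in flight at once
                println!("worker {id} applying {:?}", t);
            }
        }));
    }

    let txs = vec![
        Tx { key: "utxo-1".into(), payload: 10 },
        Tx { key: "utxo-2".into(), payload: 20 },
        Tx { key: "utxo-1".into(), payload: 30 }, // conflicts with the first
    ];
    for t in txs {
        let mut h = DefaultHasher::new();
        t.key.hash(&mut h);
        let slot = (h.finish() as usize) % WORKERS;
        senders[slot].send(t).unwrap();
    }

    drop(senders); // close the channels so the workers terminate
    for h in handles {
        h.join().unwrap();
    }
}
```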
@Ichoran since I don’t want to put this argument with you in the Scala Contributors user group, I will continue the discussion here. I can’t believe the intransigence and disingenuous lies that some people make in an attempt to hamstring Odersky and prevent him from experimenting with ways to possibly elevate Scala from a obscure programming language that has nearly died on the vine. And you’re one of the prime transgressors there, lying about your use case of Scala (which you had previously shared with us here in this thread) by pretending you will be harmed if others will program in Python’s braceless style (while also continuing your underhanded vendetta against me personally). You do not even like typeclasses, so you are just holding Scala back from ever being anything at all significant. Scala will never be a better Java than Kotlin is. Scala has to make a different niche based on its strengths. You ought to just GTFO of Scala and go use Kotlin and stop being a thorn in the side of those who actually have some vision for how to make a popular programming language. Also you were totally fucking wrong about everything about the Certificate Of Vassal IDentity scam. You’re a smart idiot, obstructionist of truth, wealth and prosperity. Yeah I hate you with a passion, you dickhead loser, useless blob of protoplasm. |
It's going to be a one-sided "discussion". There isn't much I can offer if you can't or won't distinguish your misunderstandings from other people lying. Feel free to rant, though, if it's cathartic. |
@shelby3 Just want to remind you to keep to programming language discussions here to keep the noise down. Regarding syntax, one option is to regard the language abstract syntax tree as the fixed-point, which you might store as JSON or XML or some such thing that can store tree formatted data. Then users can choose to render the tree according to house style rules when they view things. I think this is a good solution to the religious wars that start about the exact coding style that should be checked in repositories. Should there be one space or two, tabs or spaces etc... Store a machine readable AST directly, and leave it to the IDE to render. In this way each developer can see all the code in the repository in their preferred style. |
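A minimal Rust sketch of the store-the-AST idea (the `Stmt` type and both renderers are invented for illustration): the same ground-truth tree is rendered in a braced style or an indentation-based style according to the reader's preference:

```rust
// A toy AST for a single construct, rendered in two "house styles" from the
// same ground-truth tree.
enum Stmt {
    Call(String),
    If { cond: String, body: Vec<Stmt> },
}

fn render_braces(s: &Stmt, indent: usize) -> String {
    let pad = "  ".repeat(indent);
    match s {
        Stmt::Call(f) => format!("{pad}{f}()\n"),
        Stmt::If { cond, body } => {
            let inner: String = body.iter().map(|b| render_braces(b, indent + 1)).collect();
            format!("{pad}if ({cond}) {{\n{inner}{pad}}}\n")
        }
    }
}

fn render_indented(s: &Stmt, indent: usize) -> String {
    let pad = "   ".repeat(indent); // three-space indentation style
    match s {
        Stmt::Call(f) => format!("{pad}{f}()\n"),
        Stmt::If { cond, body } => {
            let inner: String = body.iter().map(|b| render_indented(b, indent + 1)).collect();
            format!("{pad}if {cond} then\n{inner}")
        }
    }
}

fn main() {
    let ast = Stmt::If {
        cond: "ready".into(),
        body: vec![Stmt::Call("start".into())],
    };
    // Same stored tree, two renderings chosen by the reader's IDE settings.
    print!("{}", render_braces(&ast, 0));
    print!("{}", render_indented(&ast, 0));
}
```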
If you are referring to COVID, you are on the side of the lies. And I have all the proof accumulated in a massive trove. I am going to nail your sophistry ass to the wall. Additionally you do not even know about the actual history of the U.S. since the Civil War. You are not a State National. You are some freak and you will be cast out into the technofeudalism corporate gulag where you belong dimwit. |
@keean - For reference, I think the AST-is-ground-truth idea (not "essentially" proposed elsewhere) is really interesting. I don't know if it would be practical, but I think it's an interesting idea especially for Scala since it already has an internal AST that is stored (TASTY). Could we form an adequate bijection between TASTY and editable forms? Probably not. But that doesn't sink the idea--you can then ask whether you could have a rich text AST that could be reduced to actual TASTY, and the rich text AST would be ground truth. Would that work? Dunno. And it wouldn't solve the problem of needing to read code pasted into discord and github and stuff. But it's an interesting intellectual contribution towards solving the how-do-you-edit-your-favorite-flavor-and-let-others-do-likewise problem (which, for the record, I never opposed...I just didn't think it solved the whole problem, so the other potential issues with language dialects could not be dismissed wholesale without addressing them).

The reason it's interesting, unlike the others, is that writing code is really just our way to express that we want a particular AST. Having the AST itself be the common form (even if not really readable on its own) is therefore a particularly pure way to handle dialects: in principle, one could consider any dialect for which there is a bijection with the AST. (The main issue I can foresee is that it would be difficult to allow ad-hoc indentation and spacing, like vertical alignment of common elements to reduce the chance of error. A rich text AST might be able to support this if all dialects could permit the same thing. But without this, historical code bases would be grandfathered in at best: you could have a surjection from them to the rich text AST, but you couldn't get back.)

Anyway, it's an interesting idea that I hadn't heard before that's a step above the usual "run the code formatter" / "write a syntax translator" idea. |
@Ichoran would you elaborate (example?) on @morgen-peschke’s claims that there are subjective decisions to make about where to place braces in Scala where apparently, according to him (if I interpreted his statement correctly), they are optional and don’t change the semantics at all but change the reader’s interpretation of code? That is bizarre and seems like bad programming language design? (If you do not want to elaborate for me, you could do so in the context of a discussion with Keean) |
Apparently not according to @morgen-peschke’s claims. Apparently there are cases where the mere presence of braces in Scala even where their presence makes no difference to the AST, has some bearing on the reader’s interpretation, at least according to him. So don’t go trying to act so hoity-toity as if you had some highly intellectual reasoning in mind that had escaped anyone else here. You egotistical fucktard. |
EDIT: I almost forgot to drop a hint. What does chemistry have to do with chain of custody @Ichoran? That is one of many possible myopias of geeks who can’t think outside of a box. I don’t mind if @keean deletes all our recent exchange. Because I am obviously unloading on @Ichoran and I want him to know I think he is a despicable freak. I also fault @keean for believing all the technocracy lies about COVID and what not, but @keean gets a pass because he is open to debate. Also because I view @keean as a human being who is trying to understand the truth each person is attempting to convey or find (even if that person might be off course). @keean is more apt to try to help a person than to cut them down, if he can. And eventually @keean and I will have that debate. He may find some sophistry to continue on believing in his enslavement and that is fine maybe it is his destiny. But at least @keean will engage me respectfully when I am ready to do so. Whereas @Ichoran was cursed with a very high IQ so thus thinks he knows more than the person he is engaging with. @Ichoran would not respectfully engage in a challenge to his belief system. It is his UNDESERVED arrogance that is the huge turnoff. But he will be judged and he will reap what he has sown. That’s not my job to mete out his punishment. That is above my pay grade. He will not escape the truth nor punishment for the horror he has arrogantly help perpetrate and perpetuate on humanity. @Ichoran was gifted with the intellect and the domain knowledge to help humanity at its time of great need in 2020, but instead he decided to memebot the sophistry instead of using his intellect to actually find the flaws in the cherry picking and what not. And for that he has become the de facto enemy of humanity and complicit in crimes against humanity. Maybe he should read the Nuremberg Code and Crimes Against Humanity to be tried at the Hague when all this eventually comes to light. When he says lies, I probably know what he is referring to. I have been to that highly technical scammer website that purports to be refute all the arguments against the scam. What is so hilarious is you two guys subscribe to a criminal syndicate yet then @Ichoran is arrogant enough to think he is actually some intellectual. Come @Ichoran stop being so fucking ignorant of the world and actually learn something useful other than chemistry. |
Here we go again Scala doing their usual unnecessary complexity routine. For example, by not adopting the 3 spaces enforced rule, and instead trying to be too cute, you all are creating problems and complexity: https://contributors.scala-lang.org/t/feedback-sought-optional-braces/4702/34?u=shelby3 The above is presumably the attempt to automatically detect unintentional single-spaces. Very, very bad. What is Odersky doing? He always does this sort of stupid shit. He ruins a good innovation by trying to be too clever. https://contributors.scala-lang.org/t/feedback-sought-optional-braces/4702/35?u=shelby3 The hardcoded 3 spaces rule would be much more regular and easier for tool developers. Stop adding unnecessary complexity. I probably understand. Odersky is probably trying to be better than Python so he can justify adding this feature to Scala as BDFL to overcome the extreme resistance to the idea. Bad. Just do it and do it sanely. Or don’t do it. But don’t do it badly as a crutch for not wanting to be perceived as the dictator. I have no qualms about being a dictator when I need to be and I will explain it very matter-of-factly, as I did in the thread which the mod shut down. I do not speak out of both sides of my mouth like a weasel. I tell people straight what is. |
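For readers who haven't followed the proposal, the two styles under discussion look roughly like this (a trivial example of my own, not taken from the linked thread):

```scala
// Braced style (valid in both Scala 2 and Scala 3).
def classify(n: Int): String = {
  if (n < 0) {
    "negative"
  } else {
    "non-negative"
  }
}

// Scala 3 optional-braces style: indentation alone delimits the blocks,
// which is why the rules about how much indentation counts matter so much.
def classifyBraceless(n: Int): String =
  if n < 0 then
    "negative"
  else
    "non-negative"
```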
@shelby3 - I will discuss political philosophy and historical and current world events in an appropriate venue. This is not such a venue, even if it is less stringently not such a venue than the Scala forums. Pick an appropriate one and I will engage for an amount of time that I can afford (probably ~2000 words max). Time and place permitting, I'm always open to discussing important issues. |
@Ichoran, okay fine. Please realize I am not a total dunce. My mother has a rigorously tested 137 IQ and my father has a significantly higher IQ, so I am at least not retarded in any case. I have a somewhat Aspie profile but with a spike to maximum on neurotypical perception but very weak on communication on both sides of the NT-Autistic spectrum. Perhaps it is my weak (output) communication skills that deceives you about my intellect. |
I added this just for you guys:
|
Impossible because Scala screwed itself early on: https://contributors.scala-lang.org/t/make-fewerbraces-available-outside-snapshot-releases/5024/130 |
So this is what the world has come to. We really do need to enslave all of you. Just continue your support for the technocracy guys, it’s your destiny. Scala is really in deep shit if radical, leftist loons are steering the ship. Glad I realized it sooner than later. Archived: From Telegram:
Continued…
|
Don’t expect any communication from me for a couple of days, as I have my head in the programming sand attempting to accomplish something Herculean. (Some people pissed me off and that is a big mistake when I am healthy) I now have diarrhea with the Ivermectin but I interpret this as a good sign. My health seems to be improving. My concentration is unreal. I am coding as I did in the past. Didn’t sleep for 24 hours. No problems so far. 👍👍👍👍👍👍👍👍 |
Yuri Bezmenov (Leftists are useful IDIOTS) Have a moment to jot down what has transpired which I will do now, because that ostensibly (possibly radical) leftist @morgen-peschke ostensibly mistakenly thought that—by violating the Scala Contributors discussion group’s Code of Conduct, by linking to off-topic discussion here in this thread—he would malign my character in the eyes of others. I guess he didn’t realize that I am delighted if others know that I am angry at @Ichoran for his complicity in the COVID scam, which has been a massive crime against humanity. Why would I be ashamed of protecting humanity against criminal syndicates and their sycophants? But I wasn’t going to spam the Scala Contributors group with my longstanding issue with @Ichoran. So that juvenile twat did it for me, lol. The mods (presumably @sjrd who is also the lead programmer for Scala.js) hid all my responses to that freakazoid’s Code of Conduct violating activity but not the actual CoC violations of my protagonist (c.f. link above). So @sjrd who is ostensibly the sole mod for Scala Contributors (ya know the Scala is community is really tiny with only 0.7% market share up 0.2% recently due to Scala 3 compared to 1.4% for Go, 1.6% for Kotlin, 2.4% for TypeScript, 9% for JavasScript, 18% for Java and 28% for Python) might be breaking the law (will need to ask my attorney to look at this) by violating contractual language resulting in intentionally schemed libel (private companies have a lot of leeway but they’re not entirely above the law although I’m not an attorney). The legal stuff is not my purview, but I share the need for a programming language with the community-at-large (not including the leftists cretins will never have a safe haven for their disruptive activities in any project I manage so they better not even try) and the technology is my forte, so… This was after I had been graciously receptive to @morgen-peschke’s militaristic attitude against the new Scala feature to offer an optional braceless syntax. I was attempting to have a reasonable discussion with him about his claims that being autistic means he can’t read code without braces. I noted he could continue to code in braced style as Scala will offer both. I posited that maybe we could encourage the Scala team to offer libraries in both braced and braceless. I also raised the possibility of an automatic translation between the two styles, as well I suggested multiple colored vertical traces (to help distinguish them rapidly) in the IDE. I even pushed back against @Ichoran’s FUD about small snippets of braceless shared in Discord being hard-to-read as even he agrees small snippets are often easier to read in the braceless style. But no, @morgen-peschke would have none of my attempts to find reasonable compromise (and even I am not the one forcing the new feature down his throat as the Scala BDFL Odersky is pushing it through). He insisted that massive exodus of large companies would ensue from the Scala ecosystem so at that juncture my bullshit and FUD detection alarm kicked on. I told him that many autistics have an inability to perceive reality correctly and that he was exaggerating. First of all, large companies don’t use Scala, instead Java or if another then much more likely Kotlin because it doesn’t have all the abstruse crap and corner cases that have been the bane of Scala of the years—the “kitchen sink” language which Odersky is about to fuck up again, lol. 
Note @sjrd hid that very important aforelinked post (which aims to head off bugs and unnecessary complexity Odersky is foolishly adding to the new feature), so I have screen captured it as follows: The mod (presumably @sjrd) deleted my post in which I was justifying the claims I started making about why Scala needs a better native compile target. Note the mod also closed that thread to prevent the points from being made. Clearly they’re trying to hide Scala's weaknesses from the users, as they must realize Scala is very near to extinction. And they made a big mistake because they have now motivated me to proceed to remove any reason for Scala to exist. More on that in the next paragraph. Anyway, so in my post which was removed I pointed out that Scala Native because it compiles to LLVM has no reasonable way to implement green user mode threads (e.g. cactus stacks) which are usually more performant than and preserve stack traces (for debugging, sane exception handling, etc) compared to coroutines. That’s because coroutines (e.g. But seeing what a clusterfuck Scala is now with @sjrd being the lead on Scala.js and Odersky running “kitchen sink” wild as usual, I really started to doubt the wisdom of depending on such an ecosystem. I investigated what eventually would be required for me to write a Go output target Scala plugin, and the plugin development is largely undocumented with much needed to be gleaned though printout dumps and trial and error. Worse yet I glanced at the Scala.js Scala plugin code, and it has registered to process a multiple phases (compiler stages) of the compiler. Presumably meaning it has to collect typing information after the typing phase, then classes information after another phase, etc.. Massive, undocumented complexity. I realized it was going to take me more time to master Scala plugins than it would for me to write my own compiler for my own programming language with an output target to Go and/or TypeScript! Besides why do I want to invest my time to learn some stupid compiler design created by cretins? Not motivating whatsoever! Although I would not not have to write a Go output target now if I wanted to start using Scala 3 now for the Scala.js output target, I would be investing a lot of coding inertia in a fucked up ecosystem which is teetering on extinction (just waiting for Odersky to get COVID then Scala could be toast, lol). World War 3 (involving nuclear war between NATO and Russia) is right on schedule to begin 2025ish, so that could also put Odersky out of commission. Also massive famines enveloping for 2023. Got to love these leftists who are intentionally creating this. 👏👏👏 Well done. Then on top of that the Scala compiler is slow as molasses. And there is still some cruft in the syntax and language features. So maybe it is time to just deprecate Scala. Take it’s most unique features of importance, compile them faster to Go and/or Typescript and remove the reason for anyone to use Scala. So I pulled out the grammar I had been working on since circa ~2019 and whittling it down now that I have much clearer design understanding as everything we discussed has settled over the ~2 year hiatus as well as fixing my decades long liver disease had always been a prerequisite to being productive again. Turns out 2015 research unequivocally shows that Ivermectin entirely cures chronic fatty liver disease. Just don’t ever ask any doctor because doctors don’t can’t get their head out of their arse anymore than @srjd can. 
And so it goes…and I have more than ample funds to hire whoever I might need, but for the moment I will make the initial push because I need to flex my coding talent after more than a decade of being too ill to do much of anything but wallow in bed. Warning to cretins. Don’t fuck with me when I am healthy. You don't know me. I am fiercely competitive. |
@shelby3 It would be great if we could try and not stray too far from the topic of programming languages. My general thoughts are that it is always hard to move the status quo. I think it is better to develop a new language rather than change an existing one. This is partly because it is hard to overcome momentum, but also because adding features not designed from the start leads to complex and unwieldy languages. I think opinionated languages that strongly encourage a certain coding style are better than kitchen-sink languages that try to do everything (poorly). |
Pack it up folks, Shelby has all but obliterated any reason for Scala to exist. By the end of the week, this new project, powered by his clarity and irresistibly friendly personality, will reign supreme. It will steal Scala's (if not all existing languages') market share once and for all. Flee, flee from the wrath to come! |
Hahaha. Thanks @swoogles for the inspiration. I'm already 200 lines into reviewing, checking, refining, condensing and recalibrating the 500 lines of EBNF grammar code I had written in 2019 and 2020. And remember my advice about wealth generation, don’t associate with poverty-striken, aspirational, leftist “power rangers” losers and their Ponzi schemes as they have a tendency self-immolate in their leftist holiness spirals. Stalin murdered most of them to stop the holiness spiral from razing Russia with unabated megadeath— which is the way all power vacuum holiness spirals end and roughly what awaits these clowns (this time it may be Bill Gates’ WHO pandemic treaty euthanizing them with “COVID” clot shots). Understand paradigmatically and methodically how their virtue signaling politics sustains them and also destroys them in the end. They’ve roosted themselves in every open source project of significance, e.g ousting Linus Torvalds and Eric S. Raymond. They may “think” (if a virtue signaling ganging up is considered rational thought) that gives them power and they are mighty offended when a BDFL such as Odersky overrules their “community power." Odersky is now wading in feces and Scala (unlike Linux and Python) is not on firm footing to begin with, so it won’t require much to tip it over the cliff, e.g. the loss or retirement of Odersky. I won’t be surprised if the leftists find a way to kick Odersky out for ramming through the braceless style. When a @morgen-peschke says that Scala will suffer immensely from ramming through the braceless style, he’s probably not referring to some organic failure. I won’t be surprised if the leftists organize some drama. Hand the keys to these privileged, undisciplined juveniles who’ve never been spanked in their life and they will run amok (Western societalcide is careening now with demoralization and destabilization that Yuri Bezmenov described and predicted). I don’t yet know @Ichoran’s stance on leftist holiness spirals. He keeps most of his opinions close to his chest (apparently being an introverted Myers-Brigg type), all I know is he was hoodwinked by the COVID scam at the outset and I wasn‘t (so one or two SD in extra IQ didn’t help him at all). I want to read his 2000 words rebuttal someday when we can agree on a proper forum to have a debate. Maybe LessWrong? I want to get all my ducks in a row first and published, so he has access to my trove of evidence so we can establish ground facts. @swoogles remember this maxim. Often the most success programming tools are built by those intensely motivated programmers who were building it only for themselves. When you’re building it for yourself, you’re building it with passion. Linus was building Linux for himself. I am building a compiler for myself because I really need it and I don’t care if nobody else in the entire world uses it. If I build something (assuming I succeed in completing it) and others find it useful, then so be it. I could use Scala, but the compiler is slow and will cost me a lot of time even if it is only a few seconds during each incremental recompile. Worse is that when I code the crypto ledger (not a block chain), I loathe to be running on Node.js with JavaScript or on the JVM. And I will also loathe attempting to learn scalac (now dotc) internals so that I could write the better server scaling compile target I would need then. Why should I invest and help a leftist community then they will one day oust me also with all my effort being vandalized by cretins? 
Lastly I am not building a complete compiler, but rather a transpiler to another compiler such as Go and/or TypeScript, which significantly reduces the landscape I need to master. I certainly don’t want to be rewriting Go at this juncture. Your comment seems to underestimate what significant wealth can be summoned to accomplish, although I will not try to hire anyone until I have at least a working skeleton proof-of-concept so I am knowledgeable enough about what’s needed. I need more experience first before going headstrong into expanding human resources. I had made the tentative decision to embrace Scala 3 because as you astutely point out that it is a lot of work to create a compiler. Why reinvent the wheel to needlessly create busy-work for myself. And I would prefer to add to something that is. The maximum of division-of-labor is virtuous. But unfortunately Scala is so discombobulated (I will not reiterate the numerous points of my prior post) that it’s really not a valid option for anyone suitably astute. It was 13 years ago when I wrote my response to the postulate that the complexity of the Scala 2.8 Collections Library was “the longest suicide note in history.” Notably even in 2009 I knew everything that would come to pass by now (except that a chronic illness would delay me for more than a decade). Essentially it’s still the same crap with Scala shooting itself in the feet with complexity and not focusing on making a tool without the corner cases that has significant adoption by developers and can actually help IT departments do something really well that they could not do as well with another tool. Note the extension methods in Kotlin was my idea. Yeah I was there helping Kotlin at the outset. Kotlin has been more successful than Scala. I just remembered we had encountered another one of those Scala developers @alexandru who also must have had an overflowing tampon the day he decided to interact with us in reaction to my refutation of every point in favor of Scala’s OOP in In Defense of OOFP. I guess he was involved in Typelevel, Monix, Cats. What is it with feminine “male” (probably lacking one or both of ACTN3 and ACE genes) programmers and cats? Do they realize cats can carry a disease that can cause jaundice, blindness and/or schizophrenia. (I have one of the R athletic genes not two as some Africans do apparently as an adaptation to malaria. ACTN3 R/X genotype.)
Agreed, it would be ideal. Unfortunately politics is a reality in this world. I don’t foresee why I would need to belabor the point after this response. And I am mixing in discussion about programming languages. For example, what is the threshold that could motivate someone to conclude that no existing programming language is a suitable choice? And specifically why Scala, which we have discussed often, has perhaps still not pulled itself out of the “kitchen sink” routine, etc.
You’ve stated this before and I think it helps to have that statement here again at this juncture. P.S. I’m pinging @jdegoes who introduced me to Scala in 2007 (or was it 2008?) when I was expressing frustration with HaXe’s limited capabilities in the HaXe discussion group. I know for the past several years (or at least until I last exchanged messages with him circa 2020) he had been suffering from some sort of ailment in his gut and digestive organs and I want to make him aware of research that Ivermectin has tumor shrinking and cancer inhibition abilities as well fully cured fatty liver disease in mice. I started to have leg edema and chronic fatigue circa 2008 which worsened eventually into a near-death perforated ulcer and ER hospitalization episode in 2012 followed by declining chronic health. Diagnosed with dengue then tuberculosis in 2017. The 5000mg daily antibiotics for 6 months to treat the TB was highly liver toxic, apparently triggering my endocrine system into some permanent state of metabolic disease which kept my liver in a worsening fatty liver state ever since. That is why I had not been able to work effectively since ~2010. I began the Ivermectin treatment several days ago and I am very encouraged. Chronic disease is very, very difficult to handle from a mental health perspective (readers can lookup some research on this if interested). Of course it is no longer possible to buy the human form of Ivermectin anymore without a prescription from the criminal syndicate medical system, because some people were attempting to use as a treatment for the imaginary disease COVID, but the equine form is ostensibly the exact same active ingredient, just in an injectable form at 1% concentration (which I dose orally). John might remember his 2016 blog post Twitter’s Fucked—which it may be still—and my comment on it then reflects what I wrote two days ago above. |
I think I finally arrived at a good solution to the modularity problem with total orders, abstract algebra and type classes. So to repeat the background issue: while, for example, the concepts of sort and semigroup/monoid are total orders, their specifics are not. For example, there’s ascending or descending sort order, and monoids can be additive or multiplicative. I mentioned to you in a private message carving out the natural total order portion of algorithms into an abstract algebra devoid of the complex modularity issues that plague type classes. You mentioned the need to be able to apply the partial order remnants modularly so as to not discard one of the key benefits of type classes, namely that the injection of the algorithms is orthogonal to the function hierarchies—a benefit for refactoring and such. We can organize the so-called remnants as separate abstract algebras, e.g. ascending and descending being distinct total orders, which are forked off from the overall abstract algebra of sort order. Then an unopinionated usage will Tada! Why didn’t we think of that before. |
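To illustrate one possible reading of that (my own sketch; the wrapper names and the exact carving are assumptions, not necessarily the formulation intended above): each “remnant” choice becomes its own named algebra via a distinct wrapper type, so no element type has a single privileged instance and the injection of the algorithm stays orthogonal to the data.

```scala
// Distinct wrapper algebras carry the "which monoid / which order" choice.
trait Monoid[A]:
  def empty: A
  def combine(x: A, y: A): A

final case class Sum(value: Int)
final case class Prod(value: Int)

given Monoid[Sum] with
  def empty: Sum = Sum(0)
  def combine(x: Sum, y: Sum): Sum = Sum(x.value + y.value)

given Monoid[Prod] with
  def empty: Prod = Prod(1)
  def combine(x: Prod, y: Prod): Prod = Prod(x.value * y.value)

// The generic algorithm never commits to a particular monoid.
def fold[A](xs: List[A])(using m: Monoid[A]): A =
  xs.foldLeft(m.empty)(m.combine)

// The same idea for sort direction: ascending and descending are distinct
// named total orders, forked off the general notion of sort order.
final case class Asc(value: Int)
final case class Desc(value: Int)
given Ordering[Asc] = Ordering.by[Asc, Int](_.value)
given Ordering[Desc] = Ordering.by[Desc, Int](_.value).reverse

@main def demoAlgebras(): Unit =
  val ns = List(2, 3, 4)
  println(fold(ns.map(Sum(_))).value)                           // 9 (additive reading)
  println(fold(ns.map(Prod(_))).value)                          // 24 (multiplicative reading)
  println(List(Asc(3), Asc(1), Asc(2)).sorted.map(_.value))     // List(1, 2, 3)
  println(List(Desc(3), Desc(1), Desc(2)).sorted.map(_.value))  // List(3, 2, 1)
```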
It’s instructive to elaborate these points. I remember being bewildered when first joined Fractal Design Corporation in 1993 for the Painter X2 project with an $80,000 salary and $1 million in stock options,[1] (that’s $1m and $13m properly inflation adjusted so readers will understand how impoverished and enslaved you are!) Mark Zimmer (eventually personally recruited by Steve Jobs) and Tom Hedges (could memorize an entire page of a phone book) explained to me that the reason they wrote all their own code and refused to rely on third party libraries is because they wanted to control the outcome (i.e. didn’t want to be screwed over). Being the young aspirant “power ranger” idiot that I was at that time, I was recalcitrant as are these young idiots on the Scala team who don’t yet understand how the world works (or think they can change it, lol). Eric S. Raymond’s Linus Law “given enough eyeballs, all bugs are shallow” doesn’t apply as well when a group of insiders have managed to write a giganormous body of highly undocumented, poorly commented, highly complex code.[2] Thus the learning curve attrition is very high. Who would make the investment to overcome that hurdle if not So if you want to know the real reason that (astute, sane) companies don’t embrace Scala, the above is easily is as important as the single-minded FUD that @morgen-peschke was attempting to parlay. At least with Kotlin corporations know they’re relying on a sane, profitable corporation beholden to its paying customers with sane leaders with a history of sanity, profitability and a paying customer base— unlike Scala history of discombobulated insanity with self-important, egotistical, entitled, free-riding leftists. It’s blatantly obvious that free-rider @morgen-peschke was trying to construct a strawman of FUD to elevate his claimed (probably exaggerated) mental handicap to importance by attempting to scare paying customers—these sort of dysfunctional open source projects (where one lone, self-important, leftist, free-rider loon can run roughshod over community discussion) should scare the heebie jeebies out of corporations. As much as I wanted those few unique features in Scala, it just isn’t worth it especially when it only takes one day in their discussion groups to be attacked with the insanity. Scala’s community doesn’t understand what professionalism means, although they may think their virtue signaling implementation of their CoC is some form of professionalism that is yet another example of their aspirant “power rangers” delusion and the common psychosis. Let this post serve as the notice of the day Scala finally died. We hardly knew ye. [1] Eventually became Corel Painter via acquisition. [2] Even perusing what should be the simplest code of the compiler, the |
I will continue the discussion here, unless or until @keean can get GitHub to restore the thread. Luckily I still have a copy of the deleted #35 thread loaded, so I can copy and paste from it. If the original thread is restored,
I will copy my posts from here back to that thread and delete this thread. I will give you an opportunity to copy your replies also before deleting the thread. Perhaps @keean is the only person who can delete an issues thread. [Or perhaps we’ll just link the old thread to this new one, because that thread was getting too long to load anyway.]@keean
I find the term and acronym GC to be more recognizable than AMM, which is why I continue to use it. And nearly no one knows the acronym MS. And you continue to use RC, which is widely known as radio-control, and not ARC, which is the correct term (employed by the Rust docs for example) for automatic reference counting, because there is such a thing as manual reference counting.
And you completely failed to respond to my holistic reasons why doing so (for the multiple strong references case) would be problematic. And I grow extremely tired of this discussion with you because I have to repeat myself over and over while you repeat the same rebuttal which ignores my holistic points.
So I am just going to let the argument with you stop now on this point. I do not concede that you’re correct. And I do not think you are correct. I already explained why, and I will not reply again when you reply again totally ignoring and not addressing my holistic point. It is a total waste of my time to go around and around in a circle with you, making no progress on getting you to respond in substance.
Which, as we have already agreed, is being explicit, and which is what I wrote yesterday. Yet you still side-step my holistic points. Yes, you can do the same things with ARC that we can do without ARC in the weak references case, but that does not address my point that in that case ARC is not better, and is arguably worse because it conflates separate concerns.
I guess you did not assimilate what I wrote about breaking encapsulation. But never mind. I don’t wish to expend more verbiage trying to get you to respond to something you wish to ignore and skip over without addressing.
Also I am starting to contemplate that the encapsulation problem is fundamental and we need to paradigm-shift this entire line of discussion to a new design for handling resources (but that will come in a future comment post).
You’ve got your model inverted from my model. I noted that in one of my prior responses.
In my model the “strong finalizer reference” controls when the resource is closed, and the weak references are slaves. When you tried to fix my example, you did not actually fix it, because you used a weak reference, but I wanted the resource to actually be released before the asynchronous event. Thus I wanted to close the strong reference. The static linear type system can ensure there is no access to the strong reference after calling the finalizer. Thus it can also work without ARC. (However, as I alluded to above, I do not want to recreate Rust, and thus I am thinking about paradigm-shifting this entire line of discussion in a future post.)
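For concreteness, a minimal Scala sketch of the shape of that model (Scala cannot enforce the linear no-use-after-close rule statically, so the runtime check and all the names here are merely illustrative of the intent):

```scala
import java.io.RandomAccessFile

// The strong "finalizer" reference: the one value whose close() ends the
// resource's lifetime. Under the intended linear type discipline, any use of
// this strong reference after close() would be rejected at compile time.
final class StrongHandle(path: String):
  private val file = new RandomAccessFile(path, "r")
  private var open = true

  def close(): Unit =
    open = false
    file.close()

  def weak: WeakHandle = new WeakHandle

  // Weak references are "slaves": they may use the resource only while the
  // strong handle is open, and they never control when it is released.
  final class WeakHandle:
    def readByte(): Int =
      if open then file.read() else sys.error("resource already released")

@main def demoHandles(): Unit =
  val strong = StrongHandle("/etc/hostname")  // any readable file; the path is illustrative
  val weakRef = strong.weak
  println(weakRef.readByte())
  strong.close()          // explicit release, e.g. before handing off to asynchronous code;
                          // under the linear discipline, `strong` may not be used again
  // weakRef.readByte()   // any surviving weak reference now fails rather than resurrecting the resource
```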
Nope as explained above.
By your definition, but I had a different model in mind.
I explained it above. Open your mind.
To repeat, you do not seem to understand what I explained.
Okay, but it does not rebut any of the context of my point, which I will quote again as follows:
I was referring to the implicit release of the resource upon the destruction of the strong reference in the `Map` of your upthread example code, which is conflated with your code’s explicit removal from the `Map` (regardless of how you achieve it, by assigning `null` or by explicit move semantics which assigns a `null` and destructs on stack frame scope). You’re ostensibly not fully assimilating my conceptualization of the holistic issue, including how semantic conflation leads to obfuscation of intent and even encourages the programmer to ignore any intent at all (leading to unreadable code and worse, such as very-difficult-to-refactor code with insidious resource starvation).
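To make the conflation point concrete, here is a hypothetical Scala sketch (not the lost upthread code; the map contents and file path are invented for illustration) contrasting the implicit-release reading with explicit intent:

```scala
import java.io.RandomAccessFile
import scala.collection.mutable

@main def demoConflation(): Unit =
  val handles = mutable.Map("cfg" -> new RandomAccessFile("/etc/hostname", "r"))

  // Under an ARC + RAII reading, dropping the last strong reference (e.g. by
  // removing the map entry) would also close the file as a side effect, so a
  // reader of the following line could not tell whether releasing the
  // resource was intended or merely incidental:
  // handles -= "cfg"

  // With explicit intent, removal from the map and release of the resource
  // are two separate, visible decisions:
  handles.remove("cfg").foreach(_.close())
```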