
[Discussion] Data conversion management #143

Open
cottinisimone opened this issue Feb 17, 2023 · 9 comments

Comments

@cottinisimone
Contributor

Esrs data conversion management

Topic

This is a discussion thread to decide which route this library should follow to manage data conversion.

State of the art

Currently, esrs implements the weak-schema technique. With this technique the library handles missing or superfluous attributes gracefully: the event shape is updated manually, adding Option fields or removing existing fields from the event. Since this is an optimistic approach, retro-compatibility issues sometimes surface while loading older events from the store.
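
As a concrete illustration of the weak-schema approach (the struct and helper below are hypothetical, not esrs code), an attribute added after events were already stored is modelled as an Option, so older payloads still load:

```rust
// Hypothetical event under the weak-schema technique: `email` was added
// after v1 payloads were already in the store, so it is an Option and
// simply comes back as None for older events.
#[derive(Debug, PartialEq)]
struct UserCreated {
    user_id: u64,
    email: Option<String>, // added later; missing in old payloads
}

// Std-only stand-in for serde_json deserialization, just for illustration:
// extracts the value of `"key":` from a flat JSON object.
fn get_field<'a>(json: &'a str, key: &str) -> Option<&'a str> {
    let pat = format!("\"{key}\":");
    let start = json.find(&pat)? + pat.len();
    let rest = json[start..].trim_start();
    let end = rest.find(|c: char| c == ',' || c == '}')?;
    Some(rest[..end].trim().trim_matches('"'))
}

fn deserialize_user_created(json: &str) -> Option<UserCreated> {
    Some(UserCreated {
        user_id: get_field(json, "user_id")?.parse().ok()?,
        // A missing field is tolerated: this is the "weak" part of the schema.
        email: get_field(json, "email").map(str::to_owned),
    })
}
```

With serde this is usually just an `Option<T>` field (or `#[serde(default)]`); the point is that old and new payloads deserialize into the same struct.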

Techniques

Event versioning/multiple versions

In this technique multiple versions of an event type are supported throughout the application. The event structure is extended with a version number. This version number can be read by all the event listeners, and they have to contain knowledge of the different versions in order to support them. In this technique the event store remains intact as old versions are never transformed. There are no extra write operations needed to convert the store.

Implementation

A macro, placed on top of an event, that every time that piece of code is compiled optionally inserts the new event version into a local schema registry (a file?) and checks the event's retro-compatibility.
The con of this approach is that it requires a macro attribute to ignore a specific field (for example, if the event store has been updated manually), or asking the user to manually fix the local schema registry file.
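
For comparison, the multiple-versions technique itself, regardless of how the macro/registry would be implemented, amounts to something like this hypothetical sketch, where every listener keeps knowledge of every shape:

```rust
// Hypothetical sketch of the multiple-versions technique (not esrs API):
// the stored event carries its version, and listeners must know every shape.
enum StoredUserCreated {
    V1 { user_id: u64 },
    V2 { user_id: u64, email: Option<String> },
}

// Every listener has to match on all versions it may receive.
fn on_user_created(event: &StoredUserCreated) -> String {
    match event {
        StoredUserCreated::V1 { user_id } => {
            format!("user {user_id} created, no email on record")
        }
        StoredUserCreated::V2 { user_id, email: Some(email) } => {
            format!("user {user_id} created, email {email}")
        }
        StoredUserCreated::V2 { user_id, email: None } => {
            format!("user {user_id} created, no email on record")
        }
    }
}
```

The store stays untouched, but every consumer pays the cost of the `match` growing with each new version.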

Upcasting

Upcasting centralizes the update knowledge in an upcaster: a component that transforms an event before offering it to the application. The difference from the multiple-versions technique is that the event listeners are not aware of the different versions of events. Because the upcaster transforms the event, the listeners only need to support the latest version.

Implementation

Create a new trait, Upcaster, that the event must implement, with two functions the user must provide, e.g. upcast and actual_version. Inside the upcast function the user would manually deserialize the JSON to get the version and the fields needed to build the latest event version.
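
A minimal sketch of what that trait could look like (upcast and actual_version are the names proposed above; the payload format and parsing are illustrative stand-ins for serde_json):

```rust
// Hypothetical latest event shape.
#[derive(Debug, PartialEq)]
struct UserCreated {
    user_id: u64,
    email: Option<String>,
}

trait Upcaster: Sized {
    /// The version this event type is currently written as.
    fn actual_version() -> u32;
    /// Rebuild the latest shape from a raw payload plus its stored version.
    fn upcast(raw: &str, stored_version: u32) -> Result<Self, String>;
}

// Stand-in payload format for the sketch: "key=value;key=value".
fn field<'a>(raw: &'a str, key: &str) -> Option<&'a str> {
    raw.split(';')
        .find_map(|kv| kv.strip_prefix(key).and_then(|r| r.strip_prefix('=')))
}

impl Upcaster for UserCreated {
    fn actual_version() -> u32 {
        2
    }

    fn upcast(raw: &str, stored_version: u32) -> Result<Self, String> {
        let user_id = field(raw, "user_id")
            .and_then(|v| v.parse().ok())
            .ok_or_else(|| format!("missing user_id in: {raw}"))?;
        match stored_version {
            // v1 had no email field: default it when loading old events.
            1 => Ok(UserCreated { user_id, email: None }),
            2 => Ok(UserCreated { user_id, email: field(raw, "email").map(str::to_owned) }),
            v => Err(format!("unknown version {v}")),
        }
    }
}
```

The store would read the version column (or a version field inside the payload) and call `upcast` before handing the event to any listener.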

Others

There are three other, more exotic techniques to mention:

  • Lazy transformation: also uses an upcaster to transform every event before offering it to the application, but the result of the transformation is also stored in the event store.
  • In-place transformation: transforms every event and updates the existing event in the store with the transformed version.
  • Copy and transformation: copies and transforms every event into a new store.

The biggest downside of these three techniques is that they all perform database updates while reading events, breaking the "events are immutable" dictum.

Moreover, lazy transformation and in-place transformation are not reliable: since they change the event store permanently, they make backups mandatory.

And ofc there's the "leave it as it is" option :).

@fusillicode
Contributor

fusillicode commented Feb 17, 2023

My "gut feeling" vote goes to... Event versioning/multiple versions 😬

P.S: https://outcrawl.com/event-versioning-rust

@heartlabs

heartlabs commented Feb 17, 2023

Moreover lazy transformation and in place transformation are not reliable being that changing the event store permanently makes it mandatory to make backups.

Is this because there could be a bug in the upcaster that could create invalid events (e.g. an invalid JSON) or a bug in the code that updates the db row that could corrupt the data (e.g. accidentally deletes a property)? Or would there be any other examples that would make a backup mandatory?

@heartlabs

Also, given that our data is really valuable, shouldn't we back up our databases anyway? Data corruption during the update of an event version is just one of many ways we could lose the data in a database.

@angelo-rendina-prima
Contributor

What about separating between in-memory domain Events and persistence concerns? In short,

  • the Aggregate only deals with in-memory types, and is not concerned at all with how they are saved/loaded
  • the EventStore deals with how to save/load from/into intermediate (call them) "I/O types"
  • you need to explicitly define the many-to-one conversion from I/O to in-memory

Reasons:

  • separation of concerns (e.g. at the moment the in-memory Event needs to declare ser/de logic as well, even though that's not the domain's business)
  • event versioning (or conversion management or whatever) really only applies when you are changing the event shape in a way that does not impact the business logic - e.g. the "new" events have an optional email field, and the business logic just says "if user has provided their email do something, otherwise do nothing": in this case you'd change the in-memory model to have an email: Option<_> field and then write the necessary "conversion management" logic at the persistence level (which could be any of the ones proposed above - versioning, upcasting, defaulting, w/e)
  • if the business logic has actually changed, then you would define a completely new in-memory type that can only be obtained by a proper command -> event flow, and the "old" event is just not compatible with this - e.g. the new email field is mandatory, and the business logics assumes that this data is indeed provided: there just is no way to upcast/transform old events into the new ones (no sensible default exists!), and you require an actual AmendWithEmail command at the domain level to navigate between the two

Having an actual separation between the two layers would make it explicit when you need to simply "upcast/version" the writable data formats into the same in-memory type, vs when you have to actually think about the business logic and just add a new domain event type.
This has bitten us already in the past, where by changing in-memory event types we inadvertently broke the db.

In conclusion, at the "persistence" layer, you can actually choose any of the strategies proposed in the thread - or even just leave that as an interface/trait in the EventStore that the user has to implement themselves (granted we can provide a default implementation for the PgStore as we see fit).
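
To make the proposed split concrete, here is a minimal hypothetical sketch (none of these types exist in esrs): the domain only ever sees DomainEvent, while the store round-trips IoEvent and owns the many-to-one conversion:

```rust
// In-memory domain event: exactly what the business logic needs.
#[derive(Debug, PartialEq)]
enum DomainEvent {
    UserCreated { user_id: u64, email: Option<String> },
}

// Persistence-side "I/O types": one variant per shape ever written.
enum IoEvent {
    UserCreatedV1 { user_id: u64 },
    UserCreatedV2 { user_id: u64, email: Option<String> },
}

// Many-to-one conversion, owned by the persistence layer: every stored
// shape maps into the single in-memory type.
impl From<IoEvent> for DomainEvent {
    fn from(io: IoEvent) -> Self {
        match io {
            IoEvent::UserCreatedV1 { user_id } => {
                DomainEvent::UserCreated { user_id, email: None }
            }
            IoEvent::UserCreatedV2 { user_id, email } => {
                DomainEvent::UserCreated { user_id, email }
            }
        }
    }
}

// Writes always use the latest I/O shape.
impl From<DomainEvent> for IoEvent {
    fn from(ev: DomainEvent) -> Self {
        match ev {
            DomainEvent::UserCreated { user_id, email } => {
                IoEvent::UserCreatedV2 { user_id, email }
            }
        }
    }
}
```

A shape change that doesn't touch business logic only adds an IoEvent variant and a `From` arm; a genuine domain change adds a new DomainEvent variant instead.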

@cottinisimone
Contributor Author

Is this because there could be a bug in the upcaster that could create invalid events (e.g. an invalid JSON) or a bug in the code that updates the db row that could corrupt the data (e.g. accidentally deletes a property)? Or would there be any other examples that would make a backup mandatory?

Yes, exactly. And maybe not even an outright bug.
Also, if you remove a field because you think you might not need it anymore, and later you find that the field was useful for something else (like data analysis), the older events still contain that information as long as you don't rewrite them, so the information is not totally lost. (Yeah, this is not the main reason. The main reason is bugs.)

Also given that our data is really valuable - shouldn't we anyway backup our databases? Data corruption during update of an event version is just one of many ways that we could lose the data in a database

We already do it (daily?). But still, even if it's just a single day, it is a full day lost

@oliverbrowneprima
Contributor

oliverbrowneprima commented Feb 20, 2023

Upcasting seems easier to implement from a library user's perspective: they only need to implement From to convert each previous event type into the one used by the application, and be able to hand esrs a function that consumes a &str and returns an Event, presumably by trying to deserialize the string into the latest Event type, then the latest-but-one, and so on, until it either constructs one or fails out.

We could write a macro to generate that code for them, something like:

let event_deserializer = event_des![serde_json::from_str::<Event>, serde_json::from_str::<EventV2>, serde_json::from_str::<EventV1>]

The alternative of asking library users to version their events seems like a lot of extra work (they'd need to write upcasting code anyway, or else have their application be able to handle every version of an event), for not much of a gain (being able to make business decisions based on event version seems like an edge case that is just as well handled by event timestamps). I think version tags have an unfortunate habit of leaking persistence-specific information and complexity into the rest of the codebase.

Trying to have a macro that auto-versions events seems like a lot of accidental complexity: managing the list of previously seen versions across developer machines, CI runs, etc. (if you check it in, how do you handle merge conflicts?) will be annoying, and non-obvious to users as to how it works. It also runs into problems with backwards-compatible and non-backwards-compatible changes: a user might, for example, add an Option field and not want to bump the event version when doing so. How do we let them? Do we auto-detect optional fields? Etc.

With respect to the proposed Upcaster trait, I think it's easier if the library only asks the user to provide a fn(&str) -> Result<Event, String> (or maybe it should take a serde_json::Value?), where Event is always the latest version, and inside that fn they handle parsing the event loaded from the DB by descending through their known event types, each time trying to call from_str and then their From impl that converts that particular event into the latest Event type. This way, all of the previous-event-version parsing code lives in a single function (plus those From impls), and all the rest of the code can ignore the fact that there are previously versioned events in the DB at all. As mentioned above, we could provide a macro to generate the parsing function, leaving them to only write the necessary From implementations, which is, I think, a fairly small ask.

For the sake of clarity, the above macro would expand into:

|data: &str| -> Result<Event, String> {
    if let Ok(e) = serde_json::from_str::<Event>(data) {
        return Ok(e);
    }
    if let Ok(e) = serde_json::from_str::<EventV2>(data) {
        return Ok(e.into());
    }
    if let Ok(e) = serde_json::from_str::<EventV1>(data) {
        return Ok(e.into());
    }
    Err(format!("Failed to parse event: {data}"))
}

Or something similar
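
To make the sketch concrete, the event_des! fallback could be written as a declarative macro along these lines (std-only stand-ins replace serde_json here, and all type names are illustrative):

```rust
// Hypothetical event versions: Event is the latest shape.
struct EventV1 {
    user_id: u64,
}

struct Event {
    user_id: u64,
    email: Option<String>,
}

impl From<EventV1> for Event {
    fn from(v1: EventV1) -> Self {
        Event { user_id: v1.user_id, email: None }
    }
}

// Try each parser in order (newest first); convert the first success into
// the latest Event via Into (the reflexive From<T> for T covers Event itself).
macro_rules! event_des {
    ($($parser:expr),+ $(,)?) => {
        |data: &str| -> Result<Event, String> {
            $(
                if let Ok(e) = $parser(data) {
                    return Ok(e.into());
                }
            )+
            Err(format!("Failed to parse event: {data}"))
        }
    };
}

// Stand-in parsers (a real version would use serde_json::from_str::<T>):
// latest payloads look like "id;email", v1 payloads are just "id".
fn parse_latest(data: &str) -> Result<Event, ()> {
    let (id, email) = data.split_once(';').ok_or(())?;
    Ok(Event { user_id: id.parse().map_err(|_| ())?, email: Some(email.to_string()) })
}

fn parse_v1(data: &str) -> Result<EventV1, ()> {
    Ok(EventV1 { user_id: data.parse().map_err(|_| ())? })
}
```

Usage would then be `let des = event_des![parse_latest, parse_v1];`, with old payloads falling through to the v1 parser and getting upcast via From.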

@cottinisimone
Contributor Author

cottinisimone commented Feb 20, 2023

Trying to have a macro that auto-versions Events seems like a lot of accidental complexity

I totally agree with this, especially when you say:

how do you handle merge conflicts?

But from the other perspective, this means that you need to keep all the event versions in your codebase. If you have an event (which should be an enum) with 5 variants, every time you change a field of one of the five variants you need to replicate the entire event enum. I think this will be a total mess in the long run.
On the other hand, versioning events with a macro allows you to have a single external file that can grow over time, keeps the versions for every single variant instead of replicating the entire enum, and leaves a single enum for the event in your code. Having different versions of the same event stored in your code will end up causing lots of merging problems anyway, imho.

@succoDiPompelmo

succoDiPompelmo commented Feb 20, 2023

I give my 2 cents, even though I did not participate in the internal discussion/study group, so do discard my words if I'm missing some context.

if the business logic has actually changed, then you would define a completely new in-memory type that can only be obtained by a proper command -> event flow, and the "old" event is just not compatible with this - e.g. the new email field is mandatory, and the business logics assumes that this data is indeed provided: there just is no way to upcast/transform old events into the new ones (no sensible default exists!), and you require an actual AmendWithEmail command at the domain level to navigate between the two

I agree with Angelo and support his points, except for the last one. If, for example, at some point the business decides the email field becomes required, the business should provide a sensible default even if one does not exist. Diving deeper: if we have a website where we registered customers without an email, and after some time the business decides to require the email, we cannot discard the fact that until now users have been able to exist without one. We need the business logic to handle this case.

In addition, I vote for upcasting. I prefer to hide the complexity of managing versions from the listener; also, if an event has reached V10, each new listener would otherwise have to handle 10 different versions.

@angelo-rendina-prima
Contributor

angelo-rendina-prima commented Feb 21, 2023

If for example at one point in time business decide the field email become required the business should provide a sensible default even if it not exists. [...] We need the business logic to handle this case.

What kind of default should this be? Some random email?
IMHO, if tomorrow Product decides that we also need to collect emails, THEY also have to figure out a way to obtain the missing data (get our agents to telephone the customers and ask for the email directly!) - and this operation is a domain one, not a simple "upcast" that the code has to deal with.
