
Optimistic Lock in Spring Boot.

 
Claude Moore
Bartender
Posts: 1359
39
I'm tackling a project, based on Spring Boot, where a number of entities (each of which can have other entities of different types connected to it) must be processed together in the same session. The list of entities to be processed is passed as a request to a REST API. In some rare cases, it may happen that a duplicate request for the same processing is executed. To handle cases like these, where the chances of having two transactions processing the same data set are low, the Optimistic Lock strategy is recommended. This pattern assumes that entities are versioned, i.e. they have a field annotated with @Version. I'm just wondering if someone has tried to apply something similar to the decorator pattern to this approach, i.e. using a "wrapper versioned entity" which wraps the actual, unversioned entity, exposing a @Version field and having an ID derived from the wrapped entity's ID.
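Roughly, I mean something like the sketch below. It's only a rough idea, not working production code: class and field names are invented, and I'm assuming javax.persistence (Spring Boot 2.x).

import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.Version;

// Sketch of the "wrapper versioned entity": the real entity stays unversioned,
// and this extra row, keyed on the wrapped entity's ID, carries the @Version
// column that JPA uses for the optimistic-lock check.
@Entity
public class VersionedLock {

    @Id
    private String wrappedEntityId;   // derived from the wrapped entity's ID, e.g. "ORDER-42"

    @Version
    private long version;             // bumped by JPA on every update; a stale update fails

    protected VersionedLock() { }     // no-arg constructor required by JPA

    public VersionedLock(String wrappedEntityId) {
        this.wrappedEntityId = wrappedEntityId;
    }

    public String getWrappedEntityId() { return wrappedEntityId; }
    public long getVersion() { return version; }
}

The idea is that a business transaction would load (or create) the VersionedLock row for the entity it is about to process and save it again at the end, so two concurrent transactions on the same entity collide on that row.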
Thanks in advance.
 
Tim Holloway
Saloon Keeper
Posts: 27868
196
Just to be clear, I'm taking it as a given that you're NOT attempting to do any sort of threading here, that these are, in fact, multiple concurrent client requests. Because although it may not be apparent, most of the business logic in a Spring Boot app runs under a Servlet or JSP and both of those are absolutely forbidden to spawn threads.

As far as database locking goes, my understanding of Optimistic Locking in JPA is that before an update is posted, the JPA infrastructure does a fetch of the affected record(s) and checks to make sure no one has modified them elsewhere, throwing an Exception if they have. Been there/done that.

My own database logic, which I've often mentioned elsewhere on the Ranch, has essentially three layers.

*  The Entity Layer, which (surprise!) defines the table and View Entity classes.
* The DAO Layer, which is primarily concerned with CRUD and finders for individual tables, or tightly-bound parent-child table pairs. Lately Spring Data's repository feature has supplanted a lot of the brute-force logic for me.
* The Service Layer. This layer works on a "working set" of related records (a directed graph). It detaches the graph from the JPA system before returning a fetched set to the higher levels of the app and it re-attaches (merges) when the business levels request a Service method to update the datastore. The Service layer delegates its dirty work to the DAOs.

Both the Service Layer and DAO layer classes are marked @Transactional, where the DAO transactions get adopted into the transaction of the service that uses them (I forget the exact name of that option, though).
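For what it's worth, the shape is roughly the one sketched below. This is only a sketch, not code from a real project: Order, OrderService and OrderDao are invented names, and the propagation option I couldn't remember is, I believe, Propagation.REQUIRED, which also happens to be Spring's default.

import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;

import org.springframework.stereotype.Repository;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Propagation;
import org.springframework.transaction.annotation.Transactional;

@Service
public class OrderService {              // Service layer: works on a detached graph

    private final OrderDao orderDao;

    public OrderService(OrderDao orderDao) {
        this.orderDao = orderDao;
    }

    @Transactional                       // opens a transaction, or joins an existing one
    public void saveWorkingSet(Order detachedGraph) {
        orderDao.merge(detachedGraph);   // DAO work runs inside the same transaction
    }
}

@Repository
class OrderDao {                         // DAO layer: CRUD for one table (Order is any @Entity)

    @PersistenceContext
    private EntityManager em;

    @Transactional(propagation = Propagation.REQUIRED)  // adopt the caller's transaction
    public Order merge(Order order) {
        return em.merge(order);
    }
}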

Once a Service is bound in a Transaction, it will either queue behind other Transactions or throw a locking Exception if it cannot otherwise resolve matters. In the case of a lock exception, the service is going to have to re-think its own changes to whatever Entities had conflicting requests and resolve them before trying again. Which sounds intimidating, but isn't something I've had many problems with.
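The "try again" part looks scarier than it is. Spring surfaces JPA's optimistic-lock failure as ObjectOptimisticLockingFailureException, and the retry loop has to live outside the @Transactional method, because the failed transaction is already marked for rollback. A sketch only; orderService and applyChanges are placeholders, not code from a real app:

import org.springframework.orm.ObjectOptimisticLockingFailureException;

public void applyWithRetry(Long orderId, int maxAttempts) {
    for (int attempt = 1; attempt <= maxAttempts; attempt++) {
        try {
            orderService.applyChanges(orderId);   // a @Transactional service method
            return;                               // success, we're done
        } catch (ObjectOptimisticLockingFailureException e) {
            if (attempt == maxAttempts) {
                throw e;                          // give up and let the caller decide
            }
            // re-read the current state and reconcile our changes before retrying
        }
    }
}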

In extreme cases, I might consider taking critical detached Entities, storing them as synchronized objects in Application Scope and sorting out possible conflicts before invoking the update services. That would be for a case where I was running an extended set of operations over multiple HTTP client requests. More often, though, I'd set up a transaction table to stage the changes, write to that, and let the database backend chew through them.
 
Claude Moore
Bartender
Posts: 1359
39
First of all, thanks Tim for your reply.

Tim Holloway wrote:Just to be clear, I'm taking it as a given that you're NOT attempting to do any sort of threading here, that these are, in fact, multiple concurrent client requests. Because although it may not be apparent, most of the business logic in a Spring Boot app runs under a Servlet or JSP and both of those are absolutely forbidden to spawn threads.



Correct, I'm not starting any threads manually; still, concurrent requests handling the very same object may happen. The whole scenario is that, before I optimized the @Service code responsible for handling the business transaction, a user had to wait about 1.5 minutes (!!) for the transaction to complete. Somehow, a user managed to execute the very same request twice within a really short time (we actually think the web UI allowed some kind of double click). The result: the whole transaction was executed twice.

Tim Holloway wrote:
As far as database locking goes, my understanding of Optimistic Locking in JPA is that before an update is posted, the JPA infrastructure does a fetch of the affected record(s) and checks to make sure no one has modified them elsewhere, throwing an Exception if they have. Been there/done that.

My own database logic, which I've often mentioned elsewhere on the Ranch, has essentially three layers.

*  The Entity Layer, which (surprise!) defines the table and View Entity classes.
* The DAO Layer, which is primarily concerned with CRUD and finders for individual tables, or tightly-bound parent-child table pairs. Lately Spring Data's repository feature has supplanted a lot of the brute-force logic for me.
* The Service Layer. This layer works on a "working set" of related records (a directed graph). It detaches the graph from the JPA system before returning a fetched set to the higher levels of the app and it re-attaches (merges) when the business levels request a Service method to update the datastore. The Service layer delegates its dirty work to the DAOs.

Both the Service Layer and DAO layer classes are marked @Transactional, where the DAO transactions get adopted into the transaction of the service that uses them (I forget the exact name of that option, though).



I'd say it's a textbook solution...

Tim Holloway wrote:
Once a Service is bound in a Transaction, it will either queue behind other Transactions or throw a locking Exception if it cannot otherwise resolve matters. In the case of a lock exception, the service is going to have to re-think its own changes to whatever Entities had conflicting requests and resolve them before trying again. Which sounds intimidating, but isn't something I've had many problems with.


Well, this should actually depend on the isolation level of the transaction, but I don't think that's enough to prevent lost updates without some kind of check on the data. Suppose two distinct transactions are trying to perform some kind of action on a given entity at the same time; a possible scenario is the following:
a) Tx A tries a SELECT FOR UPDATE on the entity and gets the lock;
b) Tx B tries a SELECT FOR UPDATE on the same entity and waits, until it either gets the lock or hits a lock timeout;
c) Tx A commits its transaction and releases the lock;
d) Tx B is granted the lock on the entity and commits, overwriting the data written by transaction A.

In the end, one needs to treat some business transactions as non-repeatable, and to use some property (the status of the entity, for example, marking it as "processed"), but this is not a job that a framework can accomplish on its own. A solution may be to throw a locking exception immediately and roll back the transaction.

Anyway, my question was about a pattern to follow whenever you cannot add a @Version attribute to an entity for whatever reason. The approach I followed was to use an "OptimisticLock" entity whose unique key is derived from the actual entity being persisted, and to work on the @Version field of that wrapping OptimisticLock.

To be honest, I think that the entity design is flawed, and it would be wise to fix it before it's too late.


 
Tim Holloway
Saloon Keeper
Posts: 27868
196
Well, database "locks" in the old-school style aren't a thing in SQL. You'll notice, in fact, that there is no "LOCK" verb or adjective in SQL.

SQL is designed so that its three verbs, INSERT, UPDATE, and DELETE, care very little about the current contents of the tables. If you attempt to INSERT a record whose key already exists, it will fail. UPDATE and DELETE can affect zero or more rows, based on SELECT criteria, but UPDATE doesn't care about previous column values. So I'm inclined to think that you're thinking like you would for a non- or pre-SQL DBMS like, say, IBM's IMS/DB.

Transactions aren't "locks". They're a mechanism for enforcing atomicity. A lock blocks concurrent requests. A transaction acts as though they weren't there at all, until you commit it, at which point it all happens "at once". I can INSERT a row with the same key in two concurrent transactions: the first one to commit will work and the second commit will fail.

So definitely, if the order in which changes get applied is critical, I'd consider staging your changes to a transaction table, then using a sequential process to apply the transactions.

Even more so if you're talking about web-based operations. No single web request should ever take 1.5 minutes to return a response. You can end up timing out on the client side, and you'll drag down the server big-time by tying up request-processor threads. So if a batched transaction process isn't responsive enough, then I'd spawn an "engine", feed the transactions to a work queue for that engine, have it run the work out-of-line, and have it post a completion status for later (possibly AJAX-based) user requests to query if they need to. That, in fact, is a technique I've used often, originally as a thread spawned in the servlet's init() method (which, unlike process(), CAN spawn threads), and later as an independent webapp component, which became the preferred approach.
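In a current Spring Boot app you can get that same engine-plus-work-queue shape without hand-rolling threads at all, by handing the work to a Spring-managed task executor and letting the client poll for status. A rough sketch, not the code I actually used (it needs @EnableAsync on a configuration class, and every name here is invented):

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

import org.springframework.scheduling.annotation.Async;
import org.springframework.stereotype.Service;

@Service
public class ProcessingEngine {

    private final Map<String, String> statusByJobId = new ConcurrentHashMap<>();

    // Runs on Spring's task executor, not on a request-processing thread.
    // Must be called from another bean so the async proxy is actually used.
    @Async
    public void submit(String jobId, Runnable work) {
        statusByJobId.put(jobId, "RUNNING");
        try {
            work.run();
            statusByJobId.put(jobId, "DONE");
        } catch (RuntimeException e) {
            statusByJobId.put(jobId, "FAILED");
        }
    }

    // Polled by a lightweight (possibly AJAX) status request.
    public String status(String jobId) {
        return statusByJobId.getOrDefault(jobId, "UNKNOWN");
    }
}

The controller then returns immediately (say, 202 Accepted plus the job ID) and the user's browser asks for the status later.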
 
Claude Moore
Bartender
Posts: 1359
39
Indeed, in the current code the major flaw I can see is that there is no way to prevent two transactions from updating the state of the same entity and overwriting each other's updates. The example you gave about attempting to insert a duplicate key in a database is very clear, but at the same time it says nothing about the actual order of execution of two concurrent transactions: we only know that if A commits before B, B is rolled back. In my (flawed) scenario, there wasn't anything like a duplicate-key exception to the rescue. For example, suppose that I have a purchase order, with a unique PO number as its ID. How could I prevent two operators from handling the very same order in completely different ways? Suppose the order has a status, the picking procedure sets that status to "preparing" only when the current status is "ready", and a "cancel" procedure sets it to "cancelled". The actual race condition was:
A reads the picking with status "ready";
B reads the picking with status "ready" (no SELECT FOR UPDATE here);
A updates the picking status to "preparing";
A commits;
B updates the picking status to "cancelled";
B commits.
There's nothing here preventing such a mess, and @Transactional could do nothing by itself.
So you could use either an optimistic lock or a pessimistic lock, the latter preventing dirty reads with a proper isolation level and a SELECT FOR UPDATE on the picking record. But in that case, transaction B should either be aborted immediately if the lock can't be acquired, or check the entity's status manually once the lock has been acquired: otherwise, if it just waits and doesn't check, the same problem may arise. Something like the sketch below.
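To make the second option concrete, with Spring Data JPA I'd expect something along these lines. It's only a sketch: Picking, its statuses and the method names come from my example above, they're not real code, and whether the lock-timeout hint is honoured depends on the JPA provider and the database.

import java.util.Optional;

import javax.persistence.LockModeType;
import javax.persistence.QueryHint;

import org.springframework.data.jpa.repository.JpaRepository;
import org.springframework.data.jpa.repository.Lock;
import org.springframework.data.jpa.repository.QueryHints;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

public interface PickingRepository extends JpaRepository<Picking, Long> {

    // PESSIMISTIC_WRITE maps to SELECT ... FOR UPDATE on most databases;
    // a timeout of 0 asks the provider to fail at once instead of waiting.
    @Lock(LockModeType.PESSIMISTIC_WRITE)
    @QueryHints(@QueryHint(name = "javax.persistence.lock.timeout", value = "0"))
    Optional<Picking> findWithLockById(Long id);
}

@Service
class PickingService {

    private final PickingRepository repository;

    PickingService(PickingRepository repository) {
        this.repository = repository;
    }

    @Transactional
    public void startPreparing(Long id) {
        Picking picking = repository.findWithLockById(id)
                .orElseThrow(() -> new IllegalStateException("No picking " + id));

        // Re-checking the status after acquiring the lock is what actually
        // prevents the "ready -> preparing" vs "ready -> cancelled" race.
        if (!"ready".equals(picking.getStatus())) {
            throw new IllegalStateException("Picking is no longer ready");
        }
        picking.setStatus("preparing");
    }
}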
Am I correct?
 
Tim Holloway
Saloon Keeper
Posts: 27868
196
I think you're relying on the database for serialization, and as I said, SQL isn't designed for that.

What you are describing isn't a transaction, it's a workflow, and the first thing it suggests is that there's a problem with the organization of the business processes.

Not that two people might not be working on the same invoice at the same time (for example, as stock pickers), but they shouldn't be working on the same part of the invoice at the same time. And even paper-based systems would typically want people to sign off on their work with an ID and time for auditing purposes.

There are a number of Java-based workflow systems, and indeed an XML schema designed to manage workflows, so complex processes can have some sort of structure and control. And for that matter, one of the major features of UML was intended to map out workflows.

However, in situations where multiple people are working on the same data and there's no higher-level means of co-ordination, here's one solution. I presume that only ONE person can actually work on a given unit at a time, because otherwise it's a people problem, not a software problem. Thus, create a lock table (work_in_progress) and identify your unit(s) of work. When a person needs to reserve and edit a document, run a Transaction that adds an identifier (invoice ID or whatever) to that table. Ideally, attach the user ID and a timestamp: not only for auditing, but to find out who to clean up after if they step out to lunch mid-job and get hit by a meteor.

Since this is a single Transaction, it's atomic. So if two people file for the same work unit simultaneously, one Transaction will fail and the loser needs to find something else to do in the meantime. You can, of course, put them on a wait queue so that when the work unit is released they get notified immediately, but that's optional.

Process your work unit. Commit it to the database and release (delete) the lock table entry. The work unit is now free for the next person.
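In JPA/Spring terms the lock table can be as small as the sketch below. This is only an illustration (the entity and service names are invented): the point is that the claim is a single INSERT, and Spring reports the losing claim as a translated constraint violation (DataIntegrityViolationException).

import java.time.Instant;

import javax.persistence.Entity;
import javax.persistence.EntityManager;
import javax.persistence.Id;
import javax.persistence.PersistenceContext;
import javax.persistence.Table;

import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Entity
@Table(name = "work_in_progress")
public class WorkInProgress {

    @Id
    private String workUnitId;       // e.g. the invoice number

    private String claimedBy;        // who to chase if they get hit by that meteor
    private Instant claimedAt;

    protected WorkInProgress() { }   // no-arg constructor required by JPA

    public WorkInProgress(String workUnitId, String claimedBy) {
        this.workUnitId = workUnitId;
        this.claimedBy = claimedBy;
        this.claimedAt = Instant.now();
    }
}

@Service
class WorkReservationService {

    @PersistenceContext
    private EntityManager em;

    // A single INSERT: if two people claim the same unit at once,
    // exactly one commit succeeds and the loser gets the exception.
    @Transactional
    public void claim(String workUnitId, String userId) {
        em.persist(new WorkInProgress(workUnitId, userId));
    }

    // Work finished: free the unit for the next person.
    @Transactional
    public void release(String workUnitId) {
        WorkInProgress lock = em.find(WorkInProgress.class, workUnitId);
        if (lock != null) {
            em.remove(lock);
        }
    }
}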

And again, if you don't need immediate action, you can simply stage via a transaction table and apply in batch, which guarantees order of application.

There are a number of variants of these processes, so you can adapt as you see fit. Keep in mind that what you are looking for is a lot like the set of problems a source code management system has to deal with, so you can learn from their experience.

 
Claude Moore
Bartender
Posts: 1359
39
The solution you described (with a bit of humor I really appreciated, by the way) is more or less the one I've adopted. Basically, I create a TransactionLock entity whose primary key is derived from the keys of the actual entities involved in the process, the invoice or the shipping you mentioned. A TransactionLock carries a @Version field, so if two operators decide to work on the same flow at the same time, well, there will be a winner who takes it all and a loser who will need to start the activity over (and, honestly, maybe learn to coordinate with coworkers).

Anyway, this solution, honestly conceived as a modest workaround, looked quite general to me, and I wondered whether it could be taken as a convenient approach to handling concurrency in a general way. But I soon realized that, despite the fact that it works, it isn't anything more than a patch covering a severe design error: to handle this properly, there should have been some master entity driving the transaction, an entity to which a lock strategy could properly be applied.

Thanks for your help, and have a cow!
 
Tim Holloway
Saloon Keeper
Posts: 27868
196
As I said, SQL isn't well-suited to serialization, but a quick refresher on database locking alerted me to the fact that since version 2.0 JPA has had a lock() method that can be used to lock individual Entities. However, it's very basic and almost certainly could lead to Deadly Embrace races. Also, it doesn't say whether it locks the record in the database itself or just the Entity within the current app. And the locks need to be held briefly or they will time out (which is better than having to call the DBA to get them un-stuck).
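For reference, the call looks roughly like this. Invoice is just a stand-in entity, and whether the timeout hint is honoured depends on the provider and database, so treat it as a sketch rather than a recipe.

import java.util.Map;

import javax.persistence.EntityManager;
import javax.persistence.LockModeType;
import javax.persistence.LockTimeoutException;

public class LockExample {

    // The JPA 2.0 EntityManager.lock() call mentioned above, with a
    // provider-dependent timeout hint so a blocked caller fails fast.
    public void lockBriefly(EntityManager em, Long invoiceId) {
        Invoice invoice = em.find(Invoice.class, invoiceId);

        Map<String, Object> hints = Map.of("javax.persistence.lock.timeout", 2000);
        try {
            em.lock(invoice, LockModeType.PESSIMISTIC_WRITE, hints);
            // ... keep the critical section short ...
        } catch (LockTimeoutException e) {
            // someone else holds it; back off instead of waiting indefinitely
        }
    }
}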

Here are some interesting links about locking in databases:

https://15445.courses.cs.cmu.edu/fall2022/notes/16-twophaselocking.pdf
https://www.guru99.com/dbms-concurrency-control.html
 
Don't get me started about those stupid light bulbs.