Tuesday, July 6, 2004
Versioning: The InterBase Advantage
Problem/Question/Abstract:
Versioning: The InterBase Advantage
Answer:
An Examination of Database Concurrency Models
As you rush headlong into the world of client/server computing, one of the first things you must do is select a database server. The architectures of database servers vary widely, and as a result, their behavior in a given situation also varies widely.
This means that to select the appropriate server for your Delphi database application you must understand two things:
how data will be accessed and modified in your application, and
how the server will behave in each data access or update situation.
In this article we'll explore the issues that affect concurrent access to data, as well as how they may impact your application.
Locking Schemes
The oldest and most common method of controlling concurrent access to data by several users is locking. When a user locks an object in a database, he or she restricts other users' ability to access that object.
How much a lock affects concurrency depends on the lock's granularity. For example, a lock placed on an entire table will restrict other users' access to all the records in the table. Therefore, a table-level lock has very low granularity. A lock placed on a single page in a table limits access to all the records on that page. A page-level lock is more granular than a table-level lock. By contrast, a lock placed on a single row is very granular, and provides the minimum restriction to concurrent data access.
Most database servers support either row- or page-level locking. The problem with page-level locks is easy to understand by looking at an example. Suppose a page's size is 2KB (2048 bytes) and a row's size is 100 bytes. Thus, each page can hold 20 rows, and each time a page is locked, access is restricted to all 20 rows. With row-level locking only a single row would be locked, and other users could freely access other records on the page. Thus, row-level locking provides better concurrency.
Pessimistic Locking. If your background is in desktop databases (e.g. Paradox), you're probably familiar with pessimistic locking. This scheme is so named because it assumes there's a high probability that another user will try to modify the same object you're changing. In a pessimistic locking environment, the object you want to change is locked before you begin changing it, and it remains locked until your change is committed. The advantage of pessimistic locking is that you're guaranteed the ability to commit the changes you make.
Let's say you need to change a customer's address. Using pessimistic locking, you first lock the customer information at either the page or row level. You can then read the customer's record, change it, and be guaranteed that you can write your changes to the database. Once you commit your changes, your lock is released and others are free to change the customer's record. Locks can persist for a long time when pessimistic locking is used. For example, you could begin a change and then take a lunch break. After returning, you can then commit the change and release the lock.
Clearly, you want to use locks with high granularity if you're going to use pessimistic locking in a multi-user environment. If you must lock an entire page of customer records while changing a single row, then no other user can change any other customer record on that page. Row-level locks are best when pessimistic locking is used. This is because they impose the least restriction on access by other users. Page-level locks are much less satisfactory, because as long as they persist, they restrict access to many rows.
Optimistic Locking. The most common locking scheme found in database servers (e.g. Oracle, Sybase, SQL Server) is optimistic locking. The scheme is optimistic in that it assumes it's unlikely another user will try to change the same row you're changing. No lock is placed until you attempt to commit your changes.
To understand optimistic locking consider two users, Fred and Ethel, who are trying to change a customer's record. First, Fred reads the record and begins to make changes. Next Ethel reads the same record and begins to make changes. This is possible because in the optimistic locking scheme no lock is placed when a user reads a record and begins changing it.
Then Fred completes his changes and attempts to commit them. The database locks the record, commits the changes, and releases the lock. When Ethel tries to commit her changes, the software detects that the record has been changed since she'd read it. Ethel's change is rejected, and she must re-read the record and begin again.
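Delphi applications can apply the same idea from the client side with a version or timestamp column. The following is a minimal sketch, assuming a hypothetical Customer table with a LastUpdated column: the UPDATE succeeds only if the row hasn't changed since it was read, so a late writer - like Ethel - affects zero rows and must re-read and retry:

procedure UpdateCustomerAddress(DB: TDatabase; CustNo: Integer;
  const NewAddress: string; OldStamp: TDateTime);
var
  Qry: TQuery;
begin
  // Requires DBTables and SysUtils in the uses clause.
  Qry := TQuery.Create(nil);
  try
    Qry.DatabaseName := DB.DatabaseName;
    // The WHERE clause is the optimistic check: update only if the row
    // still carries the timestamp we saw when we read it.
    Qry.SQL.Text :=
      'UPDATE Customer SET Address = :Addr, LastUpdated = :NewStamp ' +
      'WHERE CustNo = :CustNo AND LastUpdated = :OldStamp';
    Qry.ParamByName('Addr').AsString := NewAddress;
    Qry.ParamByName('NewStamp').AsDateTime := Now;
    Qry.ParamByName('CustNo').AsInteger := CustNo;
    Qry.ParamByName('OldStamp').AsDateTime := OldStamp;
    Qry.ExecSQL;
    if Qry.RowsAffected = 0 then
      raise Exception.Create('Record changed by another user - re-read and retry');
  finally
    Qry.Free;
  end;
end;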
Optimistic locking has a clear advantage because locks are only held for a brief period while the data is updated. This means that with an optimistic locking scheme you can achieve adequate concurrency with less lock granularity. Therefore, databases that use optimistic locking may lock at the page level rather than the row level. However, optimistic locking does not fare well in an environment where there's a high probability that two users will simultaneously try to update the same row.
From the database vendor's point of view, page-level locking is advantageous because fewer locks must be placed - particularly during batch operations that affect many rows. This means the resource requirements of the lock manager module in the database management system are lower, and this can help improve the performance of the database server. However, users are invariably the slowest part of any database application, so you'll usually get better overall performance in an environment where one user cannot block another.
Why You Should Care
Understanding how your database manages locks can be critically important. Consider an Orders table. New records are added continuously as new orders are received. Because the Order data does not include a field (or fields) that would form a natural primary key, you decide to use an artificially generated Order Number as a surrogate key. Order Numbers will be assigned sequentially as orders are received.
Because your application must frequently select groups of orders, you create a clustered index on the Order Number column. A clustered index provides superior performance when retrieving adjacent records. This is because the records are physically stored in key order within the database pages.
Unfortunately, this design will probably produce poor performance if the database uses page-level locking. Because sequential adjacent keys are being assigned and a clustered index is being used, each new record added will probably be placed on the same page as the preceding record. Because the database locks at the page level, two users cannot add new orders to the same page simultaneously. Each new order must wait until the page lock placed by the preceding order is released. In this case you would get much better performance by randomly assigning the keys. This will reduce the chance that successive records will be added to the same page.
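One simple way to spread the keys - a sketch of the idea, not a prescription - is to reverse the digits of the sequential number at a fixed width, so consecutive order numbers differ in their high-order digits and land on different index pages:

// Reverse the digits of a zero-padded, fixed-width sequence number.
// Consecutive inputs then differ in their HIGH-order digits, so new
// orders scatter across the clustered index instead of piling onto
// one hot page. The fixed width keeps the mapping one-to-one.
function SpreadKey(SeqNo, Width: Integer): Integer;
var
  I: Integer;
begin
  Result := 0;
  for I := 1 to Width do
  begin
    Result := Result * 10 + SeqNo mod 10;
    SeqNo := SeqNo div 10;
  end;
end;

// Example: SpreadKey(1234, 9) = 432100000; SpreadKey(1235, 9) = 532100000.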
Transactions
Database servers must also provide the ability to group changes to the database into transactions. A transaction consists of one or more changes to one or more tables in the database that must be treated as a single unit, so that either all of the changes occur or none of them do.
Transaction processing occurs in three steps:
First, tell the database you want to begin a transaction. This informs the database that all changes - until further notice - are to be treated as a single unit.
Next, the changes are made to the tables in the database.
Finally, notify the database system that you want to either commit or rollback the transaction. If you commit the transaction, the changes become permanent. All the changes are "undone" with a rollback.
Transaction processing is vital to ensure the database's logical integrity. Let's say that Fred transfers $100 from his savings account to his checking account. This transaction would proceed as follows:
Start a new transaction.
Update the savings account balance to show a withdrawal of $100.
Update the checking account balance to reflect an increase of $100.
Either commit or rollback the transaction.
Suppose the system crashes after step 2, but before step 3. Without transaction control, Fred would have lost $100. With transaction control, when the system is restarted, the database management system (DBMS) will automatically roll back any transactions that were not committed at the time of the crash. This guarantees that the database will be left in a consistent state.
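In Delphi with the BDE, TDatabase exposes these steps directly through StartTransaction, Commit, and Rollback. Here's a minimal sketch of Fred's transfer, assuming hypothetical Savings and Checking tables keyed by an AcctNo column:

procedure TransferFunds(DB: TDatabase; AcctNo: Integer; Amount: Currency);
var
  Qry: TQuery;
begin
  // Requires DBTables in the uses clause.
  Qry := TQuery.Create(nil);
  try
    Qry.DatabaseName := DB.DatabaseName;
    DB.StartTransaction;                 // step 1: begin the transaction
    try
      // step 2: withdraw from savings
      Qry.SQL.Text := 'UPDATE Savings SET Balance = Balance - :Amt ' +
                      'WHERE AcctNo = :Acct';
      Qry.ParamByName('Amt').AsCurrency := Amount;
      Qry.ParamByName('Acct').AsInteger := AcctNo;
      Qry.ExecSQL;
      // step 3: deposit into checking
      Qry.SQL.Text := 'UPDATE Checking SET Balance = Balance + :Amt ' +
                      'WHERE AcctNo = :Acct';
      Qry.ParamByName('Amt').AsCurrency := Amount;
      Qry.ParamByName('Acct').AsInteger := AcctNo;
      Qry.ExecSQL;
      DB.Commit;                         // step 4: make both changes permanent
    except
      DB.Rollback;                       // step 4: or undo both changes
      raise;
    end;
  finally
    Qry.Free;
  end;
end;

If the system fails between the two updates, neither change was committed, so the DBMS rolls the whole transaction back on restart and no money is lost.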
You also need transaction control for read transactions that will read more than a single record. This is to ensure that the read returns a consistent view of the data. We'll discuss this requirement in more detail in the next section.
Transaction Isolation
Transaction isolation governs how simultaneously executing transactions interact with each other. Many of today's database servers were originally designed to process short update transactions intermixed with single row reads.
The perfect example of this is an automated teller machine (ATM). An ATM reads the balance in a single account, or updates the balance in one or more accounts. In this environment, transactions are short, and reads involve a single row at a time, so transaction isolation is not a serious concern. However, many of today's database applications do not fit this model.
Short update transactions are still the norm. However, the advent of executive information systems has introduced long-running read transactions that span entire tables - sometimes entire databases.
Let's consider the following scenario. An executive requests the total value of the company's inventory by warehouse. While the query is scanning the inventory table, a user moves a pallet of platinum bars from warehouse A to warehouse B and commits the transaction. It's possible for the query to count the platinum in both warehouses, thus producing an erroneous inventory valuation report.
The question becomes, "Which updates should a read transaction see, and when should it see them?" This is what the transaction isolation level controls. There are three basic isolation levels:
Dirty Read - This isolation level allows any record in the database to be read whether or not it has been committed.
Read Committed - This level allows read transactions to see only those changes that were committed.
Repeatable Read - A repeatable read gives the read transaction a snapshot of the database as it existed when the transaction began. Neither committed nor uncommitted updates that occur after the read transaction starts will be seen.
Note that the TransIsolation property of Delphi's TDatabase component allows you to set all three of these isolation levels. However, this doesn't mean that your server supports the isolation level you have selected. In addition, if you're using an ODBC driver, the driver must also support the isolation level you set. Search on "Transactions | Transaction Isolation Levels" in the Delphi online Help to view a table showing what each of these isolation levels maps to on your server.
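For example, a long read transaction that should see a stable view could be coded as follows. Database1 and Query1 are assumed component names, and TransIsolation must be set while no transaction is active:

// Request repeatable read for the inventory valuation query. On
// InterBase this yields a snapshot; on other servers, check the Help
// table to see what the request actually maps to.
Database1.TransIsolation := tiRepeatableRead;
Database1.StartTransaction;
try
  Query1.Open;    // the long read: inventory value by warehouse
  // ... fetch and total the result set ...
  Query1.Close;
  Database1.Commit;
except
  Database1.Rollback;
  raise;
end;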
In the example above, you need repeatable read isolation to ensure the accuracy of your inventory valuation report. The problem is the price you must pay to get repeatable read in a database with a locking architecture. With the locking model, the only way to ensure that data does not change during a long read transaction is to prevent any updates from occurring until the read transaction ends. In many situations, the effect on users of stopping all updates for the duration of a long read transaction is unacceptable.
Versioning
Versioning is another model for concurrency control, and it overcomes the problems that locking databases have when the environment consists of a mixture of update and long read transactions. To date, InterBase is the only DBMS to use the versioning model.
Let's reconsider the preceding example. The read transaction to produce the inventory valuation report begins. When the update transaction moves the pallet of platinum from warehouse A to warehouse B, a new version of each updated record is created. However, the old versions still exist in the database.
In a versioning database, each transaction is assigned a sequential transaction number. In addition, the DBMS maintains an inventory of all active transactions. The transaction inventory pages show whether each transaction is active, committed, or rolled back.
When an update transaction changes a record, a new version of the record is created that contains the updated values, while the old versions remain available to transactions with lower transaction numbers that are still active. Each version contains the transaction number of the transaction that created it.
When a read transaction begins, it retrieves the next transaction number and a copy of the transaction inventory pages, which show the status of all uncommitted transactions. As the read transaction requests each row in a table, the DBMS checks whether the transaction number of the row's latest version is greater than the read transaction's number, and whether the transaction that created that version had committed before the read transaction started.
If the latest version's transaction number is greater than the requesting transaction's number, or the transaction that created it was still active when the read transaction started, the DBMS looks back through the chain of prior versions. It continues until it finds a version whose transaction number is less than that of the transaction trying to read the row, and whose creating transaction had committed when the read transaction started.
When the DBMS finds the most recent version that meets these criteria, it returns that version. The result is repeatable read transaction isolation without preventing updates during the life of the read transaction.
Consider the following example of a row for which four versions exist:
Tran=100 (status=committed)
Tran=80 (status=active when read started)
Tran=60 (status=rolled back)
Tran=40 (status=committed when read started)
Assume that a read transaction with transaction number 90 attempts to read this row. The read transaction will not see the version of the row created by transaction 100 because the update that created this version took place after transaction 90 began. Also, transaction 90 cannot read the version created by transaction 80, even though it has a lower transaction number. This is because transaction 80 isn't yet committed. Although the version for transaction 60 still exists on disk, transaction 60 has rolled back - and rolled back versions are always ignored. Therefore, the version that transaction 90 will read is the version created by transaction 40.
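This visibility rule is easy to express in code. The following Delphi sketch is purely illustrative - InterBase's internals aren't exposed this way - and walks a version chain ordered newest-first, where each version carries its creator's transaction number and that transaction's status as recorded in the reader's copy of the transaction inventory pages:

type
  TTranStatus = (tsActive, tsCommitted, tsRolledBack);

  TRecVersion = record
    TranNo: Integer;      // transaction that created this version
    Status: TTranStatus;  // status as seen in the reader's snapshot
  end;

// Return the index of the version visible to a reader with transaction
// number ReaderTranNo, or -1 if the row did not yet exist for it.
// A version is visible if an older transaction created it and that
// transaction had committed when the reader started; rolled-back
// versions are always skipped.
function VisibleVersion(const Chain: array of TRecVersion;
  ReaderTranNo: Integer): Integer;
var
  I: Integer;
begin
  for I := Low(Chain) to High(Chain) do
    if (Chain[I].TranNo < ReaderTranNo) and
       (Chain[I].Status = tsCommitted) then
    begin
      Result := I;
      Exit;
    end;
  Result := -1;
end;

Applied to the chain above with ReaderTranNo = 90, the function skips transaction 100 (too new), transaction 80 (not committed when the read started), and transaction 60 (rolled back), and returns the version created by transaction 40 - exactly the result described.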
Note that in this example, transaction 80 is not allowed to commit. When transaction 80 attempts to commit, the DBMS will discover that transaction 100 has committed, and transaction 80 will be rolled back.
Advantages of Versioning
For a more complete understanding of how the locking and versioning models compare you must examine two things:
the types of concurrency conflicts that can occur in a multi-user database, and
how each model behaves in each case.
The following examples assume that the locking model uses a shared read lock and an exclusive write lock. Multiple users can place read locks, but no user can place a write lock if another user has either a read or a write lock. If one user has a write lock, another user can neither read nor write the row. This is typical of databases that use a locking architecture.
Consider the case where a husband and wife go to different ATMs at the same time to withdraw money from their joint checking account. Without concurrency control, the following sequence of events occurs:
Fred reads the account's balance as $1,000.
Ethel reads the account's balance as $1,000.
Fred posts a $700 withdrawal.
Ethel posts a $500 withdrawal.
At this point, $1,200 has been withdrawn from an account that held only $1,000, and the bank is not happy. This happened because, without a concurrency control mechanism, Fred's update is lost as far as Ethel is concerned: she never sees the change in the account balance. However, under the locking model:
Fred reads the account's balance causing a read lock.
Ethel reads the account's balance, also causing a read lock.
Fred posts his withdrawal attempting a write lock that fails because of Ethel's read lock.
Ethel posts her withdrawal attempting a write lock that fails because of Fred's read lock.
A deadlock now exists. Hopefully, the DBMS will detect the deadlock and roll back one of the transactions.
Under the versioning model, Fred reads the account's balance and Ethel reads the account's balance. Then, Fred posts his withdrawal, which causes a new version with a new balance to be written. When Ethel posts her withdrawal, it's rolled back when the newer version is detected.
A different problem occurs if a user does not commit a transaction. Let's say Fred withdraws money from the account and this updates the balance. Ethel reads the balance and Fred cancels the transaction before committing. Now Ethel has seen the wrong balance. In this case, a dependency exists between the two transactions. Ethel's transaction produces the correct results only if Fred's transaction commits. This illustrates the danger of reading uncommitted data.
Using locking, Fred reads the balance, which places a read lock, and then posts his withdrawal, which places a write lock during the update. Ethel reads the balance, which attempts a read lock but must wait because of Fred's write lock. Fred cancels the transaction before committing, which rolls it back and releases the write lock. Ethel can now read and get the correct balance.
Under versioning, Fred withdraws the money. This updates the balance and creates a new uncommitted version. At her machine, Ethel reads the balance, but it does not reflect Fred's uncommitted withdrawal. Fred rolls back, so the version showing the withdrawal is marked rolled back. This illustrates a performance advantage of versioning because Ethel does not have to wait to read the balance.
The next example is a variation on our earlier scenario of moving platinum from one warehouse to another:
Fred requests the total of all accounts.
Ethel transfers money from savings to checking while Fred's transaction is running.
Fred receives the wrong total. The analysis of the data is inconsistent because the data's state was not preserved throughout the life of the read transaction.
Under locking, Fred requests a total of all accounts, thereby placing a read lock. Ethel transfers money but cannot place a write lock to commit the transfer because of Fred's read lock. Ethel must wait until the read transaction finishes. Finally, Fred gets the right total and releases the read lock, and Ethel's transaction can proceed.
Under versioning, Fred requests the total. At her ATM, Ethel transfers money from savings to checking, resulting in new versions which Fred's transaction does not see. Fred gets the correct total and Ethel's update is not delayed.
Another variation of the repeatable read problem occurs if you must reread the data in the course of the transaction. For example:
A query is started for all rows meeting certain criteria.
Another user inserts a new row that meets the criteria.
Repeat the query and you will get one additional row. The appearance of this "phantom row" is not consistent within the transaction.
With a database that uses the locking model, the only way to prevent this inconsistency is to read lock the whole table for the duration of the transaction. Thus the sequence of events is:
Place a read lock on the table.
Query for all records meeting certain criteria.
Another user attempts to insert a record, but is blocked by the table-level read lock.
Repeat the query and you'll get the same results because other users cannot commit changes.
Under versioning there's no problem, because the newly inserted record has a higher transaction number than the read transaction. Therefore, it's ignored on the second and subsequent reads that are part of the same transaction.
Disadvantages of Versioning
So far it looks as if the versioning model handles most concurrency conflicts better than the locking model. However, this is not always the case. In this example, Fred and Ethel are both told to make their salaries equal:
Fred reads his salary.
Ethel reads her salary.
Fred sets Ethel's salary equal to his.
Ethel sets Fred's salary equal to hers.
Under versioning, the result is that their salaries are simply swapped. Using locking you can prevent this by locking both records. For example, both Fred and Ethel read their own salaries and place read locks. Fred sets Ethel's salary equal to his, but cannot commit because of Ethel's read lock. Likewise, Ethel sets Fred's salary equal to hers, but cannot commit because of Fred's read lock.
Once again, you have a deadlock that the database system should resolve by rolling back one transaction. Another solution using locking is to write lock the entire table. For example, Fred write locks the table and reads his salary. Ethel then tries to read her salary, but is blocked by Fred's table-level write lock. Fred sets Ethel's salary equal to his and releases the write lock. Ethel's transaction is now free to proceed.
Under versioning, Fred reads his salary and Ethel reads hers. Fred sets Ethel's salary equal to his and commits. Then Ethel sets Fred's salary equal to hers and commits. Once again the salaries are swapped, because versioning allows both transactions to process concurrently. The only way to solve this problem with the versioning model is as follows:
Fred reads his salary.
Ethel reads her salary.
Fred sets Ethel's salary equal to his.
Fred sets his salary to itself, creating a newer version.
Ethel sets Fred's salary equal to hers, but it rolls back because a newer version exists.
Here the problem is solved by setting Fred's salary equal to itself. This forces the creation of a new record version for Fred's salary. Versioning architecture will not allow a change to be committed when a version of the record to be updated exists (which was created after the start of the current transaction). Therefore, Ethel's update rolls back.
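From Delphi, step 4 is nothing more than a self-assignment executed inside the same transaction; assuming a hypothetical Staff table and a Query1 component:

// Force a new version of Fred's row inside the current transaction so
// that Ethel's later update of it (step 5) collides and rolls back.
Query1.SQL.Text := 'UPDATE Staff SET Salary = Salary WHERE Name = ''Fred''';
Query1.ExecSQL;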
Recovery
One very important issue in any database application is recovery time when the server crashes. No matter how robust your hardware and software, or how reliable your electric power supply, there's always a possibility the server will fail. Both locking and versioning databases will recover automatically when the server is restarted. However, there's a significant difference in the recovery time.
Locking-model databases write each transaction to a log file. To recover after a crash, the DBMS must read the log file and roll back all the transactions that were active at the time of the crash by copying information from the log to the database.
A versioning database does not have a log file. The record versions in the database already provide all the information required to recover; no data needs to be copied from one place to another. Instead, when the DBMS comes back online, it simply goes through the transaction inventory pages and changes the status of all active transactions to rolled back. At most this takes a few seconds, even on a large database or one with a large number of active transactions. Thus, crash recovery is another area where the versioning model excels.
Other Issues
At first it may appear that a versioning database has a significant disadvantage: the multiple record versions cause the database to grow much faster than a locking database would. While this is true, don't forget that other databases also grow as their log files expand.
However, versioning databases will certainly grow rapidly if something is not done to control the proliferation of record versions. The DBMS performs some of the housekeeping for you automatically. Each time a record is accessed, the DBMS checks if any prior versions of that record are no longer needed. A version is obsolete if its transaction rolled back, or if there is a later committed version of the record and there are no active transactions with a transaction number less than the transaction number of the newer committed version. Versions that are obsolete are automatically deleted and the space they occupied in the database pages is reused.
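Expressed as a predicate (an illustrative sketch, reusing the TRecVersion record from the earlier visibility example):

// A prior version is garbage once no transaction can ever need it:
// either its creator rolled back, or a newer committed version exists
// and every active transaction has a transaction number greater than
// or equal to that of the newer committed version.
function IsObsolete(const Ver: TRecVersion;
  NewerCommittedTranNo, OldestActiveTranNo: Integer): Boolean;
begin
  Result := (Ver.Status = tsRolledBack) or
            ((Ver.TranNo < NewerCommittedTranNo) and
             (OldestActiveTranNo >= NewerCommittedTranNo));
end;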
Many rows in many databases are visited infrequently. To remove unnecessary versions of these rows, the database must be periodically "swept." A sweep operation visits every row in every table in the database and deletes outdated versions. You can run the sweep while the database is in use, but the sweep will impact performance while it's running.
InterBase, by default, will automatically start a sweep after 20,000 transactions. This isn't the best way to manage sweeping because you have no control over when the sweep will start. In addition, the user who starts the transaction that triggers the sweep is locked until the sweep finishes. It's better to periodically start a sweep manually when database use is low.
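With InterBase's command-line gfix utility, that policy amounts to turning off the automatic threshold and running the sweep yourself during off-peak hours; something like the following, where the user name, password, and database file are placeholders:

gfix -housekeeping 0 -user SYSDBA -password masterkey employee.gdb
gfix -sweep -user SYSDBA -password masterkey employee.gdb

The first command sets the automatic sweep interval to zero (disabled); the second performs a manual sweep.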
Conclusion
Selecting the correct database for your application requires a clear understanding of the types of transactions the system must process. Many applications today require a mixture of multi-row read transactions and updates. In this environment, versioning has a clear advantage because it can process read and write transactions concurrently while still providing repeatable read to ensure accuracy.
Versioning also provides rapid crash recovery because there's no log file to process. When a versioning database restarts, it simply marks all open but uncommitted transactions as rolled back, and it's ready to go.
As stated earlier, InterBase is the only DBMS to use the versioning model. In addition to the advantages of the versioning model, InterBase has the smallest disk and memory footprint (it ships on two diskettes), is self tuning, and runs on NetWare, Windows NT, and a wide variety of UNIX platforms. Therefore, InterBase is highly scalable.