wiki:SERVER_DATABASE_PERSISTENCE

Analysis

Overview

The Server has to keep its resource data in a flexible and extendable database. The data should be stored and loaded again when the Server is restarted, and the current functionality should continue to work.

Task requirements

  • Choose an appropriate database for storing the data.
  • Create a good database schema for storing the resources and the history.
  • Store the resources and the history in the database:
    • It should be possible to retrieve a resource's data at a specified revision.
    • It should be possible to delete and create a resource in the database.
    • It should be possible to store the changes of a resource in the database.
  • Create an API that communicates with the database using JDBC:
    • Create a JDBC template for querying the DB easily.
    • Think of a way to manage the DB connection.
      • Connecting to the DB should be thread-safe.
  • Make the facade and the web use the database.
  • Preserve the current functionality - collaboration, skipping changes, web upload/download/browse/delete.
  • Make a mock server from the old implementation of accessing resources in the server memory (optional).

Task result

Source code and tests.

Implementation idea

  • Use H2 database for the database.
  • Use ThreadLocal connection variables to make the connections thread-safe.
  • Create a special WebResourceAccess to replace the current mem accesses in the web, while preserving the work with helpers.
  • Use the current Persistence API to store the immutables in the DB.

No related tickets for now...

How to demo

  • Show the working web and collaboration in Sophie but with the DB implementation
    • Web interface
      • Upload a book through the web interface
      • Browse the books in the web interface
      • Download and delete the book.
    • Collaboration
      • Save a book on the server
      • Make some changes from different Sophie clients
      • Undo the changes, redo them
      • See the actions in all the clients

Design

Database and JDBC template

Database and schema

  • The chosen database is H2 DB -> Homepage
    • The other candidates were:
      • Derby (JavaDB) - Sun's embedded database, used mainly for Java applications, but in comparison to H2 it has no inner optimizations for transactions and there are some problems with it.
      • Hypersonic -> The embedded DB of Hibernate. It is similar to H2, but its main purpose is to be used with Hibernate.
    • The H2 database has a JDBC implementation and comes in one jar, added as a dependency to the server.core module.
    • It can be embedded in memory (the virtual machine), stored in a file, or used in a server/client setup.
    • In Sophie 2 it will be embedded in memory (the JVM) and stored into a file.
    • JDBC accesses the DB with URLs like "jdbc:h2:path_on_the_machine;AUTO_SERVER=TRUE". AUTO_SERVER=TRUE means that if another client tries to connect to the DB, our client will stay connected and not be rejected, and likewise that if another client is already connected to the DB, our client will not be rejected.
  • Schema -> The schema is constructed in such a way that the queries to the DB can be optimal (it is normalized; if you want, for example, to see the value of a given key of a given resource at a given revision, you do not need to select the change that caused the revision, i.e. every logical kind of data is in a separate table).
    • Tables that hold unique data are named with the convention T_TABLENAME (the T comes from table).
    • Tables that could be produced by a series of joins on the other tables and are used mainly for ease of selecting are named MV_TABLENAME (the V is from view, the M comes from multiple (tables)).
    • Tables
      • T_RESOURCES
        • Has three columns: a unique DB id (autoincrement), a parent id that points to a unique id in the same table, and a resource name.
        • The base table for the resources; it keeps the parent-child relations.
        • From this table the ChildrenKey value can be calculated (all the rows that have the specified parent id).
        • The resource names here are the last part of their ResourceRefs (unique).
      • MV_RESOURCE_PATHS - a view used for easy listing of resources; contains an id -> a foreign key to an id in the T_RESOURCES table, and a resource path starting from the main dir on the server '/'.
        • This view is used for listing the paths of the resources and for retrieving paths, because constructing a path from the T_RESOURCES table requires iterating over all the parents.
        • The ChildrenKey value can also be retrieved from here with a query using LIKE path_to_parent%.
      • T_REVISIONS
        • Has two columns: a unique DB id (autoincrement) and a RevisionId in the form of a string.
        • Used to store all the revisions of the resources on the server.
      • T_RESOURCE_REVISIONS
        • Has two columns: a foreign key to a resource DB id in T_RESOURCES and a foreign key to a revision DB id in T_REVISIONS.
        • Shows which revisions belong to which resources; some resources can share revisions (for example when the change causing the revision updated a parent resource and two child resources).
      • T_RESOURCE_LIFES
        • Has four columns: a foreign key to a T_RESOURCES DB id, two foreign keys to T_REVISIONS DB ids (SET_REVISION_ID and CHANGE_REVISION_ID) and a boolean value.
        • Shows whether a resource is deleted or not, because all the history (even for deleted resources) must be kept.
        • The two revision ids have the following meaning: SET_REVISION_ID is the id of the revision at which the resource is set to alive or deleted, and CHANGE_REVISION_ID is the id of the next revision at which the status changes (can be null).
        • When the resource's 'life status' is changed, the row with a null CHANGE_REVISION_ID is set to the current revision and a new row is added for the resource with its new 'life status', the current revision id as SET_REVISION_ID and null as CHANGE_REVISION_ID.
        • This lets the developers track undo of deleting or creating resources on the server.
        • At this moment this table is not used optimally.
      • T_CHANGES
        • Has two columns: a foreign key to the T_REVISIONS DB id of the revision the change causes, and the change stored as a string.
        • At the moment the string is XML created by the Sophie 2 Persistence API.
      • T_KEYS
        • Has two columns: a unique DB id (autoincrement) and a key path -> string.
        • The path is the last part of the key, for example title, page-size, background, color, etc...
        • Something like all the possible keys in the DB model.
      • T_KEY_VALUE_CHANGES
        • It has five columns: a unique DB id (autoincrement), a foreign key to the DB id of T_RESOURCES, a foreign key to the DB id of T_KEYS, and two foreign keys to T_REVISIONS - SET_REVISION_ID and CHANGE_REVISION_ID.
        • It represents a key value of a resource (the key and resource foreign ids) at a revision (SET_REVISION_ID).
        • The CHANGE_REVISION_ID is the id of the next revision (null if the SET_REVISION_ID is the current revision of the key of the resource).
        • The two revision ids represent history links. The idea is, when changing a key, to find its current value row (the one where CHANGE_REVISION_ID is null for that key of that resource), to modify its CHANGE_REVISION_ID to the DB id of the new revision, and to add a new row in the table for the same key and resource but with SET_REVISION_ID set to the DB id of the new (now current) revision and CHANGE_REVISION_ID set to null (a plain JDBC sketch of this is given after this list).
        • This way we can easily track history in the DB, get the value of a key of a resource at an arbitrary revision, find the previous revision, etc...
        • The writes of a revision can be retrieved from here (all the records where the DB id of the revision is the SET_REVISION_ID). That helps the skip algorithm.
      • T_KEY_VALUES
        • Has two columns: a foreign key to the DB id of T_KEY_VALUE_CHANGES and a Clob column for Immutables persisted as string or binary data.
        • This table contains the values for T_KEY_VALUE_CHANGES.
      • T_REVISION_READS
        • Has two columns: a foreign key to the DB id of T_REVISIONS and a VARCHAR column.
        • This table represents the read keys of a revision.
        • The VARCHAR column holds the full path of a key (for example children:Book.book.s2:children:Page A:title).
        • The table could be structured like the T_KEY_VALUE_CHANGES table, but here we don't need links for the history and don't care about actual values; the meaning of the reads is simpler, so the structure of the table is simple.
        • Reads are added rarely compared to writes, so the table is perfect for its purpose.
    • Schema diagram: DB Schema
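
To make the history-link mechanism of T_KEY_VALUE_CHANGES more concrete, here is a minimal plain-JDBC sketch of what changing a key could look like. The table name and the AUTO_SERVER URL come from the design above; the column names (RESOURCE_ID, KEY_ID), the DB file path and the literal ids are assumptions made only for this example.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;

    public class KeyChangeSketch {
        public static void main(String[] args) throws Exception {
            // Embedded H2 URL as described above; AUTO_SERVER=TRUE lets a second client attach.
            Connection conn = DriverManager.getConnection(
                    "jdbc:h2:./dist/server-db;AUTO_SERVER=TRUE", "sa", "");
            conn.setAutoCommit(false);
            long resourceId = 1L, keyId = 2L, newRevisionDbId = 42L; // illustrative ids
            try {
                // Close the current value row: point its CHANGE_REVISION_ID to the new revision.
                PreparedStatement close = conn.prepareStatement(
                        "UPDATE T_KEY_VALUE_CHANGES SET CHANGE_REVISION_ID = ? "
                        + "WHERE RESOURCE_ID = ? AND KEY_ID = ? AND CHANGE_REVISION_ID IS NULL");
                close.setLong(1, newRevisionDbId);
                close.setLong(2, resourceId);
                close.setLong(3, keyId);
                close.executeUpdate();
                // Open a new current row for the same key and resource.
                PreparedStatement open = conn.prepareStatement(
                        "INSERT INTO T_KEY_VALUE_CHANGES (RESOURCE_ID, KEY_ID, SET_REVISION_ID, CHANGE_REVISION_ID) "
                        + "VALUES (?, ?, ?, NULL)");
                open.setLong(1, resourceId);
                open.setLong(2, keyId);
                open.setLong(3, newRevisionDbId);
                open.executeUpdate();
                conn.commit();
            } catch (Exception e) {
                conn.rollback();
                throw e;
            } finally {
                conn.close();
            }
        }
    }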

JDBC Template

  • For our purposes using JDBC is enough; there is no need for an ORM like Hibernate and a bean model for it, because the database will be small.
  • The queries for selecting/updating/inserting data can be categorized, and there will be one class that provides methods for using them. This class will encapsulate all the actions using JDBC and will take care of the JDBC API for the developers.
  • The base API is placed in the org.sophie2.server.core.persistence.db.jdbc package of the server core module; some of the implementations of the interfaces that are used for specific actions with resources are in the org.sophie2.server.core.persistence.db package, where the Server Resource Persistence API is, i.e. the specific classes for managing resources.
  • Class : JdbcTemplate -> Responsible for releasing all the JDBC resources that are used for a query, like ResultSets and Statements. It is not responsible for managing JDBC Connections. (A usage sketch is given at the end of this section.)
    • The JdbcTemplate class can be constructed with a ConnectionManager (used for retrieving thread-safe connections to the DB and for commit and rollback of transactions).
    • Public methods:
      • public <T> T execute(SQLCallback<T> callback) -> Used for executing SQLCallbacks (actions that execute SQL queries) against the database. It returns the result of the callback. If there is any error, the current transaction to the DB is rolled back and the connection closed. (This method must be used by all methods executing queries; it takes care of errors in the JDBC or in the DB.)
      • public <T> T execute(final ConnectionCallback<T> action) throws JdbcException -> Used to execute a ConnectionCallback (actions which use JDBC connections). This execute method uses the one mentioned above, creating a SQLCallback that executes the ConnectionCallback in a connection retrieved by the ConnectionManager. (This method takes care of retrieving the connections; it releases the connection if there is no user-made transaction and no errors.)
      • public <T> List<T> queryForList(String sql, RowMapper<T> rowMapper, Object... parameters) -> Method for executing queries to the database which return list data, for example selecting a list of resource names.
      • public <T> T queryForObject(String sql, RowMapper<T> rowMapper, Object... parameters) -> Used for executing queries that have a single result, for example the name of a resource with a given database id.
      • public <T> T query(final String sql, final ResultSetMapper<T> mapper, final Object... parameters) -> Executes a query with a list of parameters, used by the two methods above.
      • public <T, P> T query(final String sql, final ResultSetMapper<T> mapper, final ParametersSetter<P> parametersSetter) -> The same as the above one, but its query parameters are managed by a given ParameterSetter.
      • public int update(final String sql, final Object... parameters) -> Used for a single update to the database; it returns the update count of the query.
      • public int[] updateBatch(final String sql, final Collection<Object[]> parameters) -> Executes a number of updates to the database with one query string but with different parameters; the single-update method above uses it with one batch.
      • public int[] updateBatch(final String sql, final ParametersSetter<Object[]> parametersSetter) -> The same as the above, but using a ParametersSetter to manage the parameters; used by the method above.
      • There are analogous insertBatch methods that execute inserts to the database, mirroring the updateBatch ones.
      • public ConnectionManager getConnectionManager() -> Getter of the ConnectionManager of the template.
      • commit() and rollback() methods for user-made transactions that can be committed or rolled back by the user; they use the ConnectionManager.
    • As you can see there are plenty of public methods, some of which should be protected or private (the batch methods, the query and execute methods). They are public because a user who cannot change the template may still want to write a query different from just inserting/updating one query or selecting one object or one list of objects. If you have an opinion on which of these methods should be private/protected, please write it in the review.
  • Interface : SQLCallback<T> -> Represents a callback to the database using sql query.
    • Public methods:
      • public T doSQL() throws SQLException -> Executes a SQL query using the JDBC API, which can throw SQLException...
    • Its implementations are used to manage the connection from the connection manager and use a ConnectionCallback.
  • Interface : ConnectionCallback<T> -> Represents a callback to the database with managed JDBC connection.
    • Public methods:
      • public T doInConnection(Connection conn) throws SQLException -> Executes a SQL query using the provided connection.
    • The SQLCallbacks retrieve and manage connections, and their doSQL method instantiates ConnectionCallbacks that use those connections. The execute methods of the JdbcTemplate deal with the SQLExceptions and, if needed, throw our JdbcException.
  • class : JdbcException -> The exception thrown when something is wrong with the JdbcTemplate queries. It is unchecked and in general represents an unchecked SQLException.
  • class : ConnectionManager -> Manages the connections to the database, provides thread-safe connections and provides methods for starting transactions and rolling back transactions.
    • The manager is constructed with a DataSource (from the JDBC API; the data source can provide connections from a pool, for reuse).
    • Public methods:
      • public Connection getCurrentConnection() -> Retrieves the current connection; if there is no such connection, gets one from the DataSource.
      • public void releaseCurrentConnection() throws SQLException -> Releases the current connection if it is not used by a transaction.
      • public void commitCurrentTransation() throws SQLException -> Commits the current transaction.
      • public void rollbackCurrentTransation() throws SQLException -> Rolls back the current transaction.
      • public DataSource getDataSource() -> Getter of the data source of the manager.
  • Interface : RowMapper<T> -> Used to map a row of data selected from the database to an object or list of objects.
    • Public methods:
      • public T mapRow(int rowNum, ResultSet rs) throws SQLException -> Maps a specified row of the passed ResultSet (JDBC API) to an object or list of objects.
    • Implementations:
  • Interface : ResultSetMapper<T> -> Maps a ResultSet (JDBC API, contains a result from query) to an Object or list of objects, its implementations often use RowMappers to iterate over the ResultSets.
    • Public methods:
      • public void mapRow(int rowNum, ResultSet resSet) throws SQLException -> Maps a row from the ResultSet to a value.
      • public T getResult() -> Getter for the result of the mapping. Usually the implementer will iterate through a ResultSet, calling mapRow of the mapper for each of its rows, and after all the rows are iterated will retrieve the result with this method. Often the implementations use RowMappers to map the columns of a given row.
    • Implementations:
      • KeyMapper -> Maps a result to pair keypath and its id in the database.
      • ListMapper<T> -> Maps a result to a list of objects, uses RowMapper to map the separate rows.
      • SingleResultMapper<T> -> Maps a result to a single object, used with select queries that return only one row.
  • Interface : ParameterSetter<T> -> Manages setting the parameters to a PreparedStatement (JDBC API)
    • Public methods:
      • public void setParameters(int batchIndex, PreparedStatement stmnt) throws SQLException -> Sets parameters on a specified statement; it can set different parameters on one statement, and its first parameter specifies which batch of parameters will be set.
      • public int getBatchesCount() -> The number of batches of parameters in the setter.
    • Implementations:
      • DefaultParameterSetter -> Manages setting the parameters on a given statement. It is constructed with its parameters, represented as an array of objects for each batch. All the batches are contained in a Collection.
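
To illustrate how the template pieces above fit together, here is a rough usage sketch. JdbcTemplate, ConnectionManager, RowMapper, queryForList and the construction of the manager from a DataSource follow the descriptions above; the table and column names and the childNames helper are assumptions made only for this example.

    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.util.List;
    import javax.sql.DataSource;

    public class JdbcTemplateUsageSketch {

        public static List<String> childNames(DataSource dataSource, long parentId) {
            ConnectionManager manager = new ConnectionManager(dataSource);
            JdbcTemplate template = new JdbcTemplate(manager);

            // A RowMapper turning each selected row into a resource name.
            RowMapper<String> nameMapper = new RowMapper<String>() {
                public String mapRow(int rowNum, ResultSet rs) throws SQLException {
                    return rs.getString("NAME");
                }
            };

            // Select the names of all children of the given resource.
            return template.queryForList(
                    "SELECT NAME FROM T_RESOURCES WHERE PARENT_ID = ?", nameMapper, parentId);
        }
    }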

The API Used by the Facade and the Web Interface

ResourceDAO

  • DAOs (Data Access Objects) are something like services that have simple methods for managing a database. They contain only logic that keeps the data in the database valid and logic for selecting specific data.
  • Our DAOs are meant to use the JdbcTemplate and the ConnectionManager to manage the database.
  • There is only one DAO for now - ResourceDAO; in the future there can be more, a SecurityDAO for example.
  • Class : ResourceDAO -> Provides simple methods to persist/retrieve and edit resources, their keys and revisions (history).
    • Keeps inner constants with the names of the queries stored in the queries.properties file in the server.core module.
    • Uses the SqlQueries util to load the queries from the properties file.
    • Can be constructed only with an instance of JdbcTemplate. So to use the DAO a developer needs to retrieve a JDBC DataSource to a database, construct a JdbcTemplate from it and use the template to instantiate the DAO (see the sketch after this list).
    • Public methods:
      • public JdbcTemplate getJdbcTemplate() -> Getter of the JDBCTemplate in the DAO.
      • public Long findResourceIdByPath(String resourcePath) -> Finds the DB id of a resource by its path on the server, for example "/resources/My Book.book.s2". With that id the resource can be edited or viewed freely.
      • public Long createResource(final Long revisionId, DBResource resource) -> Creates a resource in the database and flags it as 'alive'. Returns the DB id of the new resource. The DBResource POJO keeps the data of the new resource to be inserted.
      • public Long createRevision(DBRevision revision) -> Creates a revision in the database; this needs to be done before creating a resource, for example.
      • public Long findOrCreateKey(String keyPath) -> Retrieves the DB id of a key; if the key does not exist, creates a record for it in the DB. Proper key paths are, for example, 'kind' or 'title'.
      • public void changeKeys(final Long revisionId, final Set<DBKeyChange> changes) -> Changes the specified key values in the DB, and for that change a new revision for the resource owning the keys is created. The revisions are global for now, so every revision changes the main resources directory. Only the sub-resources that have changed keys are taken into account when retrieving the changes for the revision, though.
      • public void readKeys(Long revisionId, List<Key<?>> reads) -> Registers reads for specified keys at a given revision.
      • public Map<Key<?>, Long> getRevisionCausingWrites(String resourcePath, String revisionId), together with the analogous getRevisionCausingReads -> Used to retrieve the changed and read keys causing a specified revision. Used for the skip algorithm.
      • public Map<Key<?>, Long> getRevisionModel(String resourcePath, String revisionId) -> Gets a model skeleton for a given revision of a given resource; this skeleton is useful for building a lazy DB ResourceModel. public String getValueForKey(Long valueId) is used with the value ids in the result map corresponding to keys.
      • public String findRevisionNotAfter(String revisionId) -> Finds the revision id of the revision which is the passed one or, if the passed one does not exist, the last existing one before it. Can be used with the FUTURE revision id to find the current revision on the server.
      • public String findValue(String resourcePath, String keyPath, String revisionId) -> Finds the value of a key of a specified resource at a specified revision.
      • public String getPreviousRevision(String revId, String prefix) -> Retrieves the previous revision of a resource with the specified revision.
      • public SortedMap<String, String> findHistory(String resourcePath, String from, String to, Integer offset, Integer limit) -> Retrieves the history for a resource (specified by its path) in the form of a map containing revision ids as keys and, as values, the causing changes of the revisions with those ids (persisted to strings).
      • public List<String> findChildResources(String resourcePath) -> Retrieves the value of the ChildrenKey.
      • public List<String> findResourceKeys(String resourcePath) -> Finds the names of all the modified keys for a resource.
      • All these methods exist because of the needs of the resource model logic, like retrieving history, the model at a revision, skipping, and changing and reading keys.
    • Objects used to contain data for the ResourceDAO:
      • DBKeyChange -> Contains a value for a key of a given resource; constructed with the DB ids of the key and the resource and the value as an IO Reader. The JDBC API can persist streams.
      • DBResource -> Contains a name for a new resource and its parent resource's DB id.
      • DBRevision -> Contains a revision id for a new revision and its causing change stored to an IO Reader.
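
To show how the DAO is wired, here is a short sketch following the description above (DataSource -> JdbcTemplate -> ResourceDAO). The H2 JdbcDataSource setup, the DB file path and the resource/key paths are assumptions made only for this example; the DAO methods are the ones listed above.

    import org.h2.jdbcx.JdbcDataSource;

    public class ResourceDaoSketch {

        public static String readTitle() {
            JdbcDataSource dataSource = new JdbcDataSource();
            dataSource.setURL("jdbc:h2:./dist/server-db;AUTO_SERVER=TRUE");

            // The DAO is constructed only with a JdbcTemplate, as described above.
            ResourceDAO dao = new ResourceDAO(new JdbcTemplate(new ConnectionManager(dataSource)));

            // Look up the DB id of a resource by its server path (illustrative only).
            Long bookId = dao.findResourceIdByPath("/resources/My Book.book.s2");

            // Read one of its keys at the current revision, using the FUTURE id as mentioned above.
            String currentRevision = dao.findRevisionNotAfter("FUTURE");
            return dao.findValue("/resources/My Book.book.s2", "title", currentRevision);
        }
    }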

Resource Helpers

  • The old logic will be kept as a mock server in the memory only.
    • For that to happen, ServerResourceHelpers will be registered as extensions. These helpers will be used to retrieve ResourceServices for the facade, to open accesses to resources and to initialize the server module environment.
    • Maybe in the s2s ServerModule there will be an extension point for them. Ideas from the reviewers here? And you can give better names for the classes too.
    • Interface : ServerResourceHelper -> Initializes the Server model environment (like the DB or a connection to the DB, a ResourceLocator or something else). A sketch of the interface is given after this list.
      • Methods:
        • void initialize() -> Initializes the environment, for example creates a ResourceLocator -> AppLocator or creates a connection to a DB.
        • ResourceService getResourceService(String serverUrl) -> Retrieves ResourceServices; for example the old mem access implementation is moved into such a service. The ServerFacade works with the services and they generate its Response to the client.
        • ResourceAccess getResourceAccess(ResourceRefR4 ref, String serverLocation) -> Retrieves (opens) a ResourceAccess to a resource specified by its ref. Accesses for books on the server can be retrieved this way, for example.
    • Implementations - For now two, the old mem access implementation and the new DB implementation:
      • ServerResourceHelperAccessImpl -> Creates a memory resource dir when initialized and opens memory accesses to resources in it. The ResourceService provided by it works with memory resources.
      • ServerResourceHelperDBImpl -> When initialized it connects to the database in the dist directory of the core module; if the schema does not exist, it creates it using the schema.sql file and puts a resource directory in it (the main directory), creates services using a DAO created from a JdbcTemplate connected to the DB, and opens DBResourceAccesses to the database.
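
A sketch of the helper contract, taken from the method list above (exact modifiers and declarations in the branch may differ):

    // A sketch of the ServerResourceHelper interface as described above; the actual declaration may differ.
    public interface ServerResourceHelper {

        // Prepares the server environment, e.g. creates a ResourceLocator or opens the DB connection.
        void initialize();

        // Returns the ResourceService the ServerFacade should use for the given server URL.
        ResourceService getResourceService(String serverUrl);

        // Opens a ResourceAccess to the resource identified by ref, relative to the server location.
        ResourceAccess getResourceAccess(ResourceRefR4 ref, String serverLocation);
    }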

Resource Services

  • The services manage the resource model.
  • They are used by the ServerFacade and contain some of the Response logic, like the ResponseExceptions thrown if there is some problem on the server.
  • The services are something like a server alternative of the ResourceAccesses; there is a special DBResourceAccess that uses a service to adapt the logic of the accesses on the server. However, the facade needs the Response logic of the services and works with them directly.
  • Interface : ResourceService -> Can be retrieved from the Resource Helpers and constructed/cached by them. (A sketch of the contract is given after this list.)
    • Methods:
      • <T> T findValue(ResourceRefR4 ref, Key<T> key, RevisionId revisionId) throws ResponseException -> Finds the value of a key for the resource specified by the passed ref at the specified revision. If there is a problem finding the value, throws a ResponseException for the ServerFacade. Can retrieve any value at any revision of the resource.
      • <T> T findValue(ResourceRefR4 ref, Key<T> key) throws ResponseException -> The same as the above but for the current revision. This method is the server alternative of ResourceAccess.getRaw(Key<T>).
      • List<HistoryEntry> findHistory(ResourceRefR4 ref, RevisionId from, RevisionId to, int offset, int limit) throws ResponseException -> Retrieves the history for a given resource in a given state of its existence. Needed for syncing from the Facade.
      • public RevisionId changeResource(ResourceRefR4 ref, Change change) throws ResponseException -> Registers a change; alternative to the ResourceAccess' registerChange method.
    • Implementations:
      • ResourceServiceAccessImpl -> The implementation until now, using the mem accesses. The old logic stays here.
      • ResourceServiceDBImpl -> The new logic for managing the resource model on server:
        • Classes are needed to turn the resource object model (Keys, ResourceModels, Changes, Immutables, etc.) into data storable in the DB.
          • Keys -> They have values and ids; in the database there are only simple ids. The table that connects resource, revision and key with a value needs the key parts to be iterated (and, if the path does not exist, to be created in the DB; if the server receives a key path it has already been created by a changer), and for the last part, for the given sub-resource, the new value is set or a value is read from the DB.
          • Resources -> The same as the keys; making a new resource is just a set of a children key.
          • Revisions -> Their id is storable in the database (String) and their causing Changes are immutables.
          • Immutables -> Using the Persistence API they are stored to XML or binary data, which is passed as a stream (IO Reader) to the JDBC API, which can handle it.
          • So the only problem is finding a key path in the database and creating the missing paths (Resources), if any, for it.
            • For that, the class ResourceChangeHierarchy is used. It iterates over the Key parts and divides them into paths to Resources and normal paths; for the last part of the key it builds the resource path in the database, and at that stage the ResourceDAO's methods with key paths and resource paths can be used. The hierarchy uses ResourceVisitors to update or check resources and to retrieve, update, set and check keys. The visitors have two methods - visitResource and visitKey.
      • Finding values:
        • The service uses the hierarchy with a special visitor, ReadKeyVisitor, which reads a key from the DB. Its visitResource method doesn't do anything, but the public void visitKey(String resourcePath, String keyPath, Object value) throws ResponseException method uses the resource path built by the hierarchy and the keyPath with the findValue method of the ResourceDAO to directly retrieve the value of the key from the database. There are two problems here, the Children and Root keys, which are not kept just as immutable values in the database. For that reason this visitor extends the RootAndChildrenVisitor class, which handles them with the findResourceKeys and findChildResources methods of the ResourceDAO.
        • All key-based visitors extend KeyResourceVisitor, which divides the keys into three kinds - regular/children/root - because their values in the database are set or retrieved differently.
      • Finding the history: this is simple, the findHistory method of the ResourceDAO is used directly and then the String representations of the changes in the database are converted to Change objects with the Sophie 2 Persistence API.
    • Doing changes -> The third function of the service is more complicated because of the different types of changes, so it is described in a separate section.
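
The service contract described above, summarized as an interface sketch (the method list is taken from the bullets above; the actual declaration in the branch may differ slightly):

    // Sketch of the ResourceService contract from the method list above.
    public interface ResourceService {

        // Value of a key for the referenced resource at a specific revision.
        <T> T findValue(ResourceRefR4 ref, Key<T> key, RevisionId revisionId) throws ResponseException;

        // Same as above, but for the current (head) revision; the server alternative of ResourceAccess.getRaw(Key<T>).
        <T> T findValue(ResourceRefR4 ref, Key<T> key) throws ResponseException;

        // History entries for the resource between two revisions, with paging.
        List<HistoryEntry> findHistory(ResourceRefR4 ref, RevisionId from, RevisionId to,
                int offset, int limit) throws ResponseException;

        // Registers a change and returns the id of the revision it produces.
        RevisionId changeResource(ResourceRefR4 ref, Change change) throws ResponseException;
    }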

Doing changes

  • Basically the implementation of the changeResource method of the Service for DB.
  • Everything is done in one transaction; if some part of the change cannot be persisted, all the changes are rolled back.
  • Algorithm (a rough code outline is sketched after this list):
    • Two things must be stored: the Change persisted to XML and the new values of the keys (the ChangeEffect). Also, a new revision should be created.
    • Persisting the Change to XML is easy; it is Immutable and can be persisted with the Sophie 2 Persistence API. A Reader is connected to the persisted value.
    • Getting the Effect:
      • If the change is MetaChange (Skip/Unskip/Undo/Redo) the skip algorithm is used to get its effect.
      • If the change is a ModelChange (a change created from an AutoAction) the effect is retrieved from its getEffect method, which uses a ResourceChanger. For the model a ResourceReader is used whose getRaw gets the values from the DB using the findValue method of the service.
      • When the change is stored to a reader and the effect is retrieved, a new revision is created in the DB using the ResourceDAO's createRevision method, with a new revision id and the IO Reader to the stored Change as parameters.
      • The reads from the ChangeEffect are stored to the DB via the readKeys method of the DAO.
      • The writes from the ChangeEffect are stored to the DB via the changeKeys method of the DAO.
        • They are converted to DB format using a special ResourceVisitor that creates the paths for the Resources (the way of setting root and children keys in the DB). It stores the regular keys to IO Readers.
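
A rough outline in code of the changeResource flow described above. The DAO calls (createRevision, readKeys, changeKeys) and the commit/rollback through the JdbcTemplate come from the design above; persistChangeToXml, computeEffect, toDbKeyChanges, newRevisionId, the ChangeEffect accessors, the DBRevision constructor and the exception wrapping are hypothetical names used only to show the order of the steps.

    // Hypothetical outline of the DB service's changeResource; not the actual implementation.
    public RevisionId changeResource(ResourceRefR4 ref, Change change) throws ResponseException {
        try {
            Reader persistedChange = persistChangeToXml(change);   // Sophie 2 Persistence API
            ChangeEffect effect = computeEffect(ref, change);      // skip algorithm or ModelChange.getEffect()
            String newRevisionId = newRevisionId();                // hypothetical id factory

            // Everything below happens in one transaction.
            Long revisionDbId = dao.createRevision(new DBRevision(newRevisionId, persistedChange));
            dao.readKeys(revisionDbId, effect.getReads());                     // register the reads
            dao.changeKeys(revisionDbId, toDbKeyChanges(effect.getWrites()));  // store the new key values
            dao.getJdbcTemplate().commit();
            return toRevisionId(newRevisionId);                    // hypothetical conversion
        } catch (Exception e) {
            dao.getJdbcTemplate().rollback();                      // any failure rolls back the whole change
            throw new ResponseException("Could not register the change", e);   // hypothetical constructor
        }
    }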

Redo/Undo algorithm

  • The current logic of skipping changes is generalized and moved to a special class, SkipRevisionAlgorithm (think of a better name if you care...). It uses a special interface, SkipContext, to retrieve the data needed by the algorithm to skip the changes.
  • Interface : SkipContext (inner to SkipRevisionAlgorithm; if you want, it can be pulled up). Represents the data (ResourceModel, writes, reads) for a given revision; the algorithm works with such contexts, not revisions, and retrieves the data from them. This way we can use different implementations to retrieve the data from different places - DB, Accesses, something else. (A sketch of the contract is given after this list.)
    • Methods:
      • RevisionId getId() -> Gets the id of the revision of the SkipContext.
      • SkipContext getPreviousRevisionContext() -> Retrieves the previous context (the SkipContext of the previous revision).
      • ResourceReader getReader() -> Gets the model of this context. It is a reader because the model can be something different from a simple ResourceModel.
      • ResourceReader getModifiedReader(Map<Key<?>, Object> writes) -> Getter for a modified resource reader which behaves as the one returned by getReader() but with the modified values specified by writes.
      • Change getCausingChange() -> Retrieves the causing change of the revision of the SkipContext.
      • Iterable<Key<?>> getWrittenKeys() and Iterable<Key<?>> getReads() -> The writes and the reads for the revision of the context.
    • Implementations :
      • Class : ResourceRevisionSkipContext (better name?) - embedded in the ResourceRevision class; the default SkipContext that uses ChangeEffect and ResourceModel to implement its methods. The old implementation.
      • Class : DBChangeContext -> The db implementation of the SkipContext:
        • Getting the model -> A lazy reader to the database is created.
          • If there is a model generated in the form of Key to Long (the DB id of the value of the key), a RootAndChildrenVisitor implementation is used to retrieve the value of the key from the DB. If the key is Root or Children, the RootAndChildrenVisitor has the logic to retrieve it; if it is regular, the DAO's getValueForKey method is used. The values are cached after retrieving.
          • If there is no model initialized yet but we have a previous context initialized (it is lazily initialized when wanted with the getPreviousRevisionContext method), the value is retrieved from its model or from the current context's writes (if they contain it).
          • If there is no model initialized and no previous context, the model is initialized via the DAO's getRevisionModel method and the value is retrieved from it using the logic described above.
        • Getting the read and written keys -> Simply uses the DAO's getRevisionCausingReads and getRevisionCausingWrites methods; lazy again.
        • Getting the causing change -> Uses the DAO's findHistory method in this way: findHistory(the_path_of_the_context_resource, revision_id_of_the_context, revision_id_of_the_context, 0, 1). Lazy.
        • Getting the previous context -> Lazy again; a new context for the same resource but with the id of the previous revision is constructed. The previous revision is retrieved with the DAO's getPreviousRevision method.
        • Getting the modified reader -> A reader whose getRaw method uses the reader from the getReader method if the wanted key is not in the writes; otherwise it retrieves the key from the writes.
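
A sketch of the SkipContext contract, taken from the method list above (the real interface is nested in SkipRevisionAlgorithm and may differ in detail):

    public interface SkipContext {
        RevisionId getId();                                            // revision this context describes
        SkipContext getPreviousRevisionContext();                      // context of the previous revision
        ResourceReader getReader();                                    // model of the resource at this revision
        ResourceReader getModifiedReader(Map<Key<?>, Object> writes);  // as getReader(), with the given writes applied
        Change getCausingChange();                                     // change that produced this revision
        Iterable<Key<?>> getWrittenKeys();                             // keys written by the causing change
        Iterable<Key<?>> getReads();                                   // keys read by the causing change
    }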

Web interface

  • The web interface currently works with ResourceHs. The logic will be kept, but the helpers will be constructed with a special ResourceAccess implementation which uses the ResourceServiceDBImpl to retrieve values and change resources.
  • This access should extend the DelegatingAccess class to be able to register changes. The getRef method of the DelegatingAccess class will be reworked so that it is no longer final, but it must always return an absolute ref. In the current case the ref it returns is absolute too. There is a comment there about that.
    • Class DBResourceAccess -> The implementation of DelegatingAccess that delegates to the DB via ResourceServiceDBImpl
      • The constructor is private. Such an access can be created with open, which calls the static method public static DBResourceAccess findWebAccess(ResourceServiceDBImpl service, ResourceRefR4 ref, String parentLocation), or with that method itself. (See the usage sketch after this list.)
        • The service is used to manipulate the DB, the resource ref can be an absolute or relative ref to the resource whose access is wanted, and the parent location is the full location of the parent resource (for example http://localhost:8003/resources).
      • Getting raw values -> Uses the findValue of its service with parameters generated from the parent location, the resource ref and the wanted key.
      • Registering changes -> Calls directly the changeResource method of the service. (The ResponseException here could be rethrown, but maybe if the change cannot be registered it should be caught silently. The access must be used only on the server, so the ResponseException used to notify clients is not important.)
      • The cloneHeadRevision method is overridden because of the persistence logic; it uses a special resource model whose getRaw method uses the getReader of the DBChangeContext of the current revision.
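
A short usage sketch of the web access: findWebAccess and its parameters are from the description above; the helper method, the ref argument and the key parameter are illustrative only.

    // Sketch: how the web layer could obtain a DB-backed access and read a raw value through it.
    static Object readRawValue(ResourceServiceDBImpl service, ResourceRefR4 bookRef, Key<?> key) {
        DBResourceAccess access = DBResourceAccess.findWebAccess(
                service, bookRef, "http://localhost:8003/resources");
        // Raw values (and registered changes) now go through the DB service instead of the old memory accesses.
        return access.getRaw(key);
    }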

Problems

  • Modules and names -> We should decide whether to rename some things, move them and, if needed, create new modules (a Persistence module for the persistence implementations and a Mock module for the old implementations).
  • Bugs
    • A bug with the locations still exists: you cannot upload a resource from the Web UI with the same location as an existing, deleted one. The problem is that the lives in the DB are not used properly.
    • There is a problem with setting a whole model as a value for a key in the DB. Upload from the client makes the book buggy if it is not reopened.
  • Maybe the Experience team should try it from the branch to find hidden bugs, if there are any.

Tests

  • DBTest -> The test to be extended by all the DB tests.
  • DBModelTest -> Test parent for all tests using the database with the S2S persistence schema; it creates the schema and deletes the DB after testing.
  • DBContextTest -> Tests the DBChangeContext class and with it most of the ResourceDAO's methods.
  • ResourceDAOTest -> Tests the most important of the ResourceDAO's methods.
  • The web UI was tested manually.

Implementation

The implementation is done according to the design in: branches/private/tsachev/paragraphs
There is still a bug with the location of book elements that were added before the book was uploaded.
There are still various bugs when many people are working on the same book.

Testing

(Place the testing results here.)

Comments

(Write comments for this or later revisions here.)

Attachments