2.1.3.6. SQL Database-based Data Services¶
This module defines a general (but abstract) implementation of the EDM-based data-access-layer (DAL) using Python’s DB API: http://www.python.org/dev/peps/pep-0249/
It also contains a concrete implementation derived from the above that uses the standard SQLite module for storage. For more information about SQLite see: http://www.sqlite.org/
2.1.3.6.1. Data Access Layer API¶
2.1.3.6.1.1. Entity Containers¶
There are primarily two use cases here:
- Create a derived class of SQLEntityContainer to provide platform specific modifications to the way SQL queries are constructed and database connections are created and managed.
- Create a derived class of SQLEntityContainer to provide modified name mappings for a specific database and metadata model.
These two use cases can be supported through multiple (diamond) inheritance. This makes it easier for you to separate the code required. In practice, implementations for different database platforms are likely to be shared (perhaps as part of future releases of Pyslet itself) whereas modifications to the name mangler to map this API to an existing database will be project specific.
For example, to achieve platform specific modifications you’ll override SQLEntityContainer and provide new implementations for methods such as SQLEntityContainer.get_collection_class():
class MyDBContainer(SQLEntityContainer):

    def get_collection_class(self):
        return MyDBEntityCollection
To achieve modified property name mappings you’ll override SQLEntityContainer and provide new implementations for methods such as SQLEntityContainer.mangle_name():
class SouthwindDBContainer(SQLEntityContainer):

    def mangle_name(self, source_path):
        # do some custom name mangling here....
        # unrecognized names must still be passed to the default implementation
        return super(SouthwindDBContainer, self).mangle_name(source_path)
Normally, you’ll want to achieve both use cases, so to actually instantiate your database you’ll select the container class that represents the database platform and then combine it with the class that contains your data-specific modifications:
import MyDB, Southwind

# easy to configure constants at the top of your script
DBCONTAINER_CLASS = MyDB.MyDBContainer
DBCONTAINER_ARGS = {
    'username': "southwind",
    'password': "password"
}
MAX_CONNECTIONS = 100

class SouthwindDB(Southwind.SouthwindDBContainer, DBCONTAINER_CLASS):
    pass

# .... load the metadata from disk and then do something like this
db = SouthwindDB(container=SouthwindMetadata,
                 max_connections=MAX_CONNECTIONS, **DBCONTAINER_ARGS)
- class pyslet.odata2.sqlds.SQLEntityContainer(container, dbapi, streamstore=None, max_connections=10, field_name_joiner=u'_', **kwargs)¶
Bases: object
Object used to represent an Entity Container (aka Database).
Keyword arguments on construction:
- container
- The EntityContainer that defines this database.
- streamstore
- An optional StreamStore that will be used to store media resources in the container. If absent, media resource actions will raise NotImplementedError.
- dbapi
The DB API v2 compatible module to use to connect to the database.
This implementation is compatible with modules regardless of their thread-safety level (provided they declare it correctly!).
- max_connections (optional)
The maximum number of connections to open to the database. If your program attempts to open more than this number (defaults to 10) then it will block until a connection becomes free. Connections are always shared within the same thread so this argument should be set to the expected maximum number of threads that will access the database.
If using a module with thread-safety level 0 max_connections is ignored and is effectively 1, so use of the API is then best confined to single-threaded programs. Multi-threaded programs can still use the API but it will block when there is contention for access to the module and context switches will force the database connection to be closed and reopened.
- field_name_joiner (optional)
- The character used by the name mangler to join compound names, for example, to obtain the column name of a complex property like “Address/City”. The default is “_”, resulting in names like “Address_City” but it can be changed here. Note: all names are quoted using quote_identifier() before appearing in SQL statements.
This class is designed to work with diamond inheritance and super. All derived classes must call __init__ through super and pass all unused keyword arguments. For example:
class MyDBContainer:

    def __init__(self, myDBConfig, **kwargs):
        super(MyDBContainer, self).__init__(**kwargs)
        # do something with myDBConfig....
- container = None¶
the EntityContainer
- streamstore = None¶
the optional StreamStore
- dbapi = None¶
the DB API compatible module
- module_lock = None¶
a lock used to serialise access to the DB API module where its thread-safety level requires it
- fk_table = None¶
A mapping from an entity set name to a FK mapping of the form:
{<association set end>: (<nullable flag>, <unique keys flag>),...}
The outer mapping has one entry for each entity set (even if the corresponding foreign key mapping is empty).
Each foreign key mapping has one entry for each foreign key reference that must appear in that entity set’s table. The key is an AssociationSetEnd that is bound to the entity set (the other end will be bound to the target entity set). This allows us to distinguish between the two ends of a recursive association.
- aux_table = None¶
A mapping from the names of symmetric association sets to:
(<entity set A>, <name prefix A>, <entity set B>,
<name prefix B>, <unique keys>)
- mangled_names = None¶
A mapping from source path tuples to mangled and quoted names to use in SQL queries. For example:
(u'Customer',) : u'"Customer"'
(u'Customer', u'Address', u'City') : u'"Address_City"'
(u'Customer', u'Orders') : u'"Customer_Orders"'
Note that the first element of the tuple is the entity set name but the default implementation does not use this in the mangled name for primitive fields, as they are qualified in contexts where a name clash is possible. However, mangled navigation property names do include the table name prefix as they are used as pseudo-table names.
- field_name_joiner = None¶
Default string used to join complex field names in SQL queries, e.g. Address_City
- ro_names = None¶
The set of names that should be considered read only by the SQL insert and update generation code. The items in the set are source paths, as per mangled_names. The set is populated on construction using the ro_name() method.
- mangle_name(source_path)¶
Mangles a source path into a quoted SQL name
This is a key extension point to use when you are wrapping an existing database with the API. It allows you to control the names used for entity sets (tables) and properties (columns) in SQL queries.
- source_path
A tuple or list of strings describing the path to a property in the metadata model.
For entity sets, this is a tuple with a single entry in it, the entity set name.
For data properties this is a tuple containing the path, including the entity set name e.g., (“Customers”,”Address”,”City”) for the City property in a complex property ‘Address’ in entity set “Customers”.
For navigation properties the tuple is the navigation property name prefixed with the entity set name, e.g., (“Customers”,”Orders”). This name is only used as a SQL alias for the target table, to remove ambiguity from certain queries that include a join across the navigation property. The mangled name must be distinct from the entity set name itself, from other such aliases, and from other column names in this table.
Foreign key properties contain paths starting with both the entity set and the association set names (see SQLForeignKeyCollection for details) unless the association is symmetric, in which case they also contain the navigation property name (see SQLAssociationCollection for details of these more complex cases).
The default implementation strips the entity set name away and uses the default joining character to create a compound name before calling quote_identifier() to obtain the SQL string. All names are mangled once, on construction, and from then on looked up in the dictionary of mangled names.
If you need to override this method to modify the names used in your database you should ensure all other names (including any unrecognized by your program) are passed to the default implementation for mangling.
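The behaviour of the default implementation can be pictured with a standalone sketch (illustrative only: the real method pre-computes its results on construction and treats navigation and foreign-key paths specially):

```python
def quote_identifier(identifier):
    # default quoting: strip any double quotes, then enclose in them
    return '"%s"' % identifier.replace('"', '')

def mangle_name(source_path, joiner=u"_"):
    if len(source_path) == 1:
        # an entity set: the single entry is the table name
        return quote_identifier(source_path[0])
    # a data property: strip the entity set name and join the rest
    return quote_identifier(joiner.join(source_path[1:]))

print(mangle_name((u"Customers",)))                      # prints "Customers"
print(mangle_name((u"Customers", u"Address", u"City")))  # prints "Address_City"
```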
- ro_name(source_path)¶
Test if a source_path identifies a read-only property
This is an additional extension point to use when you are wrapping an existing database with the API. It allows you to manage situations where an entity property has an implied value and should be treated as read only.
There are two key use cases, auto-generated primary keys (such as auto-increment integer keys) and foreign keys which are exposed explicitly as foreign keys and should only be updated through an associated navigation property.
- source_path
- A tuple or list of strings describing the path to a property in the metadata model. See mangle_name() for more information.
The default implementation returns False.
If you override this method you must ensure all other names (including any unrecognized by your program) are passed to the default implementation using super.
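For example (a sketch only: the base class here is a minimal stand-in for SQLEntityContainer, and the table and column names are hypothetical):

```python
class SQLEntityContainerStub:
    """Minimal stand-in for SQLEntityContainer; only ro_name is sketched."""

    def ro_name(self, source_path):
        # the default implementation: nothing is read only
        return False

class SouthwindDBContainer(SQLEntityContainerStub):
    # Hypothetical read-only paths: an auto-increment primary key and a
    # foreign key that should only be updated via its navigation property
    RO_PATHS = {("Orders", "OrderID"), ("Orders", "CustomerID")}

    def ro_name(self, source_path):
        if tuple(source_path) in self.RO_PATHS:
            return True
        # everything else must be passed to the default implementation
        return super(SouthwindDBContainer, self).ro_name(source_path)
```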
- source_path_generator(entity_set)¶
Utility generator for source path tuples for entity_set
- get_collection_class()¶
Returns the collection class used to represent a generic entity set.
Override this method to provide a class derived from SQLEntityCollection when you are customising this implementation for a specific database engine.
- get_symmetric_navigation_class()¶
Returns the collection class used to represent a symmetric relation.
Override this method to provide a class derived from SQLAssociationCollection when you are customising this implementation for a specific database engine.
- get_fk_class()¶
Returns the class used when the FK is in the source table.
Override this method to provide a class derived from SQLForeignKeyCollection when you are customising this implementation for a specific database engine.
- get_rk_class()¶
Returns the class used when the FK is in the target table.
Override this method to provide a class derived from SQLReverseKeyCollection when you are customising this implementation for a specific database engine.
- create_all_tables()¶
Creates all tables in this container.
Tables are created in a sensible order to ensure that foreign key constraints do not fail. However, this method is not compatible with databases that contain circular references, e.g., Table A -> Table B with a foreign key and Table B -> Table A with a foreign key. Such databases will have to be created by hand. You can use the create_table_query methods as a starting point for your script.
- open()¶
Creates and returns a new connection object.
Must be overridden by database specific implementations because the underlying DB API does not provide a standard method of connecting.
- break_connection(connection)¶
Called when closing or cleaning up locked connections.
This method is called when the connection is locked (by a different thread) and the caller wants to force that thread to relinquish control.
The assumption is that the database is stuck in some lengthy transaction and that break_connection can be used to terminate the transaction and force an exception in the thread that initiated it - resulting in a subsequent call to release_connection() and a state which enables this thread to acquire the connection’s lock so that it can close it.
The default implementation does nothing, which might cause the close method to stall until the other thread relinquishes control normally.
- close(timeout=5)¶
Closes this database.
This method goes through each open connection and attempts to acquire it and then close it. The object is put into a mode that disables acquire_connection() (it returns None from now on).
- timeout
Defaults to 5 seconds. If connections are locked by other running threads we wait for those threads to release them, calling break_connection() to speed up termination if possible.
If None (not recommended!) this method will block indefinitely until all threads properly call release_connection().
Any locks we fail to acquire within the timeout are ignored and the connections are left open for the Python garbage collector to dispose of.
- quote_identifier(identifier)¶
Given an identifier returns a safely quoted form of it.
By default we strip double quotes and then use them to enclose the identifier. E.g., if the string u'Employee_Name' is passed then the string u'"Employee_Name"' is returned.
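A platform that quotes identifiers differently would override this method. The sketch below shows the default convention alongside a hypothetical backtick-based override (the backtick variant is illustrative, not part of Pyslet):

```python
def quote_identifier_default(identifier):
    # the default: strip double quotes, then enclose in them
    return '"%s"' % identifier.replace('"', '')

def quote_identifier_backtick(identifier):
    # hypothetical override for a platform that quotes with backticks
    return "`%s`" % identifier.replace("`", "")

print(quote_identifier_default(u"Employee_Name"))   # prints "Employee_Name"
print(quote_identifier_backtick(u"Employee_Name"))  # prints `Employee_Name`
```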
- prepare_sql_type(simple_value, params, nullable=None)¶
Given a simple value, returns a SQL-formatted name of its type.
Used to construct CREATE TABLE queries.
- simple_value
- A pyslet.odata2.csdl.SimpleValue instance which should have been created from a suitable pyslet.odata2.csdl.Property definition.
- params
- A SQLParams object. If simple_value is non-NULL, a DEFAULT value is added as part of the type definition.
- nullable
- Optional Boolean that can be used to override the nullable status of the associated property definition.
For example, if the value was created from an Int32 non-nullable property and has value 0 then this might return the string u'INTEGER NOT NULL DEFAULT ?' with 0 being added to params.
You should override this implementation if your database platform requires special handling of certain datatypes. The default mappings are given below.
EDM Type            SQL Equivalent
Edm.Binary          BINARY(MaxLength) if FixedLength specified
Edm.Binary          VARBINARY(MaxLength) if no FixedLength
Edm.Boolean         BOOLEAN
Edm.Byte            SMALLINT
Edm.DateTime        TIMESTAMP
Edm.DateTimeOffset  CHARACTER(20), ISO 8601 string representation is used
Edm.Decimal         DECIMAL(Precision,Scale), defaults 10,0
Edm.Double          FLOAT
Edm.Guid            BINARY(16)
Edm.Int16           SMALLINT
Edm.Int32           INTEGER
Edm.Int64           BIGINT
Edm.SByte           SMALLINT
Edm.Single          REAL
Edm.String          CHAR(MaxLength) or VARCHAR(MaxLength)
Edm.String          NCHAR(MaxLength) or NVARCHAR(MaxLength) if Unicode="true"
Edm.Time            TIME
Parameterized CREATE TABLE queries are unreliable in my experience so the current implementation of the native create_table methods ignores default values when calling this method.
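A subset of the table above can be sketched as a simple lookup (illustrative only: the function shape here does not match prepare_sql_type's real signature, which works on SimpleValue instances and a params object):

```python
# Illustrative subset of the default EDM -> SQL type mapping
EDM_SQL_TYPES = {
    "Edm.Boolean": "BOOLEAN",
    "Edm.Int16": "SMALLINT",
    "Edm.Int32": "INTEGER",
    "Edm.Int64": "BIGINT",
    "Edm.Single": "REAL",
    "Edm.Double": "FLOAT",
    "Edm.Time": "TIME",
    "Edm.DateTime": "TIMESTAMP",
}

def sql_type(edm_type, nullable=True):
    # mirrors the shape of the output: a type name plus a NULL constraint
    column_type = EDM_SQL_TYPES[edm_type]
    return column_type if nullable else column_type + " NOT NULL"

print(sql_type("Edm.Int32", nullable=False))  # prints INTEGER NOT NULL
```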
- prepare_sql_value(simple_value)¶
Returns a python object suitable for passing as a parameter
- simple_value
- A pyslet.odata2.csdl.SimpleValue instance.
You should override this method if your database requires special handling of parameter values. The default implementation performs the following conversions
EDM Type            Python value added as parameter
NULL                None
Edm.Binary          (byte) string
Edm.Boolean         True or False
Edm.Byte            int
Edm.DateTime        Timestamp instance from DB API module
Edm.DateTimeOffset  string (ISO 8601 basic format)
Edm.Decimal         Decimal instance
Edm.Double          float
Edm.Guid            (byte) string
Edm.Int16           int
Edm.Int32           int
Edm.Int64           long
Edm.SByte           int
Edm.Single          float
Edm.String          (unicode) string
Edm.Time            Time instance from DB API module
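A couple of the conversions can be sketched standalone (illustrative only: the real method works on SimpleValue instances rather than bare type names):

```python
import decimal

def prepare_sql_value(edm_type, value):
    # illustrative subset of the conversions listed above
    if value is None:
        return None                    # NULL -> None
    if edm_type == "Edm.Decimal":
        return decimal.Decimal(value)  # Decimal instance
    # ints, floats and strings already match the DB API's expectations
    return value

print(prepare_sql_value("Edm.Decimal", "3.14"))  # Decimal('3.14')
print(prepare_sql_value("Edm.Int32", None))      # None
```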
- read_sql_value(simple_value, new_value)¶
Updates simple_value from new_value.
- simple_value
- A pyslet.odata2.csdl.SimpleValue instance.
- new_value
- A value returned by the underlying DB API, e.g., from a cursor fetch operation
This method performs the reverse transformation to prepare_sql_value() and may need to be overridden to convert new_value into a form suitable for passing to the underlying set_from_value() method.
- new_from_sql_value(sql_value)¶
Returns a new simple value with value sql_value
The return value is a pyslet.odata2.csdl.SimpleValue instance.
- sql_value
- A value returned by the underlying DB API, e.g., from a cursor fetch operation
This method creates a new instance, selecting the most appropriate type to represent sql_value. By default pyslet.odata2.csdl.EDMValue.NewSimpleValueFromValue() is used.
You may need to override this method to identify the appropriate value type.
For an example of how to create a platform-specific implementation see SQLite below.
2.1.3.6.1.2. Entity Collections¶
These classes are documented primarily to facilitate the creation of alternative implementations designed to run over other DB API based data layers. The documentation goes a bit further than is necessary to help promote an understanding of the way the API is implemented.
- class pyslet.odata2.sqlds.SQLEntityCollection(container, qualify_names=False, **kwargs)¶
Bases: pyslet.odata2.sqlds.SQLCollectionBase
Represents a collection of entities from an EntitySet.
This class is the heart of the SQL implementation of the API, constructing and executing queries to implement the core methods from pyslet.odata2.csdl.EntityCollection.
- insert_entity(entity)¶
Inserts entity into the collection.
We override this method, rerouting it to a SQL-specific implementation that takes additional arguments.
- insert_entity_sql(entity, from_end=None, fk_values=None, transaction=None)¶
Inserts entity into the collection.
This method is not designed to be overridden by other implementations but it does extend the default functionality for a more efficient implementation and to enable better transactional processing. The additional parameters are documented here.
- from_end
An optional pyslet.odata2.csdl.AssociationSetEnd bound to this entity set. If present, indicates that this entity is being inserted as part of a single transaction involving an insert or update to the other end of the association.
This suppresses any check for a required link via this association (as it is assumed that the link is present, or will be, in the same transaction).
- fk_values
- If the association referred to by from_end is represented by a set of foreign keys stored in this entity set’s table (see SQLForeignKeyCollection) then fk_values is the list of (mangled column name, value) tuples that must be inserted in order to create the link.
- transaction
- An optional transaction. If present, the connection is left uncommitted.
The method functions in three phases.
1. Process all bindings for which we hold the foreign key. This includes inserting new entities where deep inserts are being used or calculating foreign key values where links to existing entities have been specified on creation. In addition, all required links are checked and errors are raised if no binding is present.
2. A simple SQL INSERT statement is executed to add the record to the database along with any foreign keys generated in (1) or passed in fk_values.
3. Process all remaining bindings. Although we could do this using the update_bindings() method of DeferredValue we handle this directly to retain transactional integrity (where supported).
Links to existing entities are created using the insert_link method available on the SQL-specific SQLNavigationCollection.
Deep inserts are handled by a recursive call to this method. After step 1, the only bindings that remain are (a) those that are stored at the other end of the link and so can be created by passing values for from_end and fk_values in a recursive call or (b) those that are stored in a separate table which are created by combining a recursive call and a call to insert_link.
Required links are always created in step 1 because the overarching mapping to SQL forces such links to be represented as foreign keys in the source table (i.e., this table) unless the relationship is 1-1, in which case the link is created in step 3 and our database is briefly in violation of the model. If the underlying database API does not support transactions then it is possible for this state to persist resulting in an orphan entity or entities, i.e., entities with missing required links. A failed rollback() call will log this condition along with the error that caused it.
- update_entity(entity)¶
Updates entity
This method follows a very similar pattern to insert_entity_sql(), using a three-phase process.
1. Process all bindings for which we hold the foreign key. This includes inserting new entities where deep inserts are being used or calculating foreign key values where links to existing entities have been specified on update.
2. A simple SQL UPDATE statement is executed to update the record in the database along with any updated foreign keys generated in (1).
3. Process all remaining bindings while retaining transactional integrity (where supported).
Links to existing entities are created using the insert_link or replace methods available on the SQL-specific SQLNavigationCollection. The replace method is used when a navigation property that links to a single entity has been bound. Deep inserts are handled by calling insert_entity_sql before the link is created.
The same transactional behaviour as insert_entity_sql() is exhibited.
- update_link(entity, link_end, target_entity, no_replace=False, transaction=None)¶
Updates a link when this table contains the foreign key
- entity
- The entity being linked from (must already exist)
- link_end
- The AssociationSetEnd bound to this entity set that represents this entity set’s end of the association being modified.
- target_entity
- The entity to link to or None if the link is to be removed.
- no_replace
- If True, existing links will not be replaced. The effect is to force the underlying SQL query to include a constraint that the foreign key is currently NULL. By default this argument is False and any existing link will be replaced.
- transaction
- An optional transaction. If present, the connection is left uncommitted.
- delete_entity(entity, from_end=None, transaction=None)¶
Deletes an entity
Called by the dictionary-like del operator, provided as a separate method to enable it to be called recursively when doing cascade deletes and to support transactions.
- from_end
An optional AssociationSetEnd bound to this entity set that represents the link from which we are being deleted during a cascade delete.
The purpose of this parameter is to prevent cascade deletes from doubling back on themselves and causing an infinite loop.
- transaction
- An optional transaction. If present, the connection is left uncommitted.
- delete_link(entity, link_end, target_entity, transaction=None)¶
Deletes the link between entity and target_entity
The foreign key for this link must be held in this entity set’s table.
- entity
- The entity in this entity set that the link is from.
- link_end
- The AssociationSetEnd bound to this entity set that represents this entity set’s end of the association being modified.
- target_entity
- The target entity that defines the link to be removed.
- transaction
- An optional transaction. If present, the connection is left uncommitted.
- clear_links(link_end, target_entity, transaction=None)¶
Deletes all links to target_entity
The foreign key for this link must be held in this entity set’s table.
- link_end
- The AssociationSetEnd bound to this entity set that represents this entity set’s end of the association being modified.
- target_entity
- The target entity that defines the link(s) to be removed.
- transaction
- An optional transaction. If present, the connection is left uncommitted.
- create_table_query()¶
Returns a SQL statement and params for creating the table.
- create_table()¶
Executes the SQL statement create_table_query()
- class pyslet.odata2.sqlds.SQLCollectionBase(container, qualify_names=False, **kwargs)¶
Bases: pyslet.odata2.core.EntityCollection
A base class to provide core SQL functionality.
Additional keyword arguments:
- container
- A SQLEntityContainer instance.
- qualify_names
- An optional boolean (defaults to False) indicating whether or not the column names must be qualified in all queries.
On construction, a database connection is acquired from container; this may prevent other threads from using the database until the lock is released by the close() method.
- container = None¶
the parent container (database) for this collection
- table_name = None¶
the quoted name of the table holding this collection
- dbc = None¶
a connection to the database
- close()¶
Closes the cursor and database connection if they are open.
- set_page(top, skip=0, skiptoken=None)¶
Sets the values for paging.
Our implementation uses a special format for skiptoken. It is a comma-separated list of simple literal values corresponding to the values required by the ordering augmented with the key values to ensure uniqueness.
For example, if $orderby=A,B on an entity set with key K then the skiptoken will typically have three values comprising the last values returned for A,B and K in that order. In cases where the resulting skiptoken would be unreasonably large an additional integer (representing a further skip) may be appended and the whole token expressed relative to an earlier skip point.
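The composition of such a token can be sketched as follows (a simplification: real tokens use OData simple-literal formatting and may append an extra skip integer as described above):

```python
def make_skiptoken(last_values):
    # a comma-separated list: the last values returned for the ordering
    # expressions, with the key value(s) appended to ensure uniqueness
    return u",".join(str(v) for v in last_values)

# $orderby=A,B on an entity set keyed on K; last row had A=5, B=2, K=101
print(make_skiptoken([5, 2, 101]))  # prints 5,2,101
```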
- join_clause()¶
A utility method to return the JOIN clause.
Defaults to an empty expression.
- where_clause(entity, params, use_filter=True, use_skip=False, null_cols=())¶
A utility method that generates the WHERE clause for a query
- entity
- An optional entity within this collection that is the focus of this query. If not None the resulting WHERE clause will restrict the query to this entity only.
- params
- The SQLParams object to add parameters to.
- use_filter
- Defaults to True, indicates if this collection’s filter should be added to the WHERE clause.
- use_skip
- Defaults to False, indicates if the skiptoken should be used in the where clause. If True then the query is limited to entities appearing after the skiptoken’s value (see below).
- null_cols
- An iterable of mangled column names that must be NULL (defaults to an empty tuple). This argument is used during updates to prevent the replacement of non-NULL foreign keys.
The operation of the skiptoken deserves some explanation. When in play the skiptoken contains the last value of the order expression returned. The order expression always uses the keys to ensure unambiguous ordering. The clause added is best served with an example. If an entity has key K and an order expression such as “tolower(Name) desc” then the query will contain something like:
SELECT K, Name, DOB, LOWER(Name) AS o_1, K .... WHERE (o_1 < ? OR (o_1 = ? AND K > ?))
The values from the skiptoken will be passed as parameters.
- where_entity_clause(where, entity, params)¶
Adds the entity constraint expression to a list of SQL expressions.
- where
- The list to append the entity expression to.
- entity
- An expression is added to restrict the query to this entity
- where_skiptoken_clause(where, params)¶
Adds the skiptoken constraint expression to a list of SQL expressions.
- where
- The list to append the skiptoken expression to.
- set_orderby(orderby)¶
Sets the orderby rules for this collection.
We override the default implementation to calculate a list of field name aliases to use in ordered queries. For example, if the orderby expression is “tolower(Name) desc” then each SELECT query will be generated with an additional expression, e.g.:
SELECT ID, Name, DOB, LOWER(Name) AS o_1 ... ORDER BY o_1 DESC, ID ASC
The name “o_1” is obtained from the name mangler using the tuple:
(entity_set.name,'o_1')
Subsequent order expressions have names ‘o_2’, ‘o_3’, etc.
Notice that regardless of the ordering expression supplied the keys are always added to ensure that, when an ordering is required, a defined order results even at the expense of some redundancy.
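The aliasing scheme can be sketched standalone (illustrative only: the real implementation obtains alias names via the name mangler and works on parsed order expressions):

```python
def orderby_aliases(order_exprs, key_cols):
    # each ordering expression is given an alias o_1, o_2, ... and the
    # key columns are always appended to guarantee a defined overall order
    select_extra = ["%s AS o_%d" % (expr, i + 1)
                    for i, (expr, _) in enumerate(order_exprs)]
    order_terms = ["o_%d %s" % (i + 1, direction)
                   for i, (_, direction) in enumerate(order_exprs)]
    order_terms += ["%s ASC" % k for k in key_cols]
    return select_extra, order_terms

extra, terms = orderby_aliases([("LOWER(Name)", "DESC")], ["ID"])
print(extra)  # ['LOWER(Name) AS o_1']
print(terms)  # ['o_1 DESC', 'ID ASC']
```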
- orderby_clause()¶
A utility method to return the orderby clause.
- params
- The SQLParams object to add parameters to.
- orderby_cols(column_names, params, force_order=False)¶
A utility to add the column names and aliases for the ordering.
- column_names
- A list of SQL column name/alias expressions
- params
- The SQLParams object to add parameters to.
- force_order
- Forces the addition of an ordering by key if an orderby expression has not been set.
- insert_fields(entity)¶
A generator for inserting mangled property names and values.
- entity
- Any instance of Entity
The yielded values are tuples of (mangled field name, SimpleValue instance).
Read only fields are never generated, even if they are keys. This allows automatically generated keys to be used and also covers the more esoteric use case where a foreign key constraint exists on the primary key (or part thereof) - in the latter case the relationship should be marked as required to prevent unexpected constraint violations.
Otherwise, only selected fields are yielded so if you attempt to insert a value without selecting the key fields you can expect a constraint violation unless the key is read only.
- auto_fields(entity)¶
A generator for selecting auto mangled property names and values.
- entity
- Any instance of Entity
The yielded values are tuples of (mangled field name, SimpleValue instance).
Only fields that are read only are yielded with the caveat that they must also be either selected or keys. The purpose of this method is to assist with reading back automatically generated field values after an insert or update.
- key_fields(entity)¶
A generator for selecting mangled key names and values.
- entity
- Any instance of Entity
The yielded values are tuples of (mangled field name, SimpleValue instance). Only the keys fields are yielded.
- select_fields(entity)¶
A generator for selecting mangled property names and values.
- entity
- Any instance of Entity
The yielded values are tuples of (mangled field name, SimpleValue instance). Only selected fields are yielded with the caveat that the keys are always selected.
- update_fields(entity)¶
A generator for updating mangled property names and values.
- entity
- Any instance of Entity
The yielded values are tuples of (mangled field name, SimpleValue instance).
Neither read only fields nor keys are generated. All other fields are yielded but unselected fields are set to NULL before being yielded. This implements OData’s PUT semantics. See merge_fields() for an alternative.
- merge_fields(entity)¶
A generator for merging mangled property names and values.
- entity
- Any instance of Entity
The yielded values are tuples of (mangled field name, SimpleValue instance).
Neither read only fields, keys nor unselected fields are generated. All other fields are yielded implementing OData’s MERGE semantics. See update_fields() for an alternative.
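The difference between update_fields() (PUT semantics) and merge_fields() (MERGE semantics) can be illustrated with a standalone sketch (the field and value names are hypothetical; the real generators yield (mangled name, SimpleValue) tuples):

```python
def put_fields(all_fields, keys, read_only, selected, values):
    # PUT: every writable field is written; unselected ones become NULL
    out = {}
    for f in all_fields:
        if f in keys or f in read_only:
            continue
        out[f] = values.get(f) if f in selected else None
    return out

def merge_fields(all_fields, keys, read_only, selected, values):
    # MERGE: only the selected writable fields are written
    return {f: values[f] for f in selected
            if f not in keys and f not in read_only}

fields = ["ID", "Name", "City"]
vals = {"Name": "ACME"}
print(put_fields(fields, {"ID"}, set(), {"Name"}, vals))
# {'Name': 'ACME', 'City': None}
print(merge_fields(fields, {"ID"}, set(), {"Name"}, vals))
# {'Name': 'ACME'}
```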
- stream_field(entity)¶
Returns information for selecting the stream ID.
- entity
- Any instance of Entity
Returns a tuple of (mangled field name, SimpleValue instance).
- sql_expression(expression, params, context='AND')¶
Converts an expression into a SQL expression string.
- expression
- A pyslet.odata2.core.CommonExpression instance.
- params
- A SQLParams object of the appropriate type for this database connection.
- context
- A string containing the SQL operator that provides the context in which the expression is being converted, defaults to ‘AND’. This is used to determine if the resulting expression must be bracketed or not. See sql_bracket() for a useful utility function to illustrate this.
This method is basically a grand dispatcher that sends calls to other node-specific methods with similar signatures. The effect is to traverse the entire tree rooted at expression.
The result is a string containing the parameterized expression with appropriate values added to the params object in the same sequence that they appear in the returned SQL expression.
When creating derived classes to implement database-specific behaviour you should override the individual evaluation methods rather than this method. All related methods have the same signature.
Where methods are documented as having no default implementation, NotImplementedError is raised.
- sql_bracket(query, context, operator)¶
A utility method for bracketing a SQL query.
- query
- The query string
- context
- A string representing the SQL operator that defines the context in which the query is to placed. E.g., ‘AND’
- operator
- The dominant operator in the query.
This method is used by operator-specific conversion methods. The query is not parsed, it is merely passed in as a string to be bracketed (or not) depending on the values of context and operator.
The implementation is very simple, it checks the precedence of operator in context and returns query bracketed if necessary:
collection.sql_bracket("Age+3","*","+") == "(Age+3)"
collection.sql_bracket("Age*3","+","*") == "Age*3"
- sql_expression_member(expression, params, context)¶
Converts a member expression, e.g., Address/City
This implementation does not support the use of navigation properties but does support references to complex properties.
It outputs the mangled name of the property, qualified by the table name if qualify_names is True.
- sql_expression_cast(expression, params, context)¶
Converts the cast expression: no default implementation
- sql_expression_generic_binary(expression, params, context, operator)¶
A utility method for implementing binary operator conversion.
The signature of the basic sql_expression() is extended to include an operator argument, a string representing the (binary) SQL operator corresponding to the expression object.
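The dispatch-and-bracket pattern can be shown with a hypothetical miniature; MiniExpr below stands in for pyslet.odata2.core.CommonExpression and is not part of Pyslet:

```python
# Hypothetical miniature of the dispatch pattern; MiniExpr stands in for
# a parsed binary expression node and is not part of Pyslet.
PRECEDENCE = {'OR': 1, 'AND': 2, '=': 4, '+': 5, '-': 5, '*': 6, '/': 6}


class MiniExpr(object):
    def __init__(self, op, left, right):
        self.op = op
        self.operands = [left, right]


def sql_bracket(query, context, operator):
    if PRECEDENCE[operator] < PRECEDENCE[context]:
        return "(%s)" % query
    return query


def generic_binary(node, context):
    # Convert both operands in the context of the node's own operator,
    # join them, then bracket the result for the wider context if needed.
    query = "%s %s %s" % (to_sql(node.operands[0], node.op), node.op,
                          to_sql(node.operands[1], node.op))
    return sql_bracket(query, context, node.op)


def to_sql(node, context='AND'):
    # Grand dispatcher: plain strings stand for already-converted leaves.
    if isinstance(node, str):
        return node
    return generic_binary(node, context)


to_sql(MiniExpr('*', MiniExpr('+', 'Age', '3'), 'Rate'))  # "(Age + 3) * Rate"
```

Note how the inner addition is bracketed because '+' binds less tightly than the surrounding '*', while the outer expression needs no brackets in the default 'AND' context.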
- sql_expression_mul(expression, params, context)¶
Converts the mul expression: maps to SQL “*”
- sql_expression_div(expression, params, context)¶
Converts the div expression: maps to SQL “/”
- sql_expression_mod(expression, params, context)¶
Converts the mod expression: no default implementation
- sql_expression_add(expression, params, context)¶
Converts the add expression: maps to SQL “+”
- sql_expression_sub(expression, params, context)¶
Converts the sub expression: maps to SQL “-“
- sql_expression_lt(expression, params, context)¶
Converts the lt expression: maps to SQL “<”
- sql_expression_gt(expression, params, context)¶
Converts the gt expression: maps to SQL “>”
- sql_expression_le(expression, params, context)¶
Converts the le expression: maps to SQL “<=”
- sql_expression_ge(expression, params, context)¶
Converts the ge expression: maps to SQL “>=”
- sql_expression_isof(expression, params, context)¶
Converts the isof expression: no default implementation
- sql_expression_eq(expression, params, context)¶
Converts the eq expression: maps to SQL “=”
- sql_expression_ne(expression, params, context)¶
Converts the ne expression: maps to SQL “<>”
- sql_expression_and(expression, params, context)¶
Converts the and expression: maps to SQL “AND”
- sql_expression_or(expression, params, context)¶
Converts the or expression: maps to SQL “OR”
- sql_expression_endswith(expression, params, context)¶
Converts the endswith function: maps to "op[0] LIKE '%'+op[1]"
This is implemented using the concatenation operator
- sql_expression_indexof(expression, params, context)¶
Converts the indexof method: maps to POSITION( op[0] IN op[1] )
- sql_expression_replace(expression, params, context)¶
Converts the replace method: no default implementation
- sql_expression_startswith(expression, params, context)¶
Converts the startswith function: maps to "op[0] LIKE op[1]+'%'"
This is implemented using the concatenation operator
- sql_expression_tolower(expression, params, context)¶
Converts the tolower method: maps to LOWER function
- sql_expression_toupper(expression, params, context)¶
Converts the toupper method: maps to UCASE function
- sql_expression_trim(expression, params, context)¶
Converts the trim method: maps to TRIM function
- sql_expression_substring(expression, params, context)¶
Converts the substring method
maps to SUBSTRING( op[0] FROM op[1] [ FOR op[2] ] )
- sql_expression_substringof(expression, params, context)¶
Converts the substringof function
maps to "op[1] LIKE '%'+op[0]+'%'"
To do this we need to invoke the concatenation operator.
This method has been poorly defined in OData with the parameters being switched between versions 2 and 3. It is being withdrawn as a result and replaced with contains in OData version 4. We follow the version 3 convention here of “first parameter in the second parameter” which fits better with the examples and with the intuitive meaning:
substringof(A,B) == A in B
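The version-3 convention can be illustrated with a standalone sketch (not the Pyslet code) that builds the LIKE test using the SQL concatenation operator, with the '%' wildcards emitted literally rather than parameterized:

```python
# Standalone sketch: substringof(A, B) under the version-3 convention
# tests whether A occurs in B, i.e. B LIKE '%' || A || '%'.  The '%'
# wildcards are emitted literally (cf. UnparameterizedLiteral) while the
# search term itself remains a parameter placeholder.
def substringof_sql(needle_sql, haystack_sql):
    return "%s LIKE '%%' || %s || '%%'" % (haystack_sql, needle_sql)


substringof_sql("?", '"name"')  # "name" LIKE '%' || ? || '%'
```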
- sql_expression_concat(expression, params, context)¶
Converts the concat method: maps to ||
- sql_expression_length(expression, params, context)¶
Converts the length method: maps to CHAR_LENGTH( op[0] )
- sql_expression_year(expression, params, context)¶
Converts the year method: maps to EXTRACT(YEAR FROM op[0])
- sql_expression_month(expression, params, context)¶
Converts the month method: maps to EXTRACT(MONTH FROM op[0])
- sql_expression_day(expression, params, context)¶
Converts the day method: maps to EXTRACT(DAY FROM op[0])
- sql_expression_hour(expression, params, context)¶
Converts the hour method: maps to EXTRACT(HOUR FROM op[0])
- sql_expression_minute(expression, params, context)¶
Converts the minute method: maps to EXTRACT(MINUTE FROM op[0])
- sql_expression_second(expression, params, context)¶
Converts the second method: maps to EXTRACT(SECOND FROM op[0])
- sql_expression_round(expression, params, context)¶
Converts the round method: no default implementation
- sql_expression_floor(expression, params, context)¶
Converts the floor method: no default implementation
- sql_expression_ceiling(expression, params, context)¶
Converts the ceiling method: no default implementation
- class pyslet.odata2.sqlds.SQLNavigationCollection(**kwargs)¶
Bases: pyslet.odata2.sqlds.SQLCollectionBase, pyslet.odata2.core.NavigationCollection
Abstract class representing all navigation collections.
Additional keyword arguments:
- aset_name
- The name of the association set that defines this relationship. This additional parameter is used by the name mangler to obtain the field name (or table name) used for the foreign keys.
Inserts a link to entity into this collection.
- transaction
- An optional transaction. If present, the connection is left uncommitted.
Replaces all links with a single link to entity.
- transaction
- An optional transaction. If present, the connection is left uncommitted.
- delete_link(entity, transaction=None)¶
A utility method that deletes the link to entity in this collection.
This method is called during cascaded deletes to force-remove a link prior to the deletion of the entity itself.
- transaction
- An optional transaction. If present, the connection is left uncommitted.
- class pyslet.odata2.sqlds.SQLForeignKeyCollection(**kwargs)¶
Bases: pyslet.odata2.sqlds.SQLNavigationCollection
The collection of entities obtained by navigation via a foreign key
This object is used when the foreign key is stored in the same table as from_entity. This occurs when the relationship is one of:
- 0..1 to 1
- Many to 1
- Many to 0..1
The name mangler looks for the foreign key in the field obtained by mangling:
(entity set name, association set name, key name)
For example, suppose that a link exists from entity set Orders[*] to entity set Customers[0..1] and that the key field of Customer is “CustomerID”. If the association set that binds Orders to Customers with this link is called OrdersToCustomers then the foreign key would be obtained by looking up:
('Orders','OrdersToCustomers','CustomerID')
By default this would result in the field name:
'OrdersToCustomers_CustomerID'
This field would be looked up in the ‘Orders’ table. The operation of the name mangler can be customised by overriding the SQLEntityContainer.mangle_name() method in the container.
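The customisation can be illustrated with a standalone mangler function (a sketch, not the Pyslet method; in practice you would override SQLEntityContainer.mangle_name() in your container class, and the legacy column name 'CustomerID' is an assumption about an existing schema). The fall-back rule, dropping the first element and joining the rest with '_', is inferred from the documented examples:

```python
def mangle_name(source_path):
    # Map the documented foreign key path onto the legacy column name;
    # otherwise fall back to the default behaviour inferred from the
    # examples above: drop the entity set name and join the remaining
    # elements with '_'.
    if source_path == ('Orders', 'OrdersToCustomers', 'CustomerID'):
        return 'CustomerID'
    if len(source_path) == 1:
        return source_path[0]
    return '_'.join(source_path[1:])


mangle_name(('Orders', 'OrdersToCustomers', 'CustomerID'))  # 'CustomerID'
mangle_name(('Orders', 'OrdersToCustomers', 'OtherID'))     # 'OrdersToCustomers_OtherID'
```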
- join_clause()¶
Overridden to provide a join to from_entity‘s table.
The join clause introduces an additional name that is looked up by the name mangler. To avoid name clashes when the relationship is recursive the join clause introduces an alias for the table containing from_entity. To continue the example above, if the link from Orders to Customers is bound to a navigation property in the reverse direction called, say, ‘AllOrders’ in the target entity set then this alias is looked up using:
('Customers','AllOrders')
By default this would just be the string ‘AllOrders’ (the name of the navigation property). The resulting join looks something like this:
SELECT ... FROM Customers INNER JOIN Orders AS AllOrders ON Customers.CustomerID=AllOrders.OrdersToCustomers_CustomerID ... WHERE AllOrders.OrderID = ?;
The value of the OrderID key property in from_entity is passed as a parameter when executing the expression.
There is an awkward case when the reverse navigation property has not been bound; in this case the link’s role name is used instead. This provides a best guess as to what the navigation property name would have been had it been bound. It must be unique within the context of the target entity_set’s type - a benign constraint on the model’s metadata description.
- where_clause(entity, params, use_filter=True, use_skip=False)¶
Adds the constraint for entities linked from from_entity only.
We continue to use the alias set in the join_clause() where an example WHERE clause is illustrated.
- class pyslet.odata2.sqlds.SQLReverseKeyCollection(**kwargs)¶
Bases: pyslet.odata2.sqlds.SQLNavigationCollection
The collection of entities obtained by navigation to a foreign key
This object is used when the foreign key is stored in the target table. This occurs in the reverse of the cases where SQLForeignKeyCollection is used, i.e.:
- 1 to 0..1
- 1 to Many
- 0..1 to Many
The implementation is actually simpler in this direction as no JOIN clause is required.
- where_clause(entity, params, use_filter=True, use_skip=False)¶
Adds the constraint to entities linked from from_entity only.
- delete_link(entity, transaction=None)¶
Called during cascaded deletes.
This is actually a no-operation as the foreign key for this association is in the entity’s record itself and will be removed automatically when entity is deleted.
- clear_links(transaction=None)¶
Deletes all links from this collection’s from_entity
- transaction
- An optional transaction. If present, the connection is left uncommitted.
- class pyslet.odata2.sqlds.SQLAssociationCollection(**kwargs)¶
Bases: pyslet.odata2.sqlds.SQLNavigationCollection
The collection obtained by navigation using an auxiliary table
This object is used when the relationship is described by two sets of foreign keys stored in an auxiliary table. This occurs mainly when the link is Many to Many but it is also used for 1 to 1 relationships. This last use may seem odd but it is used to represent the symmetry of the relationship. In practice, a single set of foreign keys is likely to exist in one table or the other and so the relationship is best modelled by a 0..1 to 1 relationship even if the intention is that the records will always exist in pairs.
The name of the auxiliary table is obtained from the name mangler using the association set’s name. The keys use a more complex mangled form to cover cases where there is a recursive Many to Many relation (such as a social network of friends between User entities). The names of the keys are obtained by mangling:
( association set name, target entity set name, navigation property name, key name )
An example should help. Suppose we have entities representing sports Teams(TeamID) and sports Players(PlayerID) and that you can navigate from Player to Team using the “PlayedFor” navigation property and from Team to Player using the “Players” navigation property. Both navigation properties are collections so the relationship is Many to Many. If the association set that binds the two entity sets is called PlayersAndTeams then the auxiliary table name will be mangled from:
('PlayersAndTeams')
and the fields will be mangled from:
('PlayersAndTeams','Teams','PlayedFor','TeamID')
('PlayersAndTeams','Players','Players','PlayerID')
By default this results in column names ‘Teams_PlayedFor_TeamID’ and ‘Players_Players_PlayerID’. If you are modelling an existing database then ‘TeamID’ and ‘PlayerID’ on their own are more likely choices. You would need to override the SQLEntityContainer.mangle_name() method in the container to catch these cases and return the shorter column names.
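A standalone sketch of such an override (an illustration, not the Pyslet method; in practice you would override SQLEntityContainer.mangle_name() in the container, and the default fall-back rule is inferred from the documented examples):

```python
def mangle_name(source_path):
    # Map the two documented 4-tuples onto the short legacy column
    # names; otherwise mimic the default inferred from the examples
    # above: a single element gives the table name, longer paths drop
    # the first element and join the tail with '_'.
    legacy = {
        ('PlayersAndTeams', 'Teams', 'PlayedFor', 'TeamID'): 'TeamID',
        ('PlayersAndTeams', 'Players', 'Players', 'PlayerID'): 'PlayerID'}
    if source_path in legacy:
        return legacy[source_path]
    if len(source_path) == 1:
        return source_path[0]
    return '_'.join(source_path[1:])


mangle_name(('PlayersAndTeams', 'Teams', 'PlayedFor', 'TeamID'))  # 'TeamID'
mangle_name(('PlayersAndTeams',))                                 # 'PlayersAndTeams'
```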
- join_clause()¶
Overridden to provide the JOIN to the auxiliary table.
Unlike the foreign key JOIN clause there is no need to use an alias in this case as the auxiliary table is assumed to be distinct from the table it is being joined to.
- where_clause(entity, params, use_filter=True, use_skip=False)¶
Provides the from_entity constraint in the auxiliary table.
- insert_entity(entity)¶
Rerouted to a SQL-specific implementation
- insert_entity_sql(entity, transaction=None)¶
Inserts entity into the base collection and creates the link.
This is always done in two steps, bound together in a single transaction (where supported). If this object represents a 1 to 1 relationship then, briefly, we’ll be in violation of the model. This will only be an issue in non-transactional systems.
- delete_link(entity, transaction=None)¶
Called during cascaded deletes to force-remove a link prior to the deletion of the entity itself.
This method is also re-used for simple deletion of the link in this case as the link is in the auxiliary table itself.
- clear_links(transaction=None)¶
Deletes all links from this collection’s from_entity
- transaction
- An optional transaction. If present, the connection is left uncommitted.
- classmethod clear_links_unbound(container, from_end, from_entity, transaction)¶
Special class method for deleting all the links from from_entity
This is a class method because it has to work even if there is no navigation property bound to this end of the association.
- container
- The SQLEntityContainer containing this association set.
- from_end
- The AssociationSetEnd that represents the end of the association that from_entity is bound to.
- from_entity
- The entity to delete links from
- transaction
- The current transaction (required)
If there was a navigation property bound to this end then an instance could be created and the simpler clear_links() method used instead.
- classmethod create_table_query(container, aset_name)¶
Returns a SQL statement and params to create the auxiliary table.
This is a class method to enable the table to be created before any entities are created.
- classmethod create_table(container, aset_name)¶
Executes the SQL statement returned by create_table_query()
2.1.3.6.2. SQLite¶
This module also contains a fully functional implementation of the API based on the sqlite3 module. The first job with any SQL implementation is to create a base collection class that implements any custom expression handling.
In the case of SQLite we override only a handful of the standard SQL functions. Notice that this class is derived from SQLCollectionBase, an abstract class. If your SQL platform adheres closely to the SQL standard, or you are happy for SQL-level errors to be generated when unsupported SQL syntax is produced by a filter or orderby expression, then you can skip the process of creating custom collection classes completely.
- class pyslet.odata2.sqlds.SQLiteEntityCollectionBase(container, qualify_names=False, **kwargs)¶
Bases: pyslet.odata2.sqlds.SQLCollectionBase
Base class for SQLite SQL custom mappings.
This class provides some SQLite specific mappings for certain functions to improve compatibility with the OData expression language.
- sql_expression_length(expression, params, context)¶
Converts the length method: maps to length( op[0] )
- sql_expression_year(expression, params, context)¶
Converts the year method
maps to CAST(strftime(‘%Y’,op[0]) AS INTEGER)
- sql_expression_month(expression, params, context)¶
Converts the month method
maps to CAST(strftime(‘%m’,op[0]) AS INTEGER)
- sql_expression_day(expression, params, context)¶
Converts the day method
maps to CAST(strftime(‘%d’,op[0]) AS INTEGER)
- sql_expression_hour(expression, params, context)¶
Converts the hour method
maps to CAST(strftime(‘%H’,op[0]) AS INTEGER)
- sql_expression_minute(expression, params, context)¶
Converts the minute method
maps to CAST(strftime(‘%M’,op[0]) AS INTEGER)
- sql_expression_second(expression, params, context)¶
Converts the second method
maps to CAST(strftime(‘%S’,op[0]) AS INTEGER)
- sql_expression_tolower(expression, params, context)¶
Converts the tolower method
maps to lower(op[0])
- sql_expression_toupper(expression, params, context)¶
Converts the toupper method
maps to upper(op[0])
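The strftime-based mappings above can be checked directly against the sqlite3 module. This standalone snippet (a sketch, not the Pyslet code) shows the documented year mapping in action:

```python
import sqlite3


def sqlite_year_sql(operand_sql):
    # The documented SQLite mapping for the OData year() function:
    # CAST(strftime('%Y', op[0]) AS INTEGER)
    return "CAST(strftime('%%Y',%s) AS INTEGER)" % operand_sql


conn = sqlite3.connect(':memory:')
sql = "SELECT " + sqlite_year_sql("?")
conn.execute(sql, ('2016-07-04',)).fetchone()[0]  # returns 2016
```

The CAST is needed because strftime returns text; without it the comparison semantics of the OData expression would be wrong.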
To ensure that our custom implementations are integrated into all the collection classes we have to create specific classes for all collection types. These classes have no implementation!
- class pyslet.odata2.sqlds.SQLiteEntityCollection(container, qualify_names=False, **kwargs)¶
Bases: pyslet.odata2.sqlds.SQLiteEntityCollectionBase, pyslet.odata2.sqlds.SQLEntityCollection
SQLite-specific collection for entity sets
- class pyslet.odata2.sqlds.SQLiteForeignKeyCollection(**kwargs)¶
Bases: pyslet.odata2.sqlds.SQLiteEntityCollectionBase, pyslet.odata2.sqlds.SQLForeignKeyCollection
SQLite-specific collection for navigation from a foreign key
- class pyslet.odata2.sqlds.SQLiteReverseKeyCollection(**kwargs)¶
Bases: pyslet.odata2.sqlds.SQLiteEntityCollectionBase, pyslet.odata2.sqlds.SQLReverseKeyCollection
SQLite-specific collection for navigation to a foreign key
- class pyslet.odata2.sqlds.SQLiteAssociationCollection(**kwargs)¶
Bases: pyslet.odata2.sqlds.SQLiteEntityCollectionBase, pyslet.odata2.sqlds.SQLAssociationCollection
SQLite-specific collection for symmetric association sets
Finally, we can override the main container class to provide a complete implementation of our API using the sqlite3 module.
- class pyslet.odata2.sqlds.SQLiteEntityContainer(file_path, sqlite_options={}, **kwargs)¶
Bases: pyslet.odata2.sqlds.SQLEntityContainer
Creates a container that represents a SQLite database.
Additional keyword arguments:
- file_path
- The path to the SQLite database file.
- sqlite_options
- A dictionary of additional options to pass as named arguments to the connect method. It defaults to an empty dictionary; you won’t normally need to pass additional options, and you shouldn’t change the isolation_level as the collection classes have been designed to work in the default mode.
For more information see sqlite3
All other keyword arguments required to initialise the base class must be passed on construction except dbapi which is automatically set to the Python sqlite3 module.
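Following the pattern set out in the introduction, combining a platform container with project-specific name mappings is a matter of multiple inheritance. The classes below are simplified stand-ins (not the real Pyslet classes, and 'OrderTable' is a hypothetical legacy name) that show how the method resolution order combines the two concerns:

```python
class SQLEntityContainer(object):
    # Stand-in for the abstract base: the default mangling rule here is
    # inferred from the documented examples (drop the entity set name,
    # join the tail with '_').
    def mangle_name(self, source_path):
        if len(source_path) == 1:
            return source_path[0]
        return '_'.join(source_path[1:])


class SQLiteEntityContainer(SQLEntityContainer):
    # stand-in for the platform-specific container
    pass


class SouthwindDBContainer(SQLEntityContainer):
    # project-specific name mappings; 'OrderTable' is hypothetical
    def mangle_name(self, source_path):
        if source_path == ('Orders',):
            return 'OrderTable'
        return super(SouthwindDBContainer, self).mangle_name(source_path)


class SouthwindSQLiteContainer(SouthwindDBContainer, SQLiteEntityContainer):
    # Diamond inheritance: the mangler override takes precedence while
    # platform-specific behaviour is inherited from the SQLite class.
    pass


c = SouthwindSQLiteContainer()
c.mangle_name(('Orders',))            # returns 'OrderTable'
c.mangle_name(('Orders', 'OrderID'))  # returns 'OrderID'
```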
- get_collection_class()¶
Overridden to return SQLiteEntityCollection
- get_symmetric_navigation_class()¶
Overridden to return SQLiteAssociationCollection
- get_fk_class()¶
Overridden to return SQLiteForeignKeyCollection
- get_rk_class()¶
Overridden to return SQLiteReverseKeyCollection
- open()¶
Calls the underlying connect method.
Passes the file_path used to construct the container as the only parameter. You can pass the string ‘:memory:’ to create an in-memory database.
Other connection arguments are not currently supported; you can derive a more complex implementation by overriding this method and (optionally) the __init__ method to pass in the values you need.
- break_connection(connection)¶
Calls the underlying interrupt method.
- prepare_sql_type(simple_value, params, nullable=None)¶
Performs SQLite custom mappings
We inherit most of the type mappings but the following types use custom mappings:
EDM Type      SQLite Equivalent
Edm.Decimal   TEXT
Edm.Guid      BLOB
Edm.Time      REAL
Edm.Int64     INTEGER
- prepare_sql_value(simple_value)¶
Returns a python value suitable for passing as a parameter.
We inherit most of the value mappings but the following types have custom mappings.
EDM Type      Python value added as parameter
Edm.Binary    buffer object
Edm.Decimal   string representation obtained with str()
Edm.Guid      buffer object containing the bytes representation
Edm.Time      value of pyslet.iso8601.Time.GetTotalSeconds()
Our use of the buffer type is not ideal as it generates a warning when Python is run with the -3 flag (to check for Python 3 compatibility) but it seems unavoidable at the current time.
- read_sql_value(simple_value, new_value)¶
Reverses the transformation performed by prepare_sql_value
- new_from_sql_value(sql_value)¶
Returns a new simple value instance initialised from sql_value
Overridden to ensure that buffer objects returned by the underlying DB API are converted to strings. Otherwise sql_value is passed directly to the parent.
2.1.3.6.3. Utility Classes¶
Some miscellaneous classes documented mainly to make the implementation of the collection classes easier to understand.
- class pyslet.odata2.sqlds.SQLTransaction(api, dbc)¶
Bases: object
Class used to model a transaction.
Python’s DB API uses transactions by default, hiding the details from the caller. Essentially, the first execute call on a connection issues a BEGIN statement and the transaction ends with either a commit or a rollback. It is generally considered a bad idea to issue a SQL command and then leave the connection with an open transaction.
The purpose of this class is to help us write methods that can operate either as a single transaction or as part of sequence of methods that form a single transaction. It also manages cursor creation and closing and logging.
Essentially, the class is used as follows:
t = SQLTransaction(db_module, db_connection)
try:
    t.begin()
    t.execute("UPDATE SOME_TABLE SET SOME_COL='2'")
    t.commit()
except Exception as e:
    t.rollback(e)
finally:
    t.close()
The transaction object can be passed to a sub-method between the begin and commit calls provided that method follows the same pattern as the above for the try, except and finally blocks. The object keeps track of these ‘nested’ transactions and delays the commit or rollback until the outermost method invokes them.
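The nesting behaviour can be illustrated with a cut-down stand-in exercised against an in-memory SQLite database; MiniTransaction below is hypothetical (not the Pyslet class) and omits rollback handling and logging for brevity:

```python
import sqlite3


class MiniTransaction(object):
    # Cut-down stand-in for the nesting idea: begin() on an already-open
    # transaction just counts the nesting level, and commit() only
    # touches the connection at the outermost level.
    def __init__(self, dbc):
        self.dbc = dbc
        self.cursor = None
        self.no_commit = 0

    def begin(self):
        if self.cursor is None:
            self.cursor = self.dbc.cursor()
        else:
            self.no_commit += 1

    def execute(self, sqlcmd, params=()):
        self.cursor.execute(sqlcmd, params)

    def commit(self):
        if self.no_commit:
            self.no_commit -= 1    # nested: defer to the outer commit
        else:
            self.dbc.commit()

    def close(self):
        if self.no_commit == 0 and self.cursor is not None:
            self.cursor.close()
            self.cursor = None


def insert_child(t):
    # a sub-method sharing the caller's open transaction
    t.begin()
    t.execute("INSERT INTO nums VALUES (2)")
    t.commit()


dbc = sqlite3.connect(':memory:')
dbc.execute("CREATE TABLE nums (n INTEGER)")
t = MiniTransaction(dbc)
t.begin()
t.execute("INSERT INTO nums VALUES (1)")
insert_child(t)       # no intermediate commit happens here
t.commit()            # both inserts committed together
t.close()
dbc.execute("SELECT COUNT(*) FROM nums").fetchone()[0]  # returns 2
```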
- api = None¶
the database module
- dbc = None¶
the database connection
- cursor = None¶
the database cursor to use for executing commands
- noCommit = None¶
used to manage nested transactions
- queryCount = None¶
records the number of successful commands
- begin()¶
Begins a transaction
If a transaction is already in progress a nested transaction is started which has no effect on the database connection itself.
- execute(sqlcmd, params)¶
Executes sqlcmd as part of this transaction.
- sqlcmd
- A string containing the query
- params
- A SQLParams object containing any parameterized values.
- commit()¶
Ends this transaction with a commit
Nested transactions do nothing.
- rollback(err=None, swallow=False)¶
Calls the underlying database connection rollback method.
Nested transactions do not rollback the connection, they do nothing except re-raise err (if required).
If rollback is not supported the resulting error is absorbed.
- err
The exception that triggered the rollback. If not None then this is logged at INFO level when the rollback succeeds.
If the transaction contains at least one successfully executed query and the rollback fails then err is logged at ERROR rather than INFO level indicating that the data may now be in violation of the model.
- swallow
- A flag (defaults to False) indicating that err should be swallowed, rather than re-raised.
- class pyslet.odata2.sqlds.SQLParams¶
Bases: object
An abstract class used to build parameterized queries.
Python’s DB API supports three different conventions for specifying parameters and each module indicates the convention in use. The SQL construction methods in this module abstract away this variability for maximum portability using different implementations of the basic SQLParams class.
- add_param(value)¶
Adds a value to this set of parameters
Returns the string to include in the query in place of this value.
- value
- The native representation of the value in a format suitable for passing to the underlying DB API.
- class pyslet.odata2.sqlds.QMarkParams¶
Bases: pyslet.odata2.sqlds.SQLParams
A class for building parameter lists using ‘?’ syntax.
- class pyslet.odata2.sqlds.NumericParams¶
Bases: pyslet.odata2.sqlds.SQLParams
A class for building parameter lists using ‘:1’, ‘:2’,... syntax
- class pyslet.odata2.sqlds.NamedParams¶
Bases: pyslet.odata2.sqlds.SQLParams
A class for building parameter lists using ‘:A’, ‘:B’, ... syntax
Although there is more freedom with named parameters, in order to support the ordered lists of the other formats we just invent parameter names using ‘:p1’, ‘:p2’, etc.
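The three conventions can be sketched with standalone classes; these are illustrations of the idea, not the Pyslet implementations:

```python
class QMarkParamsSketch(object):
    # '?' placeholders, ordered list of values
    def __init__(self):
        self.params = []

    def add_param(self, value):
        self.params.append(value)
        return '?'


class NumericParamsSketch(object):
    # ':1', ':2', ... placeholders, ordered list of values
    def __init__(self):
        self.params = []

    def add_param(self, value):
        self.params.append(value)
        return ':%i' % len(self.params)


class NamedParamsSketch(object):
    # ':p1', ':p2', ... invented names mapped to values
    def __init__(self):
        self.params = {}

    def add_param(self, value):
        name = 'p%i' % (len(self.params) + 1)
        self.params[name] = value
        return ':' + name


p = QMarkParamsSketch()
"WHERE Age > %s AND Name = %s" % (p.add_param(18), p.add_param('Smith'))
# builds "WHERE Age > ? AND Name = ?" with p.params == [18, 'Smith']
```

Because the conversion methods only ever call add_param() and splice the returned string into the query, the rest of the module never needs to know which convention the underlying DB module uses.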
2.1.3.6.4. Misc Definitions¶
- pyslet.odata2.sqlds.SQL_TIMEOUT = 90¶
The timeout, in seconds, used when waiting for a database connection (see DatabaseBusy).
- class pyslet.odata2.sqlds.UnparameterizedLiteral(value)¶
Bases: pyslet.odata2.core.LiteralExpression
Class used as a flag that this literal is safe and does not need to be parameterized.
This is used in the query converter to prevent things like this happening when the converter itself constructs a LIKE expression:
"name" LIKE ?+?+? ; params=[u'%', u"Smith", u'%']
- pyslet.odata2.sqlds.SQLOperatorPrecedence = {'>=': 4, '<>': 4, '<=': 4, 'AND': 2, 'LIKE': 4, '+': 5, '*': 6, '-': 5, ',': 0, '/': 6, 'OR': 1, 'NOT': 3, '=': 4, '<': 4, '>': 4}¶
Look-up table for SQL operator precedence calculations.
The keys are strings representing the operator, the values are integers that allow comparisons for operator precedence. For example:
SQLOperatorPrecedence['+'] < SQLOperatorPrecedence['*']
SQLOperatorPrecedence['<'] == SQLOperatorPrecedence['>']
- class pyslet.odata2.sqlds.DummyLock¶
Bases: object
An object to use in place of a real Lock, can always be acquired
2.1.3.6.5. Exceptions¶
- class pyslet.odata2.sqlds.DatabaseBusy¶
Bases: pyslet.odata2.sqlds.SQLError
Raised when a database connection times out.
- class pyslet.odata2.sqlds.SQLError¶
Bases: exceptions.Exception
Base class for all module exceptions.