SQLAlchemy has a variety of extensions available which provide extra functionality to SQLAlchemy, either via explicit usage or by augmenting the core behavior. Several of these extensions are designed to work together.
Author: Mike Bayer
Version: 0.4.4 or greater
declarative intends to be a fully featured replacement for the very old activemapper extension. Its goal is to redefine the organization of class, Table, and mapper() constructs such that they can all be defined "at once" underneath a class declaration. Unlike activemapper, it does not redefine normal SQLAlchemy configurational semantics: regular Column, relation() and other schema or ORM constructs are used in almost all cases.

declarative is a so-called "micro declarative layer"; it does not generate table or column names, and it requires a configuration almost as verbose as that of plain tables and mappers. As an alternative, the Elixir project is a full community-supported declarative layer for SQLAlchemy, recommended for its active-record-like semantics, its convention-based configuration, and its plugin capabilities.
SQLAlchemy object-relational configuration involves the use of Table, mapper(), and class objects to define the three areas of configuration. declarative moves these three types of configuration underneath the individual mapped class. Regular SQLAlchemy schema and ORM constructs are used in most cases:
    from sqlalchemy.ext.declarative import declarative_base

    Base = declarative_base()

    class SomeClass(Base):
        __tablename__ = 'some_table'
        id = Column('id', Integer, primary_key=True)
        name = Column('name', String(50))
Above, the declarative_base callable produces a new base class from which all mapped classes inherit. When the class definition is completed, a new Table and mapper() have been generated, accessible via the __table__ and __mapper__ attributes on the SomeClass class.
Attributes may be added to the class after its construction, and they will be added to the underlying Table and mapper() definitions as appropriate:
    SomeClass.data = Column('data', Unicode)
    SomeClass.related = relation(RelatedInfo)
Classes which are mapped explicitly using mapper() can interact freely with declarative classes.
The declarative_base base class contains a MetaData object where newly defined Table objects are collected. This is accessed via the metadata class-level accessor, so to create tables we can say:
    engine = create_engine('sqlite://')
    Base.metadata.create_all(engine)
The Engine created above may also be directly associated with the declarative base class using the engine keyword argument, where it will be associated with the underlying MetaData object and allow SQL operations involving that metadata and its tables to make use of that engine automatically:
    Base = declarative_base(engine=create_engine('sqlite://'))
Or, as MetaData allows, at any time using the bind attribute:
    Base.metadata.bind = create_engine('sqlite://')
The declarative_base can also receive a pre-created MetaData object, which allows a declarative setup to be associated with an already existing, traditional collection of Table objects:
    mymetadata = MetaData()
    Base = declarative_base(metadata=mymetadata)
Relations to other classes are done in the usual way, with the added feature that the class specified to relation() may be a string name. The "class registry" associated with Base is used at mapper compilation time to resolve the name into the actual class object, which is expected to have been defined by the time the mapper configuration is first used:
    class User(Base):
        __tablename__ = 'users'
        id = Column('id', Integer, primary_key=True)
        name = Column('name', String(50))
        addresses = relation("Address", backref="user")

    class Address(Base):
        __tablename__ = 'addresses'
        id = Column('id', Integer, primary_key=True)
        email = Column('email', String(50))
        user_id = Column('user_id', Integer, ForeignKey('users.id'))
Column constructs, since they are just that, are immediately usable, as below where we define a primary join condition on the Address class using them:
    class Address(Base):
        __tablename__ = 'addresses'
        id = Column('id', Integer, primary_key=True)
        email = Column('email', String(50))
        user_id = Column('user_id', Integer, ForeignKey('users.id'))
        user = relation(User, primaryjoin=user_id == User.id)
In addition to the main argument for relation, other arguments which depend upon the columns present on an as-yet undefined class may also be specified as strings. These strings are evaluated as Python expressions. The full namespace available within this evaluation includes all classes mapped for this declarative base, as well as the contents of the sqlalchemy package, including expression functions like desc and func:
    class User(Base):
        # ....
        addresses = relation("Address",
                             order_by="desc(Address.email)",
                             primaryjoin="Address.user_id==User.id")
As an alternative to string-based attributes, attributes may also be defined after all classes have been created. Just add them to the target class after the fact:
    User.addresses = relation(Address, primaryjoin=Address.user_id == User.id)
Synonyms are one area where declarative needs to slightly change the usual SQLAlchemy configurational syntax. To define a getter/setter which proxies to an underlying attribute, use synonym with the instruments argument:
    class MyClass(Base):
        __tablename__ = 'sometable'

        _attr = Column('attr', String)

        def _get_attr(self):
            return self._attr
        def _set_attr(self, attr):
            self._attr = attr
        attr = synonym('_attr', instruments=property(_get_attr, _set_attr))
The above synonym is then usable as an instance attribute as well as a class-level expression construct:
    x = MyClass()
    x.attr = "some value"
    session.query(MyClass).filter(MyClass.attr == 'some other value').all()
The synonym_for decorator can accomplish the same task:
    class MyClass(Base):
        __tablename__ = 'sometable'

        _attr = Column('attr', String)

        @synonym_for('_attr')
        @property
        def attr(self):
            return self._attr
Similarly, comparable_using is a front end for the comparable_property ORM function:
    class MyClass(Base):
        __tablename__ = 'sometable'

        name = Column('name', String)

        @comparable_using(MyUpperCaseComparator)
        @property
        def uc_name(self):
            return self.name.upper()
As an alternative to __tablename__, a direct Table construct may be used. The Column objects, which in this case require their names, will be added to the mapping just like a regular mapping to a table:
    class MyClass(Base):
        __table__ = Table('my_table', Base.metadata,
            Column('id', Integer, primary_key=True),
            Column('name', String(50))
        )
Other table-based attributes include __table_args__, which is either a dictionary, as in:
    class MyClass(Base):
        __tablename__ = 'sometable'
        __table_args__ = {'mysql_engine': 'InnoDB'}
or a tuple whose last element is a dictionary, in the form (arg1, arg2, ..., {kwarg1:value, ...}), as in:
    class MyClass(Base):
        __tablename__ = 'sometable'
        __table_args__ = (
            ForeignKeyConstraint(['id'], ['remote_table.id']),
            {'autoload': True}
        )
Mapper arguments are specified using the __mapper_args__ class variable. Note that the column objects declared on the class are immediately usable, as in this joined-table inheritance example:
    class Person(Base):
        __tablename__ = 'people'
        id = Column('id', Integer, primary_key=True)
        discriminator = Column('type', String(50))
        __mapper_args__ = {'polymorphic_on': discriminator}

    class Engineer(Person):
        __tablename__ = 'engineers'
        __mapper_args__ = {'polymorphic_identity': 'engineer'}
        id = Column('id', Integer, ForeignKey('people.id'), primary_key=True)
        primary_language = Column('primary_language', String(50))
For single-table inheritance, the __tablename__ and __table__ class variables are optional on a class when the class inherits from another mapped class.
As a convenience feature, declarative_base() sets a default constructor on classes which takes keyword arguments and assigns them to the named attributes:
    e = Engineer(primary_language='python')
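That default constructor is, roughly, a loop over the keyword arguments calling setattr(). A simplified pure-Python sketch of the behavior (not the extension's actual source; whether unknown names are rejected exactly this way is an implementation detail):

```python
class Base(object):
    def __init__(self, **kwargs):
        # assign each keyword argument to the same-named attribute;
        # reject unknown names rather than silently creating stray
        # instance attributes
        for key, value in kwargs.items():
            if not hasattr(type(self), key):
                raise TypeError("%r is an invalid keyword argument for %s" %
                                (key, type(self).__name__))
            setattr(self, key, value)

class Engineer(Base):
    primary_language = None  # stands in for the mapped Column attribute

e = Engineer(primary_language='python')
```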
Note that declarative has no built-in integration with sessions; it is intended only as an optional syntax for the regular usage of mappers and Table objects. A typical application setup using scoped_session might look like:
    engine = create_engine('postgres://scott:tiger@localhost/test')

    Session = scoped_session(sessionmaker(transactional=True, autoflush=False, bind=engine))

    Base = declarative_base()
Mapped instances then make use of Session in the usual way.
Author: Mike Bayer and Jason Kirtland
Version: 0.3.1 or greater
associationproxy is used to create a simplified, read/write view of a relationship. It can be used to cherry-pick fields from a collection of related objects or to greatly simplify access to associated objects in an association relationship.
Consider this "association object" mapping:
    users_table = Table('users', metadata,
        Column('id', Integer, primary_key=True),
        Column('name', String(64)),
    )
    keywords_table = Table('keywords', metadata,
        Column('id', Integer, primary_key=True),
        Column('keyword', String(64))
    )
    userkeywords_table = Table('userkeywords', metadata,
        Column('user_id', Integer, ForeignKey("users.id"), primary_key=True),
        Column('keyword_id', Integer, ForeignKey("keywords.id"), primary_key=True)
    )

    class User(object):
        def __init__(self, name):
            self.name = name

    class Keyword(object):
        def __init__(self, keyword):
            self.keyword = keyword

    mapper(User, users_table, properties={
        'kw': relation(Keyword, secondary=userkeywords_table)
    })
    mapper(Keyword, keywords_table)
Above are three simple tables, modeling users, keywords and a many-to-many relationship between the two. These Keyword objects are little more than a container for a name, and accessing them via the relation is awkward:
    user = User('jek')
    user.kw.append(Keyword('cheese inspector'))

    print user.kw
    # [<__main__.Keyword object at 0xb791ea0c>]

    print user.kw[0].keyword
    # 'cheese inspector'

    print [keyword.keyword for keyword in user.kw]
    # ['cheese inspector']
With association_proxy you have a "view" of the relation that contains just the .keyword of the related objects. The proxy is a Python property, and unlike the mapper relation, is defined in your class:
    from sqlalchemy.ext.associationproxy import association_proxy

    class User(object):
        def __init__(self, name):
            self.name = name

        # proxy the 'keyword' attribute from the 'kw' relation
        keywords = association_proxy('kw', 'keyword')

    # ...

    >>> user.kw
    [<__main__.Keyword object at 0xb791ea0c>]
    >>> user.keywords
    ['cheese inspector']
    >>> user.keywords.append('snack ninja')
    >>> user.keywords
    ['cheese inspector', 'snack ninja']
    >>> user.kw
    [<__main__.Keyword object at 0x9272a4c>, <__main__.Keyword object at 0xb7b396ec>]
The proxy is read/write. New associated objects are created on demand when values are added to the proxy, and modifying or removing an entry through the proxy also affects the underlying collection. Above, Keyword.__init__ takes a single argument keyword, which maps conveniently to the value being set through the proxy; a creator function could have been used instead if more flexibility was required.
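The creation-on-demand behavior can be illustrated with a stripped-down, hypothetical stand-in for the proxy collection. This is not the extension's actual implementation, just a sketch of the mechanism: reads pull the target attribute off each associated object, while appends build a new associated object through a creator callable:

```python
class ProxyView(object):
    """Hypothetical stand-in for the proxied list (illustration only)."""
    def __init__(self, collection, attr, creator):
        self.collection = collection   # the underlying relation collection
        self.attr = attr               # attribute to expose from each object
        self.creator = creator         # builds new objects from plain values

    def __len__(self):
        return len(self.collection)

    def __getitem__(self, index):
        # reading through the view returns just the target attribute
        return getattr(self.collection[index], self.attr)

    def append(self, value):
        # a plain value appended to the view becomes a full object
        # in the underlying collection
        self.collection.append(self.creator(value))

class Keyword(object):
    def __init__(self, keyword):
        self.keyword = keyword

kw = [Keyword('cheese inspector')]
view = ProxyView(kw, 'keyword', Keyword)
view.append('snack ninja')   # creates a Keyword behind the scenes
values = [view[i] for i in range(len(view))]
# values == ['cheese inspector', 'snack ninja']; kw now holds two Keywords
```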
Because the proxies are backed by a regular relation collection, all of the usual hooks and patterns for using collections are still in effect. The most convenient behavior is the automatic setting of "parent"-type relationships on assignment. In the example above, nothing special had to be done to associate the Keyword to the User. Simply adding it to the collection is sufficient.
Association proxies are also useful for keeping association objects out of the way during regular use. For example, the userkeywords table might have a bunch of auditing columns that need to get updated when changes are made: columns that are updated but seldom, if ever, accessed in your application. A proxy can provide a very natural access pattern for the relation.
    from sqlalchemy.ext.associationproxy import association_proxy

    # users_table and keywords_table tables as above, then:
    userkeywords_table = Table('userkeywords', metadata,
        Column('user_id', Integer, ForeignKey("users.id"), primary_key=True),
        Column('keyword_id', Integer, ForeignKey("keywords.id"), primary_key=True),
        # add some auditing columns
        Column('updated_at', DateTime, default=datetime.now),
        Column('updated_by', Integer, default=get_current_uid, onupdate=get_current_uid),
    )

    def _create_uk_by_keyword(keyword):
        """A creator function."""
        return UserKeyword(keyword=keyword)

    class User(object):
        def __init__(self, name):
            self.name = name
        keywords = association_proxy('user_keywords', 'keyword', creator=_create_uk_by_keyword)

    class Keyword(object):
        def __init__(self, keyword):
            self.keyword = keyword
        def __repr__(self):
            return 'Keyword(%s)' % repr(self.keyword)

    class UserKeyword(object):
        def __init__(self, user=None, keyword=None):
            self.user = user
            self.keyword = keyword

    mapper(User, users_table, properties={
        'user_keywords': relation(UserKeyword)
    })
    mapper(Keyword, keywords_table)
    mapper(UserKeyword, userkeywords_table, properties={
        'user': relation(User),
        'keyword': relation(Keyword),
    })

    user = User('log')
    kw1 = Keyword('new_from_blammo')

    # Adding a Keyword requires creating a UserKeyword association object
    user.user_keywords.append(UserKeyword(user, kw1))

    # And accessing Keywords requires traversing UserKeywords
    print user.user_keywords[0]
    # <__main__.UserKeyword object at 0xb79bbbec>

    print user.user_keywords[0].keyword
    # Keyword('new_from_blammo')

    # Lots of work.

    # It's much easier to go through the association proxy!
    for kw in (Keyword('its_big'), Keyword('its_heavy'), Keyword('its_wood')):
        user.keywords.append(kw)

    print user.keywords
    # [Keyword('new_from_blammo'), Keyword('its_big'), Keyword('its_heavy'), Keyword('its_wood')]
    stocks_table = Table("stocks", metadata,
        Column('symbol', String(10), primary_key=True),
        Column('description', String(100), nullable=False),
        Column('last_price', Numeric)
    )
    brokers_table = Table("brokers", metadata,
        Column('id', Integer, primary_key=True),
        Column('name', String(100), nullable=False)
    )
    holdings_table = Table("holdings", metadata,
        Column('broker_id', Integer, ForeignKey('brokers.id'), primary_key=True),
        Column('symbol', String(10), ForeignKey('stocks.symbol'), primary_key=True),
        Column('shares', Integer)
    )
Above are three tables, modeling stocks, their brokers and the number of shares of a stock held by each broker. This situation is quite different from the association example above. shares is a property of the relation, an important one that we need to use all the time.
For this example, it would be very convenient if Broker objects had a dictionary collection that mapped Stock instances to the shares held for each. That's easy:
    from sqlalchemy.ext.associationproxy import association_proxy
    from sqlalchemy.orm.collections import attribute_mapped_collection

    def _create_holding(stock, shares):
        """A creator function, constructs Holdings from Stock and share quantity."""
        return Holding(stock=stock, shares=shares)

    class Broker(object):
        def __init__(self, name):
            self.name = name
        holdings = association_proxy('by_stock', 'shares', creator=_create_holding)

    class Stock(object):
        def __init__(self, symbol, description=None):
            self.symbol = symbol
            self.description = description
            self.last_price = 0

    class Holding(object):
        def __init__(self, broker=None, stock=None, shares=0):
            self.broker = broker
            self.stock = stock
            self.shares = shares

    mapper(Stock, stocks_table)
    mapper(Broker, brokers_table, properties={
        'by_stock': relation(Holding, collection_class=attribute_mapped_collection('stock'))
    })
    mapper(Holding, holdings_table, properties={
        'stock': relation(Stock),
        'broker': relation(Broker)
    })
Above, we've set up the by_stock relation collection to act as a dictionary, using the .stock property of each Holding as a key. Populating and accessing that dictionary manually is slightly inconvenient because of the complexity of the Holding association object:
    stock = Stock('ZZK')
    broker = Broker('paj')

    broker.by_stock[stock] = Holding(broker, stock, 10)

    print broker.by_stock[stock].shares
    # 10
The holdings proxy we've added to the Broker class hides the details of the Holding while also giving access to .shares:
    for stock in (Stock('JEK'), Stock('STPZ')):
        broker.holdings[stock] = 123

    for stock, shares in broker.holdings.items():
        print stock, shares

    # let's take a peek at that holdings_table after committing changes to the db
    print list(holdings_table.select().execute())
    # [(1, 'ZZK', 10), (1, 'JEK', 123), (1, 'STPZ', 123)]
Further examples can be found in the examples/ directory in the SQLAlchemy distribution.
The association_proxy convenience function is not present in SQLAlchemy versions 0.3.1 through 0.3.7; instead, instantiate the class directly:
    from sqlalchemy.ext.associationproxy import AssociationProxy

    class Article(object):
        keywords = AssociationProxy('keyword_associations', 'keyword')
Author: Jason Kirtland
orderinglist is a helper for mutable ordered relations. It will intercept list operations performed on a relation collection and automatically synchronize changes in list position with an attribute on the related objects. (See advdatamapping_properties_entitycollections for more information on the general pattern.)
Example: Two tables that store slides in a presentation. Each slide has a number of bullet points, displayed in order by the 'position' column on the bullets table. These bullets can be inserted and re-ordered by your end users, and you need to update the 'position' column of all affected rows when changes are made.
    slides_table = Table('Slides', metadata,
        Column('id', Integer, primary_key=True),
        Column('name', String))

    bullets_table = Table('Bullets', metadata,
        Column('id', Integer, primary_key=True),
        Column('slide_id', Integer, ForeignKey('Slides.id')),
        Column('position', Integer),
        Column('text', String))

    class Slide(object):
        pass
    class Bullet(object):
        pass

    mapper(Slide, slides_table, properties={
        'bullets': relation(Bullet, order_by=[bullets_table.c.position])
    })
    mapper(Bullet, bullets_table)
The standard relation mapping will produce a list-like attribute on each Slide containing all related Bullets, but coping with changes in ordering is totally your responsibility. If you insert a Bullet into that list, there is no magic: it won't have a position attribute unless you assign it one, and you'll need to manually renumber all the subsequent Bullets in the list to accommodate the insert.
An orderinglist can automate this and manage the 'position' attribute on all related bullets for you:
    mapper(Slide, slides_table, properties={
        'bullets': relation(Bullet,
                            collection_class=ordering_list('position'),
                            order_by=[bullets_table.c.position])
    })
    mapper(Bullet, bullets_table)

    s = Slide()
    s.bullets.append(Bullet())
    s.bullets.append(Bullet())
    s.bullets[1].position
    # 1
    s.bullets.insert(1, Bullet())
    s.bullets[2].position
    # 2
Use the ordering_list function to set up the collection_class on relations (as in the mapper example above). This implementation depends on the list starting in the proper order, so be sure to put an order_by on your relation.
ordering_list takes the name of the related object's ordering attribute as an argument. By default, the zero-based integer index of the object's position in the ordering_list is synchronized with the ordering attribute: index 0 will get position 0, index 1 position 1, etc. To start numbering at 1 or some other integer, provide count_from=1.
Ordering values are not limited to incrementing integers. Almost any scheme can be implemented by supplying a custom ordering_func that maps a Python list index to any value you require. See the module documentation for more information, and also check out the unit tests for examples of stepped numbering, alphabetical and Fibonacci numbering.
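An ordering_func receives the list index and the collection, and returns the value to store in the ordering attribute. Two small sketches, assuming the two-argument signature described in the module documentation (the function names here are illustrative):

```python
def count_from_1(index, collection):
    # equivalent to count_from=1: index 0 stores 1, index 1 stores 2, ...
    return index + 1

def stepped_by_ten(index, collection):
    # stepped numbering leaves gaps (0, 10, 20, ...) so a row can later
    # be repositioned without renumbering all of its neighbors
    return index * 10

# wired into a relation roughly as:
#   collection_class=ordering_list('position', ordering_func=stepped_by_ten)
```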
Author: Jonathan Ellis
SqlSoup creates mapped classes on the fly from tables, which are automatically reflected from the database based on name. It is essentially a nicer version of the "row data gateway" pattern.
    >>> from sqlalchemy.ext.sqlsoup import SqlSoup
    >>> db = SqlSoup('sqlite:///')
    >>> db.users.select(order_by=[db.users.c.name])
    [MappedUsers(name='Bhargan Basepair',email='basepair@example.edu',password='basepair',classname=None,admin=1),
     MappedUsers(name='Joe Student',email='student@example.edu',password='student',classname=None,admin=0)]
Full SqlSoup documentation is on the SQLAlchemy Wiki.