Updated connection to use MongoClient (#262, #274)

Ross Lawley 2013-04-22 15:07:15 +00:00
parent 80db9e7716
commit c16e6d74e6
10 changed files with 181 additions and 125 deletions

View File

@@ -4,6 +4,7 @@ Changelog
 Changes in 0.8.X
 ================
+- Updated connection to use MongoClient (#262, #274)
 - Fixed db_alias and inherited Documents (#143)
 - Documentation update for document errors (#124)
 - Deprecated `get_or_create` (#35)

View File

@@ -29,7 +29,7 @@ name - just supply the uri as the :attr:`host` to
 ReplicaSets
 ===========
-MongoEngine now supports :func:`~pymongo.replica_set_connection.ReplicaSetConnection`
+MongoEngine supports :class:`~pymongo.mongo_replica_set_client.MongoReplicaSetClient`
 to use them please use a URI style connection and provide the `replicaSet` name in the
 connection kwargs.
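
For illustration, a minimal sketch of such a connection (database, hosts and
set name are invented; not part of this commit): ::

    from mongoengine import connect

    # URI-style host plus the replicaSet name in the connection kwargs
    connect('mydb', host='mongodb://localhost:27017,localhost:27018/mydb',
            replicaSet='rs0')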

View File

@@ -1,15 +1,15 @@
-=========
+#########
 Upgrading
-=========
+#########
 0.7 to 0.8
-==========
+**********
 Inheritance
------------
+===========
 Data Model
-~~~~~~~~~~
+----------
 The inheritance model has changed, we no longer need to store an array of
 :attr:`types` with the model we can just use the classname in :attr:`_cls`.
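
As a hedged illustration of the new data model (class names invented; not part
of this commit): ::

    from mongoengine import Document, StringField

    class Animal(Document):
        meta = {'allow_inheritance': True}  # 0.8: inheritance is opt-in
        name = StringField()

    class Dog(Animal):
        pass

    # 0.7 stored {'_types': ['Animal', 'Animal.Dog'], '_cls': 'Animal.Dog', ...};
    # 0.8 keeps only the class name:
    print(Dog(name='Rex').to_mongo())  # roughly {'_cls': 'Animal.Dog', 'name': 'Rex'}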
@@ -44,7 +44,7 @@ inherited classes like so: ::
 Document Definition
-~~~~~~~~~~~~~~~~~~~
+-------------------
 The default for inheritance has changed - it's now off by default and
 :attr:`_cls` will not be stored automatically with the class. So if you extend
@@ -77,7 +77,7 @@ the case and the data is set only in the ``document._data`` dictionary: ::
     AttributeError: 'Animal' object has no attribute 'size'
 Querysets
-~~~~~~~~~
+=========
 Querysets now return clones and should no longer be considered editable in
 place. This brings us in line with how Django's querysets work and removes a
@@ -98,8 +98,47 @@ update your code like so: ::
     mammals = Animal.objects(type="mammal").filter(order="Carnivora") # The final queryset is assigned to mammals
     [m for m in mammals] # This will return all carnivores
+Client
+======
+PyMongo 2.4 came with a new connection client, MongoClient_, and started the
+deprecation of the old :class:`~pymongo.connection.Connection`. MongoEngine
+now uses the latest `MongoClient` for connections. By default operations were
+`safe` but if you turned them off or used the connection directly this will
+impact your queries.
+
+Querysets
+---------
+
+Safe
+^^^^
+`safe` has been deprecated in the new MongoClient connection. Please use
+`write_concern` instead. As `safe` always defaulted to `True` normally no code
+change is required. To disable confirmation of the write just pass `{"w": 0}`
+eg: ::
+
+    # Old
+    Animal(name="Dinosaur").save(safe=False)
+
+    # New code:
+    Animal(name="Dinosaur").save(write_concern={"w": 0})
+
+Write Concern
+^^^^^^^^^^^^^
+`write_options` has been replaced with `write_concern` to bring it in line with
+pymongo. To upgrade, simply rename any instances of the `write_options`
+keyword to `write_concern` like so::
+
+    # Old code:
+    Animal(name="Dinosaur").save(write_options={"w": 2})
+
+    # New code:
+    Animal(name="Dinosaur").save(write_concern={"w": 2})
+
 Indexes
--------
+=======
 Index methods are no longer tied to querysets but rather to the document class.
 Although `QuerySet._ensure_indexes` and `QuerySet.ensure_index` still exist.
@@ -107,17 +146,19 @@ They should be replaced with :func:`~mongoengine.Document.ensure_indexes` /
 :func:`~mongoengine.Document.ensure_index`.
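
A hedged sketch of the document-level calls (model and index are invented): ::

    from mongoengine import Document, StringField

    class Animal(Document):
        name = StringField()
        meta = {'indexes': ['name']}

    # 0.7 style: Animal.objects.ensure_index('name')
    Animal.ensure_index('name')   # single index, now on the class
    Animal.ensure_indexes()       # build everything declared in meta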
 SequenceFields
---------------
+==============
 :class:`~mongoengine.fields.SequenceField` now inherits from `BaseField` to
 allow flexible storage of the calculated value. As such MIN and MAX settings
 are no longer handled.
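
For instance, a minimal sketch (class and field are invented): ::

    from mongoengine import Document, SequenceField

    class Invoice(Document):
        number = SequenceField()  # counter stored via the BaseField machinery

    Invoice().save()  # number == 1
    Invoice().save()  # number == 2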
+.. _MongoClient: http://blog.mongodb.org/post/36666163412/introducing-mongoclient
 0.6 to 0.7
-==========
+**********
 Cascade saves
--------------
+=============
 Saves will raise a `FutureWarning` if they cascade and cascade hasn't been set
 to True. This is because in 0.8 it will default to False. If you require
@@ -135,7 +176,7 @@ via `save` eg ::
 Remember: cascading saves **do not** cascade through lists.
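
A hedged sketch of opting in explicitly (model names invented): ::

    from mongoengine import Document, ReferenceField, StringField

    class Author(Document):
        name = StringField()

    class Post(Document):
        author = ReferenceField(Author)
        meta = {'cascade': True}  # class-wide default, checked via _meta

    post = Post(author=Author(name='Ross'))
    post.save(cascade=True)       # or opt in per call; no FutureWarning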
 ReferenceFields
----------------
+===============
 ReferenceFields now can store references as ObjectId strings instead of DBRefs.
 This will become the default in 0.8 and if `dbref` is not set a `FutureWarning`
@@ -164,7 +205,7 @@ migrate ::
 item_frequencies
-----------------
+================
 In the 0.6 series we added support for null / zero / false values in
 item_frequencies. A side effect was to return keys in the value they are
@@ -173,14 +214,14 @@ updated to handle native types rather than string keys for the results of
 item frequency queries.
 BinaryFields
-------------
+============
 Binary fields have been updated so that they are native binary types. If you
 previously were doing `str` comparisons with binary field values you will have
 to update and wrap the value in a `str`.
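
A hedged illustration (class name invented): ::

    from bson import Binary
    from mongoengine import Document, BinaryField

    class Attachment(Document):
        data = BinaryField()

    a = Attachment(data=Binary(b'\x00\x01'))
    # 0.6 allowed a raw string comparison; from 0.7 wrap the value:
    assert str(a.data) == str(Binary(b'\x00\x01'))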
 0.5 to 0.6
-==========
+**********
 Embedded Documents - if you had a `pk` field you will have to rename it from
 `_id` to `pk` as pk is no longer a property of Embedded Documents.
@@ -200,13 +241,13 @@ don't define :attr:`allow_inheritance` in their meta.
 You may need to update pyMongo to 2.0 for use with Sharding.
 0.4 to 0.5
-===========
+**********
 There have been the following backwards incompatibilities from 0.4 to 0.5. The
 main areas of change are: choices in fields, map_reduce and collection names.
 Choice options:
----------------
+===============
 Are now expected to be an iterable of tuples, with the first element in each
 tuple being the actual value to be stored. The second element is the
@@ -214,7 +255,7 @@ human-readable name for the option.
 PyMongo / MongoDB
------------------
+=================
 map reduce now requires pymongo 1.11+. The pymongo `merge_output` and
 `reduce_output` parameters have been deprecated.
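
For example, a hedged sketch of naming the output collection instead (model
and JS functions invented): ::

    from mongoengine import Document, StringField

    class BlogPost(Document):
        tag = StringField()

    map_f = "function () { emit(this.tag, 1); }"
    reduce_f = "function (key, values) { return Array.sum(values); }"
    # merge_output / reduce_output are deprecated; name the output instead:
    results = BlogPost.objects.map_reduce(map_f, reduce_f, output='tag_counts')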
@@ -228,7 +269,7 @@ such the following have been changed:
 Default collection naming
--------------------------
+=========================
 Previously it was just lowercase, it's now much more pythonic and readable as
 it's lowercase and underscores, previously ::
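
A hedged illustration of the new scheme (class name invented): ::

    class MyAceDocument(Document):
        pass

    MyAceDocument._meta['collection']  # 0.4: "myacedocument"
                                       # 0.5: "my_ace_document"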

View File

@@ -1,5 +1,5 @@
 import pymongo
-from pymongo import Connection, ReplicaSetConnection, uri_parser
+from pymongo import MongoClient, MongoReplicaSetClient, uri_parser
 __all__ = ['ConnectionError', 'connect', 'register_connection',
@@ -112,15 +112,15 @@ def get_connection(alias=DEFAULT_CONNECTION_NAME, reconnect=False):
             conn_settings['slaves'] = slaves
             conn_settings.pop('read_preference', None)
-        connection_class = Connection
+        connection_class = MongoClient
         if 'replicaSet' in conn_settings:
             conn_settings['hosts_or_uri'] = conn_settings.pop('host', None)
-            # Discard port since it can't be used on ReplicaSetConnection
+            # Discard port since it can't be used on MongoReplicaSetClient
            conn_settings.pop('port', None)
            # Discard replicaSet if not base string
            if not isinstance(conn_settings['replicaSet'], basestring):
                conn_settings.pop('replicaSet', None)
-            connection_class = ReplicaSetConnection
+            connection_class = MongoReplicaSetClient
         try:
             _connections[alias] = connection_class(**conn_settings)
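
In effect (a hedged usage sketch; database names and hosts are invented): ::

    from mongoengine import connect
    from mongoengine.connection import get_connection

    connect('mydb')                              # plain host -> MongoClient
    conn = get_connection()
    assert conn.__class__.__name__ == 'MongoClient'

    connect('mydb', alias='rs',
            host='mongodb://h1:27017,h2:27017/mydb',
            replicaSet='rs0')                    # -> MongoReplicaSetClient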

View File

@@ -88,7 +88,7 @@ class SessionStore(SessionBase):
         s.session_data = self._get_session(no_load=must_create)
         s.expire_date = self.get_expiry_date()
         try:
-            s.save(force_insert=must_create, safe=True)
+            s.save(force_insert=must_create)
         except OperationError:
             if must_create:
                 raise CreateError

View File

@@ -142,7 +142,7 @@ class Document(BaseDocument):
                options.get('size') != max_size:
                msg = (('Cannot create collection "%s" as a capped '
                        'collection as it already exists')
                       % cls._collection)
                raise InvalidCollectionError(msg)
            else:
                # Create the collection as a capped collection
@@ -158,28 +158,24 @@ class Document(BaseDocument):
         cls.ensure_indexes()
         return cls._collection
-    def save(self, safe=True, force_insert=False, validate=True, clean=True,
-             write_options=None, cascade=None, cascade_kwargs=None,
+    def save(self, force_insert=False, validate=True, clean=True,
+             write_concern=None, cascade=None, cascade_kwargs=None,
              _refs=None, **kwargs):
         """Save the :class:`~mongoengine.Document` to the database. If the
         document already exists, it will be updated, otherwise it will be
         created.
-        If ``safe=True`` and the operation is unsuccessful, an
-        :class:`~mongoengine.OperationError` will be raised.
-
-        :param safe: check if the operation succeeded before returning
         :param force_insert: only try to create a new document, don't allow
             updates of existing documents
         :param validate: validates the document; set to ``False`` to skip.
         :param clean: call the document clean method, requires `validate` to be
             True.
-        :param write_options: Extra keyword arguments are passed down to
+        :param write_concern: Extra keyword arguments are passed down to
             :meth:`~pymongo.collection.Collection.save` OR
             :meth:`~pymongo.collection.Collection.insert`
             which will be used as options for the resultant
             ``getLastError`` command. For example,
-            ``save(..., write_options={w: 2, fsync: True}, ...)`` will
+            ``save(..., write_concern={w: 2, fsync: True}, ...)`` will
             wait until at least two servers have recorded the write and
             will force an fsync on the primary server.
         :param cascade: Sets the flag for cascading saves. You can set a
@@ -205,8 +201,8 @@ class Document(BaseDocument):
         if validate:
             self.validate(clean=clean)
-        if not write_options:
-            write_options = {}
+        if not write_concern:
+            write_concern = {}
         doc = self.to_mongo()
@@ -216,11 +212,9 @@ class Document(BaseDocument):
         collection = self._get_collection()
         if created:
             if force_insert:
-                object_id = collection.insert(doc, safe=safe,
-                                              **write_options)
+                object_id = collection.insert(doc, **write_concern)
             else:
-                object_id = collection.save(doc, safe=safe,
-                                            **write_options)
+                object_id = collection.save(doc, **write_concern)
         else:
             object_id = doc['_id']
             updates, removals = self._delta()
@@ -247,7 +241,7 @@ class Document(BaseDocument):
                 update_query["$unset"] = removals
             if updates or removals:
                 last_error = collection.update(select_dict, update_query,
-                                               upsert=upsert, safe=safe, **write_options)
+                                               upsert=upsert, **write_concern)
                 created = is_new_object(last_error)
         warn_cascade = not cascade and 'cascade' not in self._meta
@@ -255,10 +249,9 @@ class Document(BaseDocument):
                    if cascade is None else cascade)
         if cascade:
             kwargs = {
-                "safe": safe,
                 "force_insert": force_insert,
                 "validate": validate,
-                "write_options": write_options,
+                "write_concern": write_concern,
                 "cascade": cascade
             }
             if cascade_kwargs:  # Allow granular control over cascades
@@ -305,7 +298,7 @@ class Document(BaseDocument):
             if ref and ref_id not in _refs:
                 if warn_cascade:
                     msg = ("Cascading saves will default to off in 0.8, "
                            "please explicitly set `.save(cascade=True)`")
                     warnings.warn(msg, FutureWarning)
                 _refs.append(ref_id)
                 kwargs["_refs"] = _refs
@@ -344,16 +337,21 @@ class Document(BaseDocument):
         # Need to add shard key to query, or you get an error
         return self._qs.filter(**self._object_key).update_one(**kwargs)
-    def delete(self, safe=False):
+    def delete(self, **write_concern):
         """Delete the :class:`~mongoengine.Document` from the database. This
         will only take effect if the document has been previously saved.
-        :param safe: check if the operation succeeded before returning
+        :param write_concern: Extra keyword arguments are passed down which
+            will be used as options for the resultant
+            ``getLastError`` command. For example,
+            ``save(..., write_concern={w: 2, fsync: True}, ...)`` will
+            wait until at least two servers have recorded the write and
+            will force an fsync on the primary server.
         """
         signals.pre_delete.send(self.__class__, document=self)
         try:
-            self._qs.filter(**self._object_key).delete(safe=safe)
+            self._qs.filter(**self._object_key).delete(write_concern=write_concern)
         except pymongo.errors.OperationFailure, err:
             message = u'Could not delete document (%s)' % err.message
             raise OperationError(message)
@@ -428,9 +426,8 @@ class Document(BaseDocument):
         .. versionchanged:: 0.6 Now chainable
         """
         id_field = self._meta['id_field']
-        obj = self._qs.filter(
-            **{id_field: self[id_field]}
-        ).limit(1).select_related(max_depth=max_depth)
+        obj = self._qs.filter(**{id_field: self[id_field]}
+                              ).limit(1).select_related(max_depth=max_depth)
         if obj:
             obj = obj[0]
         else:
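
Taken together, a hedged sketch of the updated call sites (model invented): ::

    from mongoengine import Document, StringField

    class Page(Document):
        title = StringField()

    page = Page(title='hello')
    page.save(write_concern={'w': 2})  # replaces safe=True / write_options={...}
    page.delete(w=1)                   # keyword args collected as **write_concern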

View File

@@ -221,7 +221,7 @@ class QuerySet(object):
         """
         return self._document(**kwargs).save()
-    def get_or_create(self, write_options=None, auto_save=True,
+    def get_or_create(self, write_concern=None, auto_save=True,
                       *q_objs, **query):
         """Retrieve unique object or create, if it doesn't exist. Returns a
         tuple of ``(object, created)``, where ``object`` is the retrieved or
@@ -239,9 +239,9 @@ class QuerySet(object):
             don't accidentally duplicate data when using this method. This is
             now scheduled to be removed before 1.0
-        :param write_options: optional extra keyword arguments used if we
+        :param write_concern: optional extra keyword arguments used if we
             have to create a new document.
-            Passes any write_options onto :meth:`~mongoengine.Document.save`
+            Passes any write_concern onto :meth:`~mongoengine.Document.save`
         :param auto_save: if the object is to be saved automatically if
             not found.
@@ -266,7 +266,7 @@ class QuerySet(object):
             doc = self._document(**query)
             if auto_save:
-                doc.save(write_options=write_options)
+                doc.save(write_concern=write_concern)
             return doc, True
     def first(self):
@@ -279,18 +279,13 @@ class QuerySet(object):
             result = None
         return result
-    def insert(self, doc_or_docs, load_bulk=True, safe=False,
-               write_options=None):
+    def insert(self, doc_or_docs, load_bulk=True, write_concern=None):
         """bulk insert documents
-        If ``safe=True`` and the operation is unsuccessful, an
-        :class:`~mongoengine.OperationError` will be raised.
-
         :param docs_or_doc: a document or list of documents to be inserted
         :param load_bulk (optional): If True returns the list of document
             instances
-        :param safe: check if the operation succeeded before returning
-        :param write_options: Extra keyword arguments are passed down to
+        :param write_concern: Extra keyword arguments are passed down to
             :meth:`~pymongo.collection.Collection.insert`
             which will be used as options for the resultant
             ``getLastError`` command. For example,
@@ -305,9 +300,8 @@ class QuerySet(object):
         """
         Document = _import_class('Document')
-        if not write_options:
-            write_options = {}
-        write_options.update({'safe': safe})
+        if not write_concern:
+            write_concern = {}
         docs = doc_or_docs
         return_one = False
@@ -319,7 +313,7 @@ class QuerySet(object):
         for doc in docs:
             if not isinstance(doc, self._document):
                 msg = ("Some documents inserted aren't instances of %s"
                        % str(self._document))
                 raise OperationError(msg)
             if doc.pk and not doc._created:
                 msg = "Some documents have ObjectIds use doc.update() instead"
@@ -328,7 +322,7 @@ class QuerySet(object):
         signals.pre_bulk_insert.send(self._document, documents=docs)
         try:
-            ids = self._collection.insert(raw, **write_options)
+            ids = self._collection.insert(raw, **write_concern)
         except pymongo.errors.OperationFailure, err:
             message = 'Could not save document (%s)'
             if re.match('^E1100[01] duplicate key', unicode(err)):
@@ -340,7 +334,7 @@ class QuerySet(object):
         if not load_bulk:
             signals.post_bulk_insert.send(
                 self._document, documents=docs, loaded=False)
             return return_one and ids[0] or ids
         documents = self.in_bulk(ids)
@@ -348,7 +342,7 @@ class QuerySet(object):
         for obj_id in ids:
             results.append(documents.get(obj_id))
         signals.post_bulk_insert.send(
             self._document, documents=results, loaded=True)
         return return_one and results[0] or results
     def count(self):
@@ -358,10 +352,15 @@ class QuerySet(object):
             return 0
         return self._cursor.count(with_limit_and_skip=True)
-    def delete(self, safe=False):
+    def delete(self, write_concern=None):
         """Delete the documents matched by the query.
-        :param safe: check if the operation succeeded before returning
+        :param write_concern: Extra keyword arguments are passed down which
+            will be used as options for the resultant
+            ``getLastError`` command. For example,
+            ``save(..., write_concern={w: 2, fsync: True}, ...)`` will
+            wait until at least two servers have recorded the write and
+            will force an fsync on the primary server.
         """
         queryset = self.clone()
         doc = queryset._document
@@ -370,11 +369,14 @@ class QuerySet(object):
             signals.pre_delete.has_receivers_for(self._document) or
             signals.post_delete.has_receivers_for(self._document))
+        if not write_concern:
+            write_concern = {}
+
         # Handle deletes where skips or limits have been applied or has a
         # delete signal
         if queryset._skip or queryset._limit or has_delete_signal:
             for doc in queryset:
-                doc.delete(safe=safe)
+                doc.delete(write_concern=write_concern)
             return
         delete_rules = doc._meta.get('delete_rules') or {}
@@ -386,7 +388,7 @@ class QuerySet(object):
                 if rule == DENY and document_cls.objects(
                         **{field_name + '__in': self}).count() > 0:
                     msg = ("Could not delete document (%s.%s refers to it)"
                            % (document_cls.__name__, field_name))
                     raise OperationError(msg)
             for rule_entry in delete_rules:
@@ -396,36 +398,38 @@ class QuerySet(object):
             ref_q = document_cls.objects(**{field_name + '__in': self})
             ref_q_count = ref_q.count()
             if (doc != document_cls and ref_q_count > 0
                     or (doc == document_cls and ref_q_count > 0)):
-                ref_q.delete(safe=safe)
+                ref_q.delete(write_concern=write_concern)
             elif rule == NULLIFY:
                 document_cls.objects(**{field_name + '__in': self}).update(
-                    safe_update=safe,
-                    **{'unset__%s' % field_name: 1})
+                    write_concern=write_concern, **{'unset__%s' % field_name: 1})
             elif rule == PULL:
                 document_cls.objects(**{field_name + '__in': self}).update(
-                    safe_update=safe,
+                    write_concern=write_concern,
                     **{'pull_all__%s' % field_name: self})
-        queryset._collection.remove(queryset._query, safe=safe)
+        queryset._collection.remove(queryset._query, write_concern=write_concern)
-    def update(self, safe_update=True, upsert=False, multi=True,
-               write_options=None, **update):
-        """Perform an atomic update on the fields matched by the query. When
-        ``safe_update`` is used, the number of affected documents is returned.
-
-        :param safe_update: check if the operation succeeded before returning
+    def update(self, upsert=False, multi=True, write_concern=None, **update):
+        """Perform an atomic update on the fields matched by the query.
+
         :param upsert: Any existing document with that "_id" is overwritten.
-        :param write_options: extra keyword arguments for
-            :meth:`~pymongo.collection.Collection.update`
+        :param multi: Update multiple documents.
+        :param write_concern: Extra keyword arguments are passed down which
+            will be used as options for the resultant
+            ``getLastError`` command. For example,
+            ``save(..., write_concern={w: 2, fsync: True}, ...)`` will
+            wait until at least two servers have recorded the write and
+            will force an fsync on the primary server.
+        :param update: Django-style update keyword arguments
         .. versionadded:: 0.2
         """
         if not update:
             raise OperationError("No update parameters, would remove data")
-        if not write_options:
-            write_options = {}
+        if not write_concern:
+            write_concern = {}
         queryset = self.clone()
         query = queryset._query
@@ -441,8 +445,7 @@ class QuerySet(object):
         try:
             ret = queryset._collection.update(query, update, multi=multi,
-                                              upsert=upsert, safe=safe_update,
-                                              **write_options)
+                                              upsert=upsert, **write_concern)
             if ret is not None and 'n' in ret:
                 return ret['n']
         except pymongo.errors.OperationFailure, err:
@@ -451,21 +454,21 @@ class QuerySet(object):
             raise OperationError(message)
         raise OperationError(u'Update failed (%s)' % unicode(err))
-    def update_one(self, safe_update=True, upsert=False, write_options=None,
-                   **update):
-        """Perform an atomic update on first field matched by the query. When
-        ``safe_update`` is used, the number of affected documents is returned.
-
-        :param safe_update: check if the operation succeeded before returning
+    def update_one(self, upsert=False, write_concern=None, **update):
+        """Perform an atomic update on first field matched by the query.
+
         :param upsert: Any existing document with that "_id" is overwritten.
-        :param write_options: extra keyword arguments for
-            :meth:`~pymongo.collection.Collection.update`
+        :param write_concern: Extra keyword arguments are passed down which
+            will be used as options for the resultant
+            ``getLastError`` command. For example,
+            ``save(..., write_concern={w: 2, fsync: True}, ...)`` will
+            wait until at least two servers have recorded the write and
+            will force an fsync on the primary server.
         :param update: Django-style update keyword arguments
         .. versionadded:: 0.2
         """
-        return self.update(safe_update=True, upsert=upsert, multi=False,
-                           write_options=None, **update)
+        return self.update(upsert=upsert, multi=False, write_concern=None, **update)
     def with_id(self, object_id):
         """Retrieve the object matching the id provided. Uses `object_id` only
@@ -498,7 +501,7 @@ class QuerySet(object):
         if self._scalar:
             for doc in docs:
                 doc_map[doc['_id']] = self._get_scalar(
                     self._document._from_son(doc))
         elif self._as_pymongo:
             for doc in docs:
                 doc_map[doc['_id']] = self._get_as_pymongo(doc)
@@ -523,10 +526,10 @@ class QuerySet(object):
        c = self.__class__(self._document, self._collection_obj)
        copy_props = ('_mongo_query', '_initial_query', '_none', '_query_obj',
                      '_where_clause', '_loaded_fields', '_ordering', '_snapshot',
                      '_timeout', '_class_check', '_slave_okay', '_read_preference',
                      '_iter', '_scalar', '_as_pymongo', '_as_pymongo_coerce',
                      '_limit', '_skip', '_hint', '_auto_dereference')
        for prop in copy_props:
            val = getattr(self, prop)
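
A hedged sketch of the renamed queryset keywords (model invented): ::

    from mongoengine import Document, StringField

    class Person(Document):
        name = StringField()

    docs = [Person(name='A'), Person(name='B')]
    Person.objects.insert(docs, write_concern={'w': 1})  # was safe= / write_options=
    Person.objects.update(set__name='Ross', write_concern={'w': 1})
    Person.objects.delete(write_concern={'w': 1})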

View File

@@ -314,19 +314,27 @@ class IndexesTest(unittest.TestCase):
         """
         class User(Document):
             meta = {
+                'allow_inheritance': True,
                 'indexes': ['user_guid'],
                 'auto_create_index': False
             }
             user_guid = StringField(required=True)
+
+        class MongoUser(User):
+            pass
+
         User.drop_collection()
-        u = User(user_guid='123')
-        u.save()
-        self.assertEqual(1, User.objects.count())
+        User(user_guid='123').save()
+        MongoUser(user_guid='123').save()
+        self.assertEqual(2, User.objects.count())
         info = User.objects._collection.index_information()
         self.assertEqual(info.keys(), ['_id_'])
+        User.ensure_indexes()
+        info = User.objects._collection.index_information()
+        self.assertEqual(info.keys(), ['_cls_1_user_guid_1', '_id_'])
         User.drop_collection()
     def test_embedded_document_index(self):

View File

@@ -278,24 +278,24 @@ class QuerySetTest(unittest.TestCase):
         query = query.filter(boolfield=True)
         self.assertEquals(query.count(), 1)
-    def test_update_write_options(self):
-        """Test that passing write_options works"""
+    def test_update_write_concern(self):
+        """Test that passing write_concern works"""
         self.Person.drop_collection()
-        write_options = {"fsync": True}
+        write_concern = {"fsync": True}
         author, created = self.Person.objects.get_or_create(
-            name='Test User', write_options=write_options)
-        author.save(write_options=write_options)
+            name='Test User', write_concern=write_concern)
+        author.save(write_concern=write_concern)
         self.Person.objects.update(set__name='Ross',
-                                   write_options=write_options)
+                                   write_concern=write_concern)
         author = self.Person.objects.first()
         self.assertEqual(author.name, 'Ross')
-        self.Person.objects.update_one(set__name='Test User', write_options=write_options)
+        self.Person.objects.update_one(set__name='Test User', write_concern=write_concern)
         author = self.Person.objects.first()
         self.assertEqual(author.name, 'Test User')
@@ -592,10 +592,17 @@ class QuerySetTest(unittest.TestCase):
             blogs.append(Blog(title="post %s" % i, posts=[post1, post2]))
         Blog.objects.insert(blogs, load_bulk=False)
         self.assertEqual(q, 1)  # 1 for the insert
+        Blog.drop_collection()
+        with query_counter() as q:
+            self.assertEqual(q, 0)
+            Blog.ensure_indexes()
+            self.assertEqual(q, 1)
         Blog.objects.insert(blogs)
         self.assertEqual(q, 3)  # 1 for insert, and 1 for in bulk fetch (3 in total)
         Blog.drop_collection()
@@ -619,7 +626,7 @@ class QuerySetTest(unittest.TestCase):
         self.assertRaises(OperationError, throw_operation_error)
         # Test can insert new doc
-        new_post = Blog(title="code", id=ObjectId())
+        new_post = Blog(title="code123", id=ObjectId())
         Blog.objects.insert(new_post)
         # test handles other classes being inserted
@@ -655,13 +662,13 @@ class QuerySetTest(unittest.TestCase):
         Blog.objects.insert([blog1, blog2])
         def throw_operation_error_not_unique():
-            Blog.objects.insert([blog2, blog3], safe=True)
+            Blog.objects.insert([blog2, blog3])
         self.assertRaises(NotUniqueError, throw_operation_error_not_unique)
         self.assertEqual(Blog.objects.count(), 2)
-        Blog.objects.insert([blog2, blog3], write_options={
-            'continue_on_error': True})
+        Blog.objects.insert([blog2, blog3], write_concern={"w": 0,
+            'continue_on_error': True})
         self.assertEqual(Blog.objects.count(), 3)
     def test_get_changed_fields_query_count(self):

View File

@@ -10,7 +10,6 @@ from bson.tz_util import utc
 from mongoengine import *
 import mongoengine.connection
 from mongoengine.connection import get_db, get_connection, ConnectionError
-from mongoengine.context_managers import switch_db
 class ConnectionTest(unittest.TestCase):
@@ -26,7 +25,7 @@ class ConnectionTest(unittest.TestCase):
         connect('mongoenginetest')
         conn = get_connection()
-        self.assertTrue(isinstance(conn, pymongo.connection.Connection))
+        self.assertTrue(isinstance(conn, pymongo.mongo_client.MongoClient))
         db = get_db()
         self.assertTrue(isinstance(db, pymongo.database.Database))
@@ -34,7 +33,7 @@ class ConnectionTest(unittest.TestCase):
         connect('mongoenginetest2', alias='testdb')
         conn = get_connection('testdb')
-        self.assertTrue(isinstance(conn, pymongo.connection.Connection))
+        self.assertTrue(isinstance(conn, pymongo.mongo_client.MongoClient))
     def test_connect_uri(self):
         """Ensure that the connect() method works properly with uri's
@@ -52,7 +51,7 @@ class ConnectionTest(unittest.TestCase):
         connect("testdb_uri", host='mongodb://username:password@localhost/mongoenginetest')
         conn = get_connection()
-        self.assertTrue(isinstance(conn, pymongo.connection.Connection))
+        self.assertTrue(isinstance(conn, pymongo.mongo_client.MongoClient))
         db = get_db()
         self.assertTrue(isinstance(db, pymongo.database.Database))
@@ -65,7 +64,7 @@ class ConnectionTest(unittest.TestCase):
         self.assertRaises(ConnectionError, get_connection)
         conn = get_connection('testdb')
-        self.assertTrue(isinstance(conn, pymongo.connection.Connection))
+        self.assertTrue(isinstance(conn, pymongo.mongo_client.MongoClient))
         db = get_db('testdb')
         self.assertTrue(isinstance(db, pymongo.database.Database))