Columns: id (string, 1–8 chars); text (string, 72–9.81M chars); addition_count (int64, 0–10k); commit_subject (string, 0–3.7k chars); deletion_count (int64, 0–8.43k); file_extension (string, 0–32 chars); lang (string, 1–94 chars); license (10 classes); repo_name (string, 9–59 chars)
1200
<NME> index.rst <BEF> Queries: PostgreSQL Simplified ============================== *Queries* is a BSD licensed opinionated wrapper of the psycopg2_ library for interacting with PostgreSQL. |Version| |License| The popular psycopg2_ package is a full-featured python client. Unfortunately as a developer, you're often repeating the same steps to get started with your applications that use it. Queries aims to reduce the complexity of psycopg2 while adding additional features to make writing PostgreSQL client applications both fast and easy. *Key features include*: - Simplified API - Support of Python 2.7+ and 3.4+ - PyPy support via psycopg2cffi_ - Asynchronous support for Tornado_ - Connection information provided by URI - Query results delivered as a generator based iterators - Automatically registered data-type support for UUIDs, Unicode and Unicode Arrays - Ability to directly access psycopg2_ :py:class:`~psycopg2.extensions.connection` and :py:class:`~psycopg2.extensions.cursor` objects - Internal connection pooling Installation ------------ Queries can be installed via the `Python Package Index <https://pypi.python.org/pypi/queries>`_ and can be installed by running :command:`easy_install queries` or :command:`pip install queries` When installing Queries, ``pip`` or ``easy_install`` will automatically install the proper dependencies for your platform. Contents -------- .. toctree:: :maxdepth: 1 usage session results tornado_session pool examples/index.rst history Issues ------ Please report any issues to the Github repo at `https://github.com/gmr/queries/issues <https://github.com/gmr/queries/issues>`_ Issues ------ Please report any issues to the Github repo at `https://github.com/gmr/queries/issues <https://github.com/gmr/rabbitpy/queries>`_ Source ------ Queries is inspired by `Kenneth Reitz's <https://github.com/kennethreitz/>`_ awesome work on `requests <http://docs.python-requests.org/en/latest/>`_. 
Indices and tables ================== * :ref:`genindex` * :ref:`modindex` * :ref:`search` .. _pypi: https://pypi.python.org/pypi/queries .. _psycopg2: https://pypi.python.org/pypi/psycopg2 .. _documentation: https://queries.readthedocs.org .. _URI: http://www.postgresql.org/docs/9.3/static/libpq-connect.html#LIBPQ-CONNSTRING .. _pgsql_wrapper: https://pypi.python.org/pypi/pgsql_wrapper .. _Tornado: http://tornadoweb.org .. _PEP343: http://legacy.python.org/dev/peps/pep-0343/ .. _psycopg2cffi: https://pypi.python.org/pypi/psycopg2cffi .. |Version| image:: https://img.shields.io/pypi/v/queries.svg? :target: https://pypi.python.org/pypi/queries .. |License| image:: https://img.shields.io/github/license/gmr/queries.svg? :target: https://github.com/gmr/queries :target: https://travis-ci.org/gmr/queries .. |Downloads| image:: https://pypip.in/d/queries/badge.svg? :target: https://pypi.python.org/pypi/queries <MSG> Merge pull request #5 from jchassoul/patch-1 Update index.rst <DFF> @@ -51,7 +51,7 @@ Contents Issues ------ -Please report any issues to the Github repo at `https://github.com/gmr/queries/issues <https://github.com/gmr/rabbitpy/queries>`_ +Please report any issues to the Github repo at `https://github.com/gmr/queries/issues <https://github.com/gmr/queries/issues>`_ Source ------ @@ -84,4 +84,4 @@ Indices and tables :target: https://travis-ci.org/gmr/queries .. |Downloads| image:: https://pypip.in/d/queries/badge.svg? - :target: https://pypi.python.org/pypi/queries \ No newline at end of file + :target: https://pypi.python.org/pypi/queries
2
Merge pull request #5 from jchassoul/patch-1
2
.rst
rst
bsd-3-clause
gmr/queries
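The index.rst record above lists "Connection information provided by URI" as a key feature, and the session.py record further down calls a `utils.uri_to_kwargs` helper for this. As a minimal stdlib-only sketch (not the library's actual implementation, which also handles query-string options and defaults), a `postgresql://` URI can be decomposed into connection keyword arguments like so:

```python
from urllib.parse import urlparse

def uri_to_kwargs(uri):
    """Simplified sketch of splitting a PostgreSQL URI into connect kwargs.

    The real queries.utils.uri_to_kwargs covers more cases; this version
    extracts only the common fields for illustration.
    """
    parts = urlparse(uri)
    return {
        'host': parts.hostname,
        'port': parts.port or 5432,       # libpq's default port
        'user': parts.username,
        'password': parts.password,
        'dbname': parts.path.lstrip('/') or None,
    }

kwargs = uri_to_kwargs('postgresql://postgres@localhost:5432/postgres')
# kwargs can then be splatted into a driver call such as psycopg2.connect(**kwargs)
```

This is why a single URI string is enough to describe a connection without passing individual host/port/user parameters around.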
1201
<NME> .codeclimate.yml <BEF> ADDFILE <MSG> Add codeclimate <DFF> @@ -0,0 +1,7 @@ +languages: + Python: true +exclude_paths: + - setup.py + - docs/* + - tests/* + - examples/*
7
Add codeclimate
0
.yml
codeclimate
bsd-3-clause
gmr/queries
1203
<NME> .gitignore <BEF> .DS_Store .idea *.pyc dist *.egg-info atlassian-ide-plugin.xml docs/_build .coverage env <MSG> Ignore the docs/_build dir in git <DFF> @@ -4,3 +4,4 @@ build dist *.egg-info atlassian-ide-plugin.xml +docs/_build \ No newline at end of file
1
Ignore the docs/_build dir in git
0
gitignore
bsd-3-clause
gmr/queries
1205
<NME> .travis.yml <BEF> sudo: false language: python dist: xenial env: global: - PATH=$HOME/.local/bin:$PATH - AWS_DEFAULT_REGION=us-east-1 - secure: "inURdx4ldkJqQXL1TyvKImC3EnL5TixC1DlNMBYi5ttygwAk+mSSSw8Yc7klB6D1m6q79xUlHRk06vbz23CsXTM4AClC5Emrk6XN2GlUKl5WI+z+A2skI59buEhLWe7e2KzhB/AVx2E3TfKa0oY7raM0UUnaOkpV1Cj+mHKPIT0=" - secure: "H32DV3713a6UUuEJujrG7SfUX4/5WrwQy/3DxeptC6L7YPlTYxHBdEsccTfN5z806EheIl4BdIoxoDtq7PU/tWQoG1Lp2ze60mpwrniHajhFnjk7zP6pHvkhGLr8flhSmAb6CQBreNFOHTLWBMGPfi7k1Q9Td9MHbRo/FsTxqsM=" - test - name: upload coverage - name: deploy if: tag IS present services: - postgres services: - postgresql install: - pip install awscli - pip install -r requires/testing.txt - python setup.py develop script: nosetests after_success: - aws s3 cp .coverage "s3://com-gavinroy-travis/queries/$TRAVIS_BUILD_NUMBER/.coverage.${TRAVIS_PYTHON_VERSION}" jobs: include: - python: 2.7 - python: 3.4 - python: 3.5 - python: 3.6 - stage: upload coverage if: repo IS gmr/queries services: [] python: 3.7 install: - pip install awscli coverage codecov script: - pip install awscli coverage codecov script: - mkdir coverage - aws s3 cp --recursive s3://com-gavinroy-travis/queries/$TRAVIS_BUILD_NUMBER/ coverage - cd coverage - coverage combine - coverage report after_success: codecov - stage: deploy if: repo IS gmr/queries python: 3.6 services: [] install: true services: [] install: true script: true after_success: true deploy: distributions: sdist bdist_wheel provider: pypi user: crad on: tags: true all_branches: true password: secure: UWQWui+QhAL1cz6oW/vqjEEp6/EPn1YOlItNJcWHNOO/WMMOlaTVYVUuXp+y+m52B+8PtYZZCTHwKCUKe97Grh291FLxgd0RJCawA40f4v1gmOFYLNKyZFBGfbC69/amxvGCcDvOPtpChHAlTIeokS5EQneVcAhXg2jXct0HTfI= <MSG> Looks like travis config has changed <DFF> @@ -12,7 +12,6 @@ stages: - test - name: upload coverage - name: deploy - if: tag IS present services: - postgres @@ -38,7 +37,7 @@ jobs: - stage: upload coverage if: repo IS gmr/queries services: [] - python: 3.7 + python: 3.6 
install: - pip install awscli coverage codecov script: @@ -52,7 +51,7 @@ jobs: - coverage report after_success: codecov - stage: deploy - if: repo IS gmr/queries + if: repo = gmr/queries python: 3.6 services: [] install: true
2
Looks like travis config has changed
3
.yml
travis
bsd-3-clause
gmr/queries
1207
<NME> session.rst <BEF> .. py:module:: queries.session Session API =========== The Session class allows for a unified (and simplified) view of interfacing with a PostgreSQL database server. Connection details are passed in as a PostgreSQL URI and connections are pooled by default, allowing for reuse of connections across modules in the Python runtime without having to pass around the object handle. While you can still access the raw psycopg2_ :py:class:`~psycopg2.extensions.connection` methods designed to simplify the interaction with PostgreSQL. For psycopg2_ functionality outside of what is exposed in Session, simply use the :py:meth:`queries.Session.connection` or py:meth:`queries.Session.cursor` properties to gain access to either object just as you would in a program using psycopg2_ directly. psycopg2_ directly. Example Usage ------------- The following example connects to the ``postgres`` database on ``localhost`` as the ``postgres`` user and then queries a table, iterating over the results: .. code:: python import queries with queries.Session('postgresql://postgres@localhost/postgres') as session: for row in session.query('SELECT * FROM table'): print row Class Documentation ------------------- .. autoclass:: queries.Session :members: .. _psycopg2: https://pypi.python.org/pypi/psycopg2 <MSG> Fix a missing : in docs <DFF> @@ -13,7 +13,7 @@ in how you use the :py:class:`queries.Session` object, there are convenience methods designed to simplify the interaction with PostgreSQL. For psycopg2_ functionality outside of what is exposed in Session, simply -use the :py:meth:`queries.Session.connection` or py:meth:`queries.Session.cursor` +use the :py:meth:`queries.Session.connection` or :py:meth:`queries.Session.cursor` properties to gain access to either object just as you would in a program using psycopg2_ directly.
1
Fix a missing : in docs
1
.rst
rst
bsd-3-clause
gmr/queries
1209
<NME> .travis.yml <BEF> sudo: false language: python dist: xenial env: global: - PATH=$HOME/.local/bin:$PATH - AWS_DEFAULT_REGION=us-east-1 - secure: "inURdx4ldkJqQXL1TyvKImC3EnL5TixC1DlNMBYi5ttygwAk+mSSSw8Yc7klB6D1m6q79xUlHRk06vbz23CsXTM4AClC5Emrk6XN2GlUKl5WI+z+A2skI59buEhLWe7e2KzhB/AVx2E3TfKa0oY7raM0UUnaOkpV1Cj+mHKPIT0=" - secure: "H32DV3713a6UUuEJujrG7SfUX4/5WrwQy/3DxeptC6L7YPlTYxHBdEsccTfN5z806EheIl4BdIoxoDtq7PU/tWQoG1Lp2ze60mpwrniHajhFnjk7zP6pHvkhGLr8flhSmAb6CQBreNFOHTLWBMGPfi7k1Q9Td9MHbRo/FsTxqsM=" stages: - test - name: coverage - name: deploy if: tag IS present services: - postgresql install: - pip install awscli - pip install -r requires/testing.txt - python setup.py develop script: nosetests after_success: - aws s3 cp .coverage "s3://com-gavinroy-travis/queries/$TRAVIS_BUILD_NUMBER/.coverage.${TRAVIS_PYTHON_VERSION}" jobs: include: - python: 2.7 - python: 3.4 - python: 3.5 - python: 3.6 - python: 3.7 - python: 3.8 - python: pypy - python: pypy3 - stage: coverage if: repo = gmr/queries services: [] python: 3.7 install: - pip install awscli coverage codecov script: - mkdir coverage - aws s3 cp --recursive s3://com-gavinroy-travis/queries/$TRAVIS_BUILD_NUMBER/ coverage - cd coverage - coverage combine - cd .. - mv coverage/.coverage . - coverage report after_success: codecov - stage: deploy if: repo = gmr/queries python: 3.6 services: [] install: true script: true after_success: true deploy: distributions: sdist bdist_wheel provider: pypi user: crad on: tags: true all_branches: true password: secure: UWQWui+QhAL1cz6oW/vqjEEp6/EPn1YOlItNJcWHNOO/WMMOlaTVYVUuXp+y+m52B+8PtYZZCTHwKCUKe97Grh291FLxgd0RJCawA40f4v1gmOFYLNKyZFBGfbC69/amxvGCcDvOPtpChHAlTIeokS5EQneVcAhXg2jXct0HTfI= <MSG> Remove pypy and pypy3 from travis tests <DFF> @@ -36,8 +36,6 @@ jobs: - python: 3.6 - python: 3.7 - python: 3.8 - - python: pypy - - python: pypy3 - stage: coverage if: repo = gmr/queries services: []
0
Remove pypy and pypy3 from travis tests
2
.yml
travis
bsd-3-clause
gmr/queries
1211
<NME> setup.py <BEF> import os import platform import setuptools # PYPY vs cpython if platform.python_implementation() == 'PyPy': install_requires = ['psycopg2cffi>=2.7.2,<3'] else: install_requires = ['psycopg2>=2.5.1,<3'] # Install tornado if generating docs on readthedocs if os.environ.get('READTHEDOCS', None) == 'True': install_requires.append('tornado') setup(name='queries', version='1.4.0', description="Simplified PostgreSQL client built upon Psycopg2", maintainer="Gavin M. Roy", maintainer_email="[email protected]", maintainer='Gavin M. Roy', maintainer_email='[email protected]', url='https://github.com/gmr/queries', install_requires=install_requires, extras_require={'tornado': 'tornado<6'}, license='BSD', package_data={'': ['LICENSE', 'README.rst']}, packages=['queries'], classifiers=[ 'Development Status :: 5 - Production/Stable', 'Intended Audience :: Developers', 'License :: OSI Approved :: BSD License', 'Operating System :: OS Independent', 'Programming Language :: Python :: 2', 'Programming Language :: Python :: 2.7', 'Programming Language :: Python :: 3', 'Programming Language :: Python :: 3.4', 'Programming Language :: Python :: 3.5', 'Programming Language :: Python :: 3.6', 'Programming Language :: Python :: 3.7', 'Programming Language :: Python :: 3.8', 'Programming Language :: Python :: Implementation :: CPython', 'Programming Language :: Python :: Implementation :: PyPy', 'Topic :: Database', 'Topic :: Software Development :: Libraries'], zip_safe=True) <MSG> Bump the version and history <DFF> @@ -14,7 +14,7 @@ if os.environ.get('READTHEDOCS', None) == 'True': install_requires.append('tornado') setup(name='queries', - version='1.4.0', + version='1.5.0', description="Simplified PostgreSQL client built upon Psycopg2", maintainer="Gavin M. Roy", maintainer_email="[email protected]",
1
Bump the version and history
1
.py
py
bsd-3-clause
gmr/queries
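The setup.py in this row picks the database driver at install time: PyPy cannot use the C-based psycopg2, so the cffi port is substituted. The selection logic shown in the record can be sketched as a standalone function:

```python
import platform

def driver_requirements():
    # Mirrors the dependency selection in setup.py: on PyPy the cffi port
    # of psycopg2 is required, on CPython the standard C extension is used.
    if platform.python_implementation() == 'PyPy':
        return ['psycopg2cffi>=2.7.2,<3']
    return ['psycopg2>=2.5.1,<3']
```

Paired with the `psycopg2cffi_` compatibility note in index.rst, this is what makes the same package installable on both interpreters.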
1213
<NME> session.py <BEF> """The Session class allows for a unified (and simplified) view of interfacing with a PostgreSQL database server. Connection details are passed in as a PostgreSQL URI and connections are pooled by default, allowing for reuse of connections across modules in the Python runtime without having to pass around the object handle. While you can still access the raw `psycopg2` connection and cursor objects to provide ultimate flexibility in how you use the queries.Session object, there are convenience methods designed to simplify the interaction with PostgreSQL. For `psycopg2` functionality outside of what is exposed in Session, simply use the Session.connection or Session.cursor properties to gain access to either object just as you would in a program using psycopg2 directly. Example usage: .. code:: python import queries with queries.Session('pgsql://postgres@localhost/postgres') as session: for row in session.Query('SELECT * FROM table'): print row """ import hashlib import logging import psycopg2 from psycopg2 import extensions, extras from queries import pool, results, utils LOGGER = logging.getLogger(__name__) DEFAULT_ENCODING = 'UTF8' DEFAULT_URI = 'postgresql://localhost:5432' class Session(object): """The Session class allows for a unified (and simplified) view of interfacing with a PostgreSQL database server. The Session object can act as a context manager, providing automated cleanup and simple, Pythonic way of interacting with the object. 
:param str uri: PostgreSQL connection URI :param psycopg2.extensions.cursor: The cursor type to use :param int pool_idle_ttl: How long idle pools keep connections open :param int pool_max_size: The maximum size of the pool to use """ _conn = None _cursor = None _tpc_id = None _uri = None # Connection status constants INTRANS = extensions.STATUS_IN_TRANSACTION PREPARED = extensions.STATUS_PREPARED READY = extensions.STATUS_READY SETUP = extensions.STATUS_SETUP # Transaction status constants TX_ACTIVE = extensions.TRANSACTION_STATUS_ACTIVE TX_IDLE = extensions.TRANSACTION_STATUS_IDLE TX_INERROR = extensions.TRANSACTION_STATUS_INERROR TX_INTRANS = extensions.TRANSACTION_STATUS_INTRANS TX_UNKNOWN = extensions.TRANSACTION_STATUS_UNKNOWN def __init__(self, uri=DEFAULT_URI, cursor_factory=extras.RealDictCursor, pool_idle_ttl=pool.DEFAULT_IDLE_TTL, pool_max_size=pool.DEFAULT_MAX_SIZE, autocommit=True): """Connect to a PostgreSQL server using the module wide connection and set the isolation level. :param str uri: PostgreSQL connection URI :param psycopg2.extensions.cursor: The cursor type to use :param int pool_idle_ttl: How long idle pools keep connections open :param int pool_max_size: The maximum size of the pool to use """ self._pool_manager = pool.PoolManager.instance() self._uri = uri # Ensure the pool exists in the pool manager if self.pid not in self._pool_manager: self._pool_manager.create(self.pid, pool_idle_ttl, pool_max_size) self._conn = self._connect() self._cursor_factory = cursor_factory self._cursor = self._get_cursor(self._conn) self._autocommit(autocommit) @property def backend_pid(self): """Return the backend process ID of the PostgreSQL server that this session is connected to. :rtype: int """ return self._conn.get_backend_pid() .. 
code:: python rows = list(session.callproc('now')) :param str name: The procedure name :param list args: The list of arguments to pass in :raises: queries.DataError :raises: queries.DatabaseError :raises: queries.IntegrityError :raises: queries.InternalError :raises: queries.InterfaceError :raises: queries.NotSupportedError :raises: queries.OperationalError :raises: queries.ProgrammingError """ try: self._cursor.callproc(name, args) except psycopg2.Error as err: self._incr_exceptions() raise err finally: self._incr_executions() return results.Results(self._cursor) def close(self): """Explicitly close the connection and remove it from the connection pool if pooling is enabled. If the connection is already closed :raises: psycopg2.InterfaceError """ if not self._conn: raise psycopg2.InterfaceError('Connection not open') LOGGER.info('Closing connection %r in %s', self._conn, self.pid) self._pool_manager.free(self.pid, self._conn) self._pool_manager.remove_connection(self.pid, self._conn) # Un-assign the connection and cursor self._conn, self._cursor = None, None @property def connection(self): """Return the current open connection to PostgreSQL. :rtype: psycopg2.extensions.connection """ return self._conn @property def cursor(self): """Return the current, active cursor for the open connection. :rtype: psycopg2.extensions.cursor """ return self._cursor @property def encoding(self): """Return the current client encoding value. :rtype: str """ return self._conn.encoding @property def notices(self): """Return a list of up to the last 50 server notices sent to the client. :rtype: list """ return self._conn.notices @property def pid(self): """Return the pool ID used for connection pooling. :rtype: str """ return hashlib.md5(':'.join([self.__class__.__name__, .. 
code:: python rows = list(session.query('SELECT * FROM foo')) :param str sql: The SQL statement :param dict parameters: A dictionary of query parameters :param str sql: The SQL statement :param dict parameters: A dictionary of query parameters :rtype: queries.Results :raises: queries.DataError :raises: queries.DatabaseError :raises: queries.IntegrityError :raises: queries.InternalError :raises: queries.InterfaceError :raises: queries.NotSupportedError :raises: queries.OperationalError :raises: queries.ProgrammingError """ try: self._cursor.execute(sql, parameters) except psycopg2.Error as err: self._incr_exceptions() raise err finally: self._incr_executions() return results.Results(self._cursor) def set_encoding(self, value=DEFAULT_ENCODING): """Set the client encoding for the session if the value specified is different than the current client encoding. :param str value: The encoding value to use """ if self._conn.encoding != value: self._conn.set_client_encoding(value) def __del__(self): """When deleting the context, ensure the instance is removed from caches, etc. """ self._cleanup() def __enter__(self): """For use as a context manager, return a handle to this object instance. :rtype: Session """ return self def __exit__(self, exc_type, exc_val, exc_tb): """When leaving the context, ensure the instance is removed from caches, etc. 
""" self._cleanup() def _autocommit(self, autocommit): """Set the isolation level automatically to commit or not after every query :param autocommit: Boolean (Default - True) """ self._conn.autocommit = autocommit def _cleanup(self): """Remove the connection from the stack, closing out the cursor""" if self._cursor: LOGGER.debug('Closing the cursor on %s', self.pid) self._cursor.close() self._cursor = None if self._conn: LOGGER.debug('Freeing %s in the pool', self.pid) try: pool.PoolManager.instance().free(self.pid, self._conn) except pool.ConnectionNotFoundError: pass self._conn = None def _connect(self): """Connect to PostgreSQL, either by reusing a connection from the pool if possible, or by creating the new connection. :rtype: psycopg2.extensions.connection :raises: pool.NoIdleConnectionsError """ # Attempt to get a cached connection from the connection pool try: connection = self._pool_manager.get(self.pid, self) LOGGER.debug("Re-using connection for %s", self.pid) except pool.NoIdleConnectionsError: if self._pool_manager.is_full(self.pid): raise # Create a new PostgreSQL connection kwargs = utils.uri_to_kwargs(self._uri) LOGGER.debug("Creating a new connection for %s", self.pid) connection = self._psycopg2_connect(kwargs) self._pool_manager.add(self.pid, connection) self._pool_manager.lock(self.pid, connection, self) # Added in because psycopg2ct connects and leaves the connection in # a weird state: consts.STATUS_DATESTYLE, returning from # Connection._setup without setting the state as const.STATUS_OK if utils.PYPY: connection.reset() # Register the custom data types self._register_unicode(connection) self._register_uuid(connection) return connection def _get_cursor(self, connection, name=None): """Return a cursor for the given cursor_factory. Specify a name to use server-side cursors. 
:param connection: The connection to create a cursor on :type connection: psycopg2.extensions.connection :param str name: A cursor name for a server side cursor :rtype: psycopg2.extensions.cursor """ cursor = connection.cursor(name=name, cursor_factory=self._cursor_factory) if name is not None: cursor.scrollable = True cursor.withhold = True return cursor def _incr_exceptions(self): """Increment the number of exceptions for the current connection.""" self._pool_manager.get_connection(self.pid, self._conn).exceptions += 1 def _incr_executions(self): """Increment the number of executions for the current connection.""" self._pool_manager.get_connection(self.pid, self._conn).executions += 1 def _psycopg2_connect(self, kwargs): """Return a psycopg2 connection for the specified kwargs. Extend for use in async session adapters. :param dict kwargs: Keyword connection args :rtype: psycopg2.extensions.connection """ return psycopg2.connect(**kwargs) @staticmethod def _register_unicode(connection): """Register the cursor to be able to receive Unicode string. :type connection: psycopg2.extensions.connection :param connection: Where to register things """ psycopg2.extensions.register_type(psycopg2.extensions.UNICODE, connection) psycopg2.extensions.register_type(psycopg2.extensions.UNICODEARRAY, connection) @staticmethod def _register_uuid(connection): """Register the UUID extension from the psycopg2.extra module :type connection: psycopg2.extensions.connection :param connection: Where to register things """ psycopg2.extras.register_uuid(conn_or_curs=connection) @property def _status(self): """Return the current connection status as an integer value. 
The status should match one of the following constants: - queries.Session.INTRANS: Connection established, in transaction - queries.Session.PREPARED: Prepared for second phase of transaction - queries.Session.READY: Connected, no active transaction :rtype: int """ if self._conn.status == psycopg2.extensions.STATUS_BEGIN: return self.READY return self._conn.status <MSG> Update the examples <DFF> @@ -106,7 +106,7 @@ class Session(object): .. code:: python - rows = list(session.callproc('now')) + rows = list(session.callproc('chr', [65])) :param str name: The procedure name :param list args: The list of arguments to pass in @@ -192,7 +192,7 @@ class Session(object): .. code:: python - rows = list(session.query('SELECT * FROM foo')) + rows = list(session.query('SELECT * FROM foo WHERE id=%s', [1])) :param str sql: The SQL statement :param dict parameters: A dictionary of query parameters
addition_count: 2
commit_subject: Update the examples
deletion_count: 2
file_extension: .py
lang: py
license: bsd-3-clause
repo_name: gmr/queries

id: 1214
<NME> session.py <BEF> """The Session class allows for a unified (and simplified) view of
interfacing with a PostgreSQL database server. Connection details are passed
in as a PostgreSQL URI and connections are pooled by default, allowing for
reuse of connections across modules in the Python runtime without having to
pass around the object handle.

While you can still access the raw `psycopg2` connection and cursor objects
to provide ultimate flexibility in how you use the queries.Session object,
there are convenience methods designed to simplify the interaction with
PostgreSQL. For `psycopg2` functionality outside of what is exposed in
Session, simply use the Session.connection or Session.cursor properties to
gain access to either object just as you would in a program using psycopg2
directly.

Example usage:

.. code:: python

    import queries

    with queries.Session('pgsql://postgres@localhost/postgres') as session:
        for row in session.query('SELECT * FROM table'):
            print(row)

"""
import hashlib
import logging

import psycopg2
from psycopg2 import extensions, extras

from queries import pool, results, utils

LOGGER = logging.getLogger(__name__)

DEFAULT_ENCODING = 'UTF8'
DEFAULT_URI = 'postgresql://localhost:5432'


class Session(object):
    """The Session class allows for a unified (and simplified) view of
    interfacing with a PostgreSQL database server. The Session object can
    act as a context manager, providing automated cleanup and a simple,
    Pythonic way of interacting with the object.
:param str uri: PostgreSQL connection URI :param psycopg2.extensions.cursor: The cursor type to use :param int pool_idle_ttl: How long idle pools keep connections open :param int pool_max_size: The maximum size of the pool to use """ _conn = None _cursor = None _tpc_id = None _uri = None # Connection status constants INTRANS = extensions.STATUS_IN_TRANSACTION PREPARED = extensions.STATUS_PREPARED READY = extensions.STATUS_READY SETUP = extensions.STATUS_SETUP # Transaction status constants TX_ACTIVE = extensions.TRANSACTION_STATUS_ACTIVE TX_IDLE = extensions.TRANSACTION_STATUS_IDLE TX_INERROR = extensions.TRANSACTION_STATUS_INERROR TX_INTRANS = extensions.TRANSACTION_STATUS_INTRANS TX_UNKNOWN = extensions.TRANSACTION_STATUS_UNKNOWN def __init__(self, uri=DEFAULT_URI, cursor_factory=extras.RealDictCursor, pool_idle_ttl=pool.DEFAULT_IDLE_TTL, pool_max_size=pool.DEFAULT_MAX_SIZE, autocommit=True): """Connect to a PostgreSQL server using the module wide connection and set the isolation level. :param str uri: PostgreSQL connection URI :param psycopg2.extensions.cursor: The cursor type to use :param int pool_idle_ttl: How long idle pools keep connections open :param int pool_max_size: The maximum size of the pool to use """ self._pool_manager = pool.PoolManager.instance() self._uri = uri # Ensure the pool exists in the pool manager if self.pid not in self._pool_manager: self._pool_manager.create(self.pid, pool_idle_ttl, pool_max_size) self._conn = self._connect() self._cursor_factory = cursor_factory self._cursor = self._get_cursor(self._conn) self._autocommit(autocommit) @property def backend_pid(self): """Return the backend process ID of the PostgreSQL server that this session is connected to. :rtype: int """ return self._conn.get_backend_pid() .. 
code:: python rows = list(session.callproc('now')) :param str name: The procedure name :param list args: The list of arguments to pass in :raises: queries.DataError :raises: queries.DatabaseError :raises: queries.IntegrityError :raises: queries.InternalError :raises: queries.InterfaceError :raises: queries.NotSupportedError :raises: queries.OperationalError :raises: queries.ProgrammingError """ try: self._cursor.callproc(name, args) except psycopg2.Error as err: self._incr_exceptions() raise err finally: self._incr_executions() return results.Results(self._cursor) def close(self): """Explicitly close the connection and remove it from the connection pool if pooling is enabled. If the connection is already closed :raises: psycopg2.InterfaceError """ if not self._conn: raise psycopg2.InterfaceError('Connection not open') LOGGER.info('Closing connection %r in %s', self._conn, self.pid) self._pool_manager.free(self.pid, self._conn) self._pool_manager.remove_connection(self.pid, self._conn) # Un-assign the connection and cursor self._conn, self._cursor = None, None @property def connection(self): """Return the current open connection to PostgreSQL. :rtype: psycopg2.extensions.connection """ return self._conn @property def cursor(self): """Return the current, active cursor for the open connection. :rtype: psycopg2.extensions.cursor """ return self._cursor @property def encoding(self): """Return the current client encoding value. :rtype: str """ return self._conn.encoding @property def notices(self): """Return a list of up to the last 50 server notices sent to the client. :rtype: list """ return self._conn.notices @property def pid(self): """Return the pool ID used for connection pooling. :rtype: str """ return hashlib.md5(':'.join([self.__class__.__name__, .. 
code:: python

    rows = list(session.query('SELECT * FROM foo'))

:param str sql: The SQL statement
:param dict parameters: A dictionary of query parameters
:rtype: queries.Results
:raises: queries.DataError
:raises: queries.DatabaseError
:raises: queries.IntegrityError
:raises: queries.InternalError
:raises: queries.InterfaceError
:raises: queries.NotSupportedError
:raises: queries.OperationalError
:raises: queries.ProgrammingError

"""
try:
    self._cursor.execute(sql, parameters)
except psycopg2.Error as err:
    self._incr_exceptions()
    raise err
finally:
    self._incr_executions()
return results.Results(self._cursor)

def set_encoding(self, value=DEFAULT_ENCODING):
    """Set the client encoding for the session if the value specified
    is different than the current client encoding.

    :param str value: The encoding value to use

    """
    if self._conn.encoding != value:
        self._conn.set_client_encoding(value)

def __del__(self):
    """When deleting the context, ensure the instance is removed from
    caches, etc.

    """
    self._cleanup()

def __enter__(self):
    """For use as a context manager, return a handle to this object
    instance.

    :rtype: Session

    """
    return self

def __exit__(self, exc_type, exc_val, exc_tb):
    """When leaving the context, ensure the instance is removed from
    caches, etc.
""" self._cleanup() def _autocommit(self, autocommit): """Set the isolation level automatically to commit or not after every query :param autocommit: Boolean (Default - True) """ self._conn.autocommit = autocommit def _cleanup(self): """Remove the connection from the stack, closing out the cursor""" if self._cursor: LOGGER.debug('Closing the cursor on %s', self.pid) self._cursor.close() self._cursor = None if self._conn: LOGGER.debug('Freeing %s in the pool', self.pid) try: pool.PoolManager.instance().free(self.pid, self._conn) except pool.ConnectionNotFoundError: pass self._conn = None def _connect(self): """Connect to PostgreSQL, either by reusing a connection from the pool if possible, or by creating the new connection. :rtype: psycopg2.extensions.connection :raises: pool.NoIdleConnectionsError """ # Attempt to get a cached connection from the connection pool try: connection = self._pool_manager.get(self.pid, self) LOGGER.debug("Re-using connection for %s", self.pid) except pool.NoIdleConnectionsError: if self._pool_manager.is_full(self.pid): raise # Create a new PostgreSQL connection kwargs = utils.uri_to_kwargs(self._uri) LOGGER.debug("Creating a new connection for %s", self.pid) connection = self._psycopg2_connect(kwargs) self._pool_manager.add(self.pid, connection) self._pool_manager.lock(self.pid, connection, self) # Added in because psycopg2ct connects and leaves the connection in # a weird state: consts.STATUS_DATESTYLE, returning from # Connection._setup without setting the state as const.STATUS_OK if utils.PYPY: connection.reset() # Register the custom data types self._register_unicode(connection) self._register_uuid(connection) return connection def _get_cursor(self, connection, name=None): """Return a cursor for the given cursor_factory. Specify a name to use server-side cursors. 
:param connection: The connection to create a cursor on :type connection: psycopg2.extensions.connection :param str name: A cursor name for a server side cursor :rtype: psycopg2.extensions.cursor """ cursor = connection.cursor(name=name, cursor_factory=self._cursor_factory) if name is not None: cursor.scrollable = True cursor.withhold = True return cursor def _incr_exceptions(self): """Increment the number of exceptions for the current connection.""" self._pool_manager.get_connection(self.pid, self._conn).exceptions += 1 def _incr_executions(self): """Increment the number of executions for the current connection.""" self._pool_manager.get_connection(self.pid, self._conn).executions += 1 def _psycopg2_connect(self, kwargs): """Return a psycopg2 connection for the specified kwargs. Extend for use in async session adapters. :param dict kwargs: Keyword connection args :rtype: psycopg2.extensions.connection """ return psycopg2.connect(**kwargs) @staticmethod def _register_unicode(connection): """Register the cursor to be able to receive Unicode string. :type connection: psycopg2.extensions.connection :param connection: Where to register things """ psycopg2.extensions.register_type(psycopg2.extensions.UNICODE, connection) psycopg2.extensions.register_type(psycopg2.extensions.UNICODEARRAY, connection) @staticmethod def _register_uuid(connection): """Register the UUID extension from the psycopg2.extra module :type connection: psycopg2.extensions.connection :param connection: Where to register things """ psycopg2.extras.register_uuid(conn_or_curs=connection) @property def _status(self): """Return the current connection status as an integer value. 
The status should match one of the following constants: - queries.Session.INTRANS: Connection established, in transaction - queries.Session.PREPARED: Prepared for second phase of transaction - queries.Session.READY: Connected, no active transaction :rtype: int """ if self._conn.status == psycopg2.extensions.STATUS_BEGIN: return self.READY return self._conn.status <MSG> Update the examples <DFF> @@ -106,7 +106,7 @@ class Session(object): .. code:: python - rows = list(session.callproc('now')) + rows = list(session.callproc('chr', [65])) :param str name: The procedure name :param list args: The list of arguments to pass in @@ -192,7 +192,7 @@ class Session(object): .. code:: python - rows = list(session.query('SELECT * FROM foo')) + rows = list(session.query('SELECT * FROM foo WHERE id=%s', [1])) :param str sql: The SQL statement :param dict parameters: A dictionary of query parameters
addition_count: 2
commit_subject: Update the examples
deletion_count: 2
file_extension: .py
lang: py
license: bsd-3-clause
repo_name: gmr/queries

id: 1215
<NME> pool.py <BEF> """ Connection Pooling """ import logging import threading import time import weakref import weakref LOGGER = logging.getLogger(__name__) DEFAULT_IDLE_TTL = 60 DEFAULT_MAX_SIZE = 1 class Connection(object): class Connection(object): """Contains the handle to the connection, the current state of the connection and methods for manipulating the state of the connection. """ _lock = threading.Lock() def __init__(self, handle): self.handle = handle self.used_by = None self.executions = 0 self.exceptions = 0 def close(self): """Close the connection :raises: ConnectionBusyError """ LOGGER.debug('Connection %s closing', self.id) if self.busy and not self.closed: raise ConnectionBusyError(self) with self._lock: if not self.handle.closed: try: self.handle.close() except psycopg2.InterfaceError as error: LOGGER.error('Error closing socket: %s', error) @property def closed(self): """Return if the psycopg2 connection is closed. :rtype: bool """ return self.handle.closed != 0 @property def busy(self): """Return if the connection is currently executing a query or is locked by a session that still exists. :rtype: bool """ if self.handle.isexecuting(): return True elif self.used_by is None: return False return not self.used_by() is None @property def executing(self): """Return if the connection is currently executing a query :rtype: bool """ return self.handle.isexecuting() def free(self): """Remove the lock on the connection if the connection is not active :raises: ConnectionBusyError """ LOGGER.debug('Connection %s freeing', self.id) if self.handle.isexecuting(): raise ConnectionBusyError(self) with self._lock: self.used_by = None LOGGER.debug('Connection %s freed', self.id) @property def id(self): """Return id of the psycopg2 connection object :rtype: int """ return id(self.handle) def lock(self, session): """Lock the connection, ensuring that it is not busy and storing a weakref for the session. 
:param queries.Session session: The session to lock the connection with :raises: ConnectionBusyError """ if self.busy: raise ConnectionBusyError(self) with self._lock: self.used_by = weakref.ref(session) LOGGER.debug('Connection %s locked', self.id) @property def locked(self): """Return if the connection is currently exclusively locked :rtype: bool """ return self.used_by is not None class Pool(object): """A connection pool for gaining access to and managing connections""" _lock = threading.Lock() idle_start = None idle_ttl = DEFAULT_IDLE_TTL max_size = DEFAULT_MAX_SIZE def __init__(self, pool_id, idle_ttl=DEFAULT_IDLE_TTL, max_size=DEFAULT_MAX_SIZE, time_method=None): self.connections = {} self._id = pool_id self.idle_ttl = idle_ttl self.max_size = max_size self.time_method = time_method or time.time def __contains__(self, connection): """Return True if the pool contains the connection""" return id(connection) in self.connections def __len__(self): """Return the number of connections in the pool""" return len(self.connections) def add(self, connection): """Add a new connection to the pool :param connection: The connection to add to the pool :type connection: psycopg2.extensions.connection :raises: PoolFullError """ if id(connection) in self.connections: raise ValueError('Connection already exists in pool') if len(self.connections) == self.max_size: LOGGER.warning('Race condition found when adding new connection') try: connection.close() except (psycopg2.Error, psycopg2.Warning) as error: LOGGER.error('Error closing the conn that cant be used: %s', error) raise PoolFullError(self) with self._lock: self.connections[id(connection)] = Connection(connection) LOGGER.debug('Pool %s added connection %s', self.id, id(connection)) @property def busy_connections(self): """Return a list of active/busy connections :rtype: list """ return [c for c in self.connections.values() if c.busy and not c.closed] def clean(self): """Clean the pool by removing any closed connections and 
if the pool's idle has exceeded its idle TTL, remove all connections. """ LOGGER.debug('Cleaning the pool') for connection in [self.connections[k] for k in self.connections if self.connections[k].closed]: LOGGER.debug('Removing %s', connection.id) self.remove(connection.handle) if self.idle_duration > self.idle_ttl: self.close() LOGGER.debug('Pool %s cleaned', self.id) def close(self): """Close the pool by closing and removing all of the connections""" for cid in list(self.connections.keys()): self.remove(self.connections[cid].handle) LOGGER.debug('Pool %s closed', self.id) @property def closed_connections(self): """Return a list of closed connections :rtype: list """ return [c for c in self.connections.values() if c.closed] def connection_handle(self, connection): """Return a connection object for the given psycopg2 connection :param connection: The connection to return a parent for :type connection: psycopg2.extensions.connection :rtype: Connection """ return self.connections[id(connection)] @property def executing_connections(self): """Return a list of connections actively executing queries :rtype: list """ return [c for c in self.connections.values() if c.executing] def free(self, connection): """Free the connection from use by the session that was using it. 
:param connection: The connection to free :type connection: psycopg2.extensions.connection :raises: ConnectionNotFoundError """ LOGGER.debug('Pool %s freeing connection %s', self.id, id(connection)) try: self.connection_handle(connection).free() except KeyError: raise ConnectionNotFoundError(self.id, id(connection)) if self.idle_connections == list(self.connections.values()): with self._lock: self.idle_start = self.time_method() LOGGER.debug('Pool %s freed connection %s', self.id, id(connection)) def get(self, session): """Return an idle connection and assign the session to the connection :param queries.Session session: The session to assign :rtype: psycopg2.extensions.connection :raises: NoIdleConnectionsError """ idle = self.idle_connections if idle: connection = idle.pop(0) connection.lock(session) if self.idle_start: with self._lock: self.idle_start = None return connection.handle raise NoIdleConnectionsError(self.id) @property def id(self): """Return the ID for this pool :rtype: str """ return self._id @property def idle_connections(self): """Return a list of idle connections :rtype: list """ return [c for c in self.connections.values() if not c.busy and not c.closed] @property def idle_duration(self): """Return the number of seconds that the pool has had no active connections. :rtype: float """ if self.idle_start is None: return 0 return self.time_method() - self.idle_start @property def is_full(self): """Return True if there are no more open slots for connections. 
:rtype: bool """ return len(self.connections) >= self.max_size def lock(self, connection, session): """Explicitly lock the specified connection :type connection: psycopg2.extensions.connection :param connection: The connection to lock :param queries.Session session: The session to hold the lock """ cid = id(connection) try: self.connection_handle(connection).lock(session) except KeyError: raise ConnectionNotFoundError(self.id, cid) else: if self.idle_start: with self._lock: self.idle_start = None LOGGER.debug('Pool %s locked connection %s', self.id, cid) @property def locked_connections(self): """Return a list of all locked connections :rtype: list """ return [c for c in self.connections.values() if c.locked] def remove(self, connection): """Remove the connection from the pool :param connection: The connection to remove :type connection: psycopg2.extensions.connection :raises: ConnectionNotFoundError :raises: ConnectionBusyError """ cid = id(connection) if cid not in self.connections: raise ConnectionNotFoundError(self.id, cid) self.connection_handle(connection).close() with self._lock: del self.connections[cid] LOGGER.debug('Pool %s removed connection %s', self.id, cid) def report(self): """Return a report about the pool state and configuration. :rtype: dict """ return { 'connections': { 'busy': len(self.busy_connections), 'closed': len(self.closed_connections), 'executing': len(self.executing_connections), 'idle': len(self.idle_connections), 'locked': len(self.busy_connections) }, 'exceptions': sum([c.exceptions for c in self.connections.values()]), 'executions': sum([c.executions for c in self.connections.values()]), 'full': self.is_full, 'idle': { 'duration': self.idle_duration, 'ttl': self.idle_ttl }, 'max_size': self.max_size } def shutdown(self): """Forcefully shutdown the entire pool, closing all non-executing connections. 
:raises: ConnectionBusyError """ with self._lock: for cid in list(self.connections.keys()): if self.connections[cid].executing: raise ConnectionBusyError(cid) if self.connections[cid].locked: self.connections[cid].free() self.connections[cid].close() del self.connections[cid] def set_idle_ttl(self, ttl): """Set the idle ttl :param int ttl: The TTL when idle """ with self._lock: self.idle_ttl = ttl def set_max_size(self, size): """Set the maximum number of connections :param int size: The maximum number of connections """ with self._lock: self.max_size = size class PoolManager(object): """The connection pool object implements behavior around connections and their use in queries.Session objects. We carry a pool id instead of the connection URI so that we will not be carrying the URI in memory, creating a possible security issue. """ _lock = threading.Lock() _pools = {} def __contains__(self, pid): """Returns True if the pool exists :param str pid: The pool id to check for :rtype: bool """ return pid in self.__class__._pools @classmethod def instance(cls): """Only allow a single PoolManager instance to exist, returning the handle for it. :rtype: PoolManager """ if not hasattr(cls, '_instance'): with cls._lock: cls._instance = cls() return cls._instance @classmethod def add(cls, pid, connection): """Add a new connection and session to a pool. :param str pid: The pool id :type connection: psycopg2.extensions.connection :param connection: The connection to add to the pool """ with cls._lock: cls._ensure_pool_exists(pid) cls._pools[pid].add(connection) @classmethod def clean(cls, pid): """Clean the specified pool, removing any closed connections or stale locks. 
:param str pid: The pool id to clean """ with cls._lock: try: cls._ensure_pool_exists(pid) except KeyError: LOGGER.debug('Pool clean invoked against missing pool %s', pid) return cls._pools[pid].clean() cls._maybe_remove_pool(pid) @classmethod def create(cls, pid, idle_ttl=DEFAULT_IDLE_TTL, max_size=DEFAULT_MAX_SIZE, time_method=None): """Create a new pool, with the ability to pass in values to override the default idle TTL and the default maximum size. A pool's idle TTL defines the amount of time that a pool can be open without any sessions before it is removed. A pool's max size defines the maximum number of connections that can be added to the pool to prevent unbounded open connections. :param str pid: The pool ID :param int idle_ttl: Time in seconds for the idle TTL :param int max_size: The maximum pool size :param callable time_method: Override the use of :py:meth:`time.time` method for time values. :raises: KeyError """ if pid in cls._pools: raise KeyError('Pool %s already exists' % pid) with cls._lock: LOGGER.debug("Creating Pool: %s (%i/%i)", pid, idle_ttl, max_size) cls._pools[pid] = Pool(pid, idle_ttl, max_size, time_method) @classmethod def free(cls, pid, connection): """Free a connection that was locked by a session :param str pid: The pool ID :param connection: The connection to remove :type connection: psycopg2.extensions.connection """ with cls._lock: LOGGER.debug('Freeing %s from pool %s', id(connection), pid) cls._ensure_pool_exists(pid) cls._pools[pid].free(connection) @classmethod def get(cls, pid, session): """Get an idle, unused connection from the pool. Once a connection has been retrieved, it will be marked as in-use until it is freed. 
:param str pid: The pool ID :param queries.Session session: The session to assign to the connection :rtype: psycopg2.extensions.connection """ with cls._lock: cls._ensure_pool_exists(pid) return cls._pools[pid].get(session) @classmethod def get_connection(cls, pid, connection): """Return the specified :class:`~queries.pool.Connection` from the pool. :param str pid: The pool ID :param connection: The connection to return for :type connection: psycopg2.extensions.connection :rtype: queries.pool.Connection """ with cls._lock: return cls._pools[pid].connection_handle(connection) @classmethod def has_connection(cls, pid, connection): """Check to see if a pool has the specified connection :param str pid: The pool ID :param connection: The connection to check for :type connection: psycopg2.extensions.connection :rtype: bool """ with cls._lock: cls._ensure_pool_exists(pid) return connection in cls._pools[pid] @classmethod def has_idle_connection(cls, pid): """Check to see if a pool has an idle connection :param str pid: The pool ID :rtype: bool """ with cls._lock: cls._ensure_pool_exists(pid) return bool(cls._pools[pid].idle_connections) @classmethod def is_full(cls, pid): """Return a bool indicating if the specified pool is full :param str pid: The pool id :rtype: bool """ with cls._lock: cls._ensure_pool_exists(pid) return cls._pools[pid].is_full @classmethod def lock(cls, pid, connection, session): """Explicitly lock the specified connection in the pool :param str pid: The pool id :type connection: psycopg2.extensions.connection :param connection: The connection to add to the pool :param queries.Session session: The session to hold the lock """ with cls._lock: cls._ensure_pool_exists(pid) cls._pools[pid].lock(connection, session) @classmethod def remove(cls, pid): """Remove a pool, closing all connections :param str pid: The pool ID """ with cls._lock: cls._ensure_pool_exists(pid) cls._pools[pid].close() del cls._pools[pid] @classmethod def remove_connection(cls, pid, 
connection): """Remove a connection from the pool, closing it if is open. :param str pid: The pool ID :param connection: The connection to remove :type connection: psycopg2.extensions.connection :raises: ConnectionNotFoundError """ cls._ensure_pool_exists(pid) cls._pools[pid].remove(connection) @classmethod def set_idle_ttl(cls, pid, ttl): """Set the idle TTL for a pool, after which it will be destroyed. :param str pid: The pool id :param int ttl: The TTL for an idle pool """ with cls._lock: cls._ensure_pool_exists(pid) cls._pools[pid].set_idle_ttl(ttl) @classmethod def set_max_size(cls, pid, size): """Set the maximum number of connections for the specified pool :param str pid: The pool to set the size for :param int size: The maximum number of connections """ with cls._lock: cls._ensure_pool_exists(pid) cls._pools[pid].set_max_size(size) @classmethod def shutdown(cls): """Close all connections on in all pools""" for pid in list(cls._pools.keys()): cls._pools[pid].shutdown() LOGGER.info('Shutdown complete, all pooled connections closed') @classmethod def size(cls, pid): """Return the number of connections in the pool :param str pid: The pool id :rtype int """ with cls._lock: cls._ensure_pool_exists(pid) return len(cls._pools[pid]) @classmethod def report(cls): """Return the state of the all of the registered pools. :rtype: dict """ return { 'timestamp': datetime.datetime.utcnow().isoformat(), 'process': os.getpid(), 'pools': dict([(i, p.report()) for i, p in cls._pools.items()]) } @classmethod def _ensure_pool_exists(cls, pid): """Raise an exception if the pool has yet to be created or has been removed. 
:param str pid: The pool ID to check for :raises: KeyError """ if pid not in cls._pools: raise KeyError('Pool %s has not been created' % pid) @classmethod def _maybe_remove_pool(cls, pid): """If the pool has no open connections, remove it :param str pid: The pool id to clean """ if not len(cls._pools[pid]): del cls._pools[pid] class QueriesException(Exception): """Base Exception for all other Queries exceptions""" pass class ConnectionException(QueriesException): def __init__(self, cid): self.cid = cid class PoolException(QueriesException): def __init__(self, pid): self.pid = pid class PoolConnectionException(PoolException): def __init__(self, pid, cid): self.pid = pid self.cid = cid class ActivePoolError(PoolException): """Raised when removing a pool that has active connections""" def __str__(self): return 'Pool %s has at least one active connection' % self.pid class ConnectionBusyError(ConnectionException): """Raised when trying to lock a connection that is already busy""" def __str__(self): return 'Connection %s is busy' % self.cid class ConnectionNotFoundError(PoolConnectionException): """Raised if a specific connection is not found in the pool""" def __str__(self): return 'Connection %s not found in pool %s' % (self.cid, self.pid) class NoIdleConnectionsError(PoolException): """Raised if a pool does not have any idle, open connections""" def __str__(self): return 'Pool %s has no idle connections' % self.pid class PoolFullError(PoolException): """Raised when adding a connection to a pool that has hit max-size""" def __str__(self): return 'Pool %s is at its maximum capacity' % self.pid <MSG> Add support for the QUERIES_MAX_POOL_SIZE env var <DFF> @@ -3,6 +3,7 @@ Connection Pooling """ import logging +import os import threading import time import weakref @@ -10,7 +11,7 @@ import weakref LOGGER = logging.getLogger(__name__) DEFAULT_IDLE_TTL = 60 -DEFAULT_MAX_SIZE = 1 +DEFAULT_MAX_SIZE = os.environ.get('QUERIES_MAX_POOL_SIZE', 1) class Connection(object):
addition_count: 2
commit_subject: Add support for the QUERIES_MAX_POOL_SIZE env var
deletion_count: 1
file_extension: .py
lang: py
license: bsd-3-clause
repo_name: gmr/queries

id: 1216
<NME> pool.py <BEF> """ Connection Pooling """ import logging import threading import time import weakref import weakref LOGGER = logging.getLogger(__name__) DEFAULT_IDLE_TTL = 60 DEFAULT_MAX_SIZE = 1 class Connection(object): class Connection(object): """Contains the handle to the connection, the current state of the connection and methods for manipulating the state of the connection. """ _lock = threading.Lock() def __init__(self, handle): self.handle = handle self.used_by = None self.executions = 0 self.exceptions = 0 def close(self): """Close the connection :raises: ConnectionBusyError """ LOGGER.debug('Connection %s closing', self.id) if self.busy and not self.closed: raise ConnectionBusyError(self) with self._lock: if not self.handle.closed: try: self.handle.close() except psycopg2.InterfaceError as error: LOGGER.error('Error closing socket: %s', error) @property def closed(self): """Return if the psycopg2 connection is closed. :rtype: bool """ return self.handle.closed != 0 @property def busy(self): """Return if the connection is currently executing a query or is locked by a session that still exists. :rtype: bool """ if self.handle.isexecuting(): return True elif self.used_by is None: return False return not self.used_by() is None @property def executing(self): """Return if the connection is currently executing a query :rtype: bool """ return self.handle.isexecuting() def free(self): """Remove the lock on the connection if the connection is not active :raises: ConnectionBusyError """ LOGGER.debug('Connection %s freeing', self.id) if self.handle.isexecuting(): raise ConnectionBusyError(self) with self._lock: self.used_by = None LOGGER.debug('Connection %s freed', self.id) @property def id(self): """Return id of the psycopg2 connection object :rtype: int """ return id(self.handle) def lock(self, session): """Lock the connection, ensuring that it is not busy and storing a weakref for the session. 
:param queries.Session session: The session to lock the connection with :raises: ConnectionBusyError """ if self.busy: raise ConnectionBusyError(self) with self._lock: self.used_by = weakref.ref(session) LOGGER.debug('Connection %s locked', self.id) @property def locked(self): """Return if the connection is currently exclusively locked :rtype: bool """ return self.used_by is not None class Pool(object): """A connection pool for gaining access to and managing connections""" _lock = threading.Lock() idle_start = None idle_ttl = DEFAULT_IDLE_TTL max_size = DEFAULT_MAX_SIZE def __init__(self, pool_id, idle_ttl=DEFAULT_IDLE_TTL, max_size=DEFAULT_MAX_SIZE, time_method=None): self.connections = {} self._id = pool_id self.idle_ttl = idle_ttl self.max_size = max_size self.time_method = time_method or time.time def __contains__(self, connection): """Return True if the pool contains the connection""" return id(connection) in self.connections def __len__(self): """Return the number of connections in the pool""" return len(self.connections) def add(self, connection): """Add a new connection to the pool :param connection: The connection to add to the pool :type connection: psycopg2.extensions.connection :raises: PoolFullError """ if id(connection) in self.connections: raise ValueError('Connection already exists in pool') if len(self.connections) == self.max_size: LOGGER.warning('Race condition found when adding new connection') try: connection.close() except (psycopg2.Error, psycopg2.Warning) as error: LOGGER.error('Error closing the conn that cant be used: %s', error) raise PoolFullError(self) with self._lock: self.connections[id(connection)] = Connection(connection) LOGGER.debug('Pool %s added connection %s', self.id, id(connection)) @property def busy_connections(self): """Return a list of active/busy connections :rtype: list """ return [c for c in self.connections.values() if c.busy and not c.closed] def clean(self): """Clean the pool by removing any closed connections and 
        if the pool's idle has exceeded its idle TTL, remove all connections.

        """
        LOGGER.debug('Cleaning the pool')
        for connection in [self.connections[k] for k in self.connections
                           if self.connections[k].closed]:
            LOGGER.debug('Removing %s', connection.id)
            self.remove(connection.handle)

        if self.idle_duration > self.idle_ttl:
            self.close()
        LOGGER.debug('Pool %s cleaned', self.id)

    def close(self):
        """Close the pool by closing and removing all of the connections"""
        for cid in list(self.connections.keys()):
            self.remove(self.connections[cid].handle)
        LOGGER.debug('Pool %s closed', self.id)

    @property
    def closed_connections(self):
        """Return a list of closed connections

        :rtype: list

        """
        return [c for c in self.connections.values() if c.closed]

    def connection_handle(self, connection):
        """Return a connection object for the given psycopg2 connection

        :param connection: The connection to return a parent for
        :type connection: psycopg2.extensions.connection
        :rtype: Connection

        """
        return self.connections[id(connection)]

    @property
    def executing_connections(self):
        """Return a list of connections actively executing queries

        :rtype: list

        """
        return [c for c in self.connections.values() if c.executing]

    def free(self, connection):
        """Free the connection from use by the session that was using it.

        :param connection: The connection to free
        :type connection: psycopg2.extensions.connection
        :raises: ConnectionNotFoundError

        """
        LOGGER.debug('Pool %s freeing connection %s', self.id, id(connection))
        try:
            self.connection_handle(connection).free()
        except KeyError:
            raise ConnectionNotFoundError(self.id, id(connection))

        if self.idle_connections == list(self.connections.values()):
            with self._lock:
                self.idle_start = self.time_method()
        LOGGER.debug('Pool %s freed connection %s', self.id, id(connection))

    def get(self, session):
        """Return an idle connection and assign the session to the connection

        :param queries.Session session: The session to assign
        :rtype: psycopg2.extensions.connection
        :raises: NoIdleConnectionsError

        """
        idle = self.idle_connections
        if idle:
            connection = idle.pop(0)
            connection.lock(session)
            if self.idle_start:
                with self._lock:
                    self.idle_start = None
            return connection.handle
        raise NoIdleConnectionsError(self.id)

    @property
    def id(self):
        """Return the ID for this pool

        :rtype: str

        """
        return self._id

    @property
    def idle_connections(self):
        """Return a list of idle connections

        :rtype: list

        """
        return [c for c in self.connections.values()
                if not c.busy and not c.closed]

    @property
    def idle_duration(self):
        """Return the number of seconds that the pool has had no active
        connections.

        :rtype: float

        """
        if self.idle_start is None:
            return 0
        return self.time_method() - self.idle_start

    @property
    def is_full(self):
        """Return True if there are no more open slots for connections.

        :rtype: bool

        """
        return len(self.connections) >= self.max_size

    def lock(self, connection, session):
        """Explicitly lock the specified connection

        :type connection: psycopg2.extensions.connection
        :param connection: The connection to lock
        :param queries.Session session: The session to hold the lock

        """
        cid = id(connection)
        try:
            self.connection_handle(connection).lock(session)
        except KeyError:
            raise ConnectionNotFoundError(self.id, cid)
        else:
            if self.idle_start:
                with self._lock:
                    self.idle_start = None
        LOGGER.debug('Pool %s locked connection %s', self.id, cid)

    @property
    def locked_connections(self):
        """Return a list of all locked connections

        :rtype: list

        """
        return [c for c in self.connections.values() if c.locked]

    def remove(self, connection):
        """Remove the connection from the pool

        :param connection: The connection to remove
        :type connection: psycopg2.extensions.connection
        :raises: ConnectionNotFoundError
        :raises: ConnectionBusyError

        """
        cid = id(connection)
        if cid not in self.connections:
            raise ConnectionNotFoundError(self.id, cid)
        self.connection_handle(connection).close()
        with self._lock:
            del self.connections[cid]
        LOGGER.debug('Pool %s removed connection %s', self.id, cid)

    def report(self):
        """Return a report about the pool state and configuration.

        :rtype: dict

        """
        return {
            'connections': {
                'busy': len(self.busy_connections),
                'closed': len(self.closed_connections),
                'executing': len(self.executing_connections),
                'idle': len(self.idle_connections),
                'locked': len(self.busy_connections)
            },
            'exceptions': sum([c.exceptions
                               for c in self.connections.values()]),
            'executions': sum([c.executions
                               for c in self.connections.values()]),
            'full': self.is_full,
            'idle': {
                'duration': self.idle_duration,
                'ttl': self.idle_ttl
            },
            'max_size': self.max_size
        }

    def shutdown(self):
        """Forcefully shutdown the entire pool, closing all non-executing
        connections.

        :raises: ConnectionBusyError

        """
        with self._lock:
            for cid in list(self.connections.keys()):
                if self.connections[cid].executing:
                    raise ConnectionBusyError(cid)
                if self.connections[cid].locked:
                    self.connections[cid].free()
                self.connections[cid].close()
                del self.connections[cid]

    def set_idle_ttl(self, ttl):
        """Set the idle ttl

        :param int ttl: The TTL when idle

        """
        with self._lock:
            self.idle_ttl = ttl

    def set_max_size(self, size):
        """Set the maximum number of connections

        :param int size: The maximum number of connections

        """
        with self._lock:
            self.max_size = size


class PoolManager(object):
    """The connection pool object implements behavior around connections and
    their use in queries.Session objects.

    We carry a pool id instead of the connection URI so that we will not be
    carrying the URI in memory, creating a possible security issue.

    """
    _lock = threading.Lock()
    _pools = {}

    def __contains__(self, pid):
        """Returns True if the pool exists

        :param str pid: The pool id to check for
        :rtype: bool

        """
        return pid in self.__class__._pools

    @classmethod
    def instance(cls):
        """Only allow a single PoolManager instance to exist, returning the
        handle for it.

        :rtype: PoolManager

        """
        if not hasattr(cls, '_instance'):
            with cls._lock:
                cls._instance = cls()
        return cls._instance

    @classmethod
    def add(cls, pid, connection):
        """Add a new connection and session to a pool.

        :param str pid: The pool id
        :type connection: psycopg2.extensions.connection
        :param connection: The connection to add to the pool

        """
        with cls._lock:
            cls._ensure_pool_exists(pid)
            cls._pools[pid].add(connection)

    @classmethod
    def clean(cls, pid):
        """Clean the specified pool, removing any closed connections or
        stale locks.

        :param str pid: The pool id to clean

        """
        with cls._lock:
            try:
                cls._ensure_pool_exists(pid)
            except KeyError:
                LOGGER.debug('Pool clean invoked against missing pool %s',
                             pid)
                return
            cls._pools[pid].clean()
            cls._maybe_remove_pool(pid)

    @classmethod
    def create(cls, pid, idle_ttl=DEFAULT_IDLE_TTL, max_size=DEFAULT_MAX_SIZE,
               time_method=None):
        """Create a new pool, with the ability to pass in values to override
        the default idle TTL and the default maximum size.

        A pool's idle TTL defines the amount of time that a pool can be open
        without any sessions before it is removed.

        A pool's max size defines the maximum number of connections that can
        be added to the pool to prevent unbounded open connections.

        :param str pid: The pool ID
        :param int idle_ttl: Time in seconds for the idle TTL
        :param int max_size: The maximum pool size
        :param callable time_method: Override the use of :py:meth:`time.time`
            method for time values.
        :raises: KeyError

        """
        if pid in cls._pools:
            raise KeyError('Pool %s already exists' % pid)
        with cls._lock:
            LOGGER.debug("Creating Pool: %s (%i/%i)", pid, idle_ttl, max_size)
            cls._pools[pid] = Pool(pid, idle_ttl, max_size, time_method)

    @classmethod
    def free(cls, pid, connection):
        """Free a connection that was locked by a session

        :param str pid: The pool ID
        :param connection: The connection to remove
        :type connection: psycopg2.extensions.connection

        """
        with cls._lock:
            LOGGER.debug('Freeing %s from pool %s', id(connection), pid)
            cls._ensure_pool_exists(pid)
            cls._pools[pid].free(connection)

    @classmethod
    def get(cls, pid, session):
        """Get an idle, unused connection from the pool. Once a connection has
        been retrieved, it will be marked as in-use until it is freed.

        :param str pid: The pool ID
        :param queries.Session session: The session to assign to the connection
        :rtype: psycopg2.extensions.connection

        """
        with cls._lock:
            cls._ensure_pool_exists(pid)
            return cls._pools[pid].get(session)

    @classmethod
    def get_connection(cls, pid, connection):
        """Return the specified :class:`~queries.pool.Connection` from the
        pool.

        :param str pid: The pool ID
        :param connection: The connection to return for
        :type connection: psycopg2.extensions.connection
        :rtype: queries.pool.Connection

        """
        with cls._lock:
            return cls._pools[pid].connection_handle(connection)

    @classmethod
    def has_connection(cls, pid, connection):
        """Check to see if a pool has the specified connection

        :param str pid: The pool ID
        :param connection: The connection to check for
        :type connection: psycopg2.extensions.connection
        :rtype: bool

        """
        with cls._lock:
            cls._ensure_pool_exists(pid)
            return connection in cls._pools[pid]

    @classmethod
    def has_idle_connection(cls, pid):
        """Check to see if a pool has an idle connection

        :param str pid: The pool ID
        :rtype: bool

        """
        with cls._lock:
            cls._ensure_pool_exists(pid)
            return bool(cls._pools[pid].idle_connections)

    @classmethod
    def is_full(cls, pid):
        """Return a bool indicating if the specified pool is full

        :param str pid: The pool id
        :rtype: bool

        """
        with cls._lock:
            cls._ensure_pool_exists(pid)
            return cls._pools[pid].is_full

    @classmethod
    def lock(cls, pid, connection, session):
        """Explicitly lock the specified connection in the pool

        :param str pid: The pool id
        :type connection: psycopg2.extensions.connection
        :param connection: The connection to add to the pool
        :param queries.Session session: The session to hold the lock

        """
        with cls._lock:
            cls._ensure_pool_exists(pid)
            cls._pools[pid].lock(connection, session)

    @classmethod
    def remove(cls, pid):
        """Remove a pool, closing all connections

        :param str pid: The pool ID

        """
        with cls._lock:
            cls._ensure_pool_exists(pid)
            cls._pools[pid].close()
            del cls._pools[pid]

    @classmethod
    def remove_connection(cls, pid,
                          connection):
        """Remove a connection from the pool, closing it if it is open.

        :param str pid: The pool ID
        :param connection: The connection to remove
        :type connection: psycopg2.extensions.connection
        :raises: ConnectionNotFoundError

        """
        cls._ensure_pool_exists(pid)
        cls._pools[pid].remove(connection)

    @classmethod
    def set_idle_ttl(cls, pid, ttl):
        """Set the idle TTL for a pool, after which it will be destroyed.

        :param str pid: The pool id
        :param int ttl: The TTL for an idle pool

        """
        with cls._lock:
            cls._ensure_pool_exists(pid)
            cls._pools[pid].set_idle_ttl(ttl)

    @classmethod
    def set_max_size(cls, pid, size):
        """Set the maximum number of connections for the specified pool

        :param str pid: The pool to set the size for
        :param int size: The maximum number of connections

        """
        with cls._lock:
            cls._ensure_pool_exists(pid)
            cls._pools[pid].set_max_size(size)

    @classmethod
    def shutdown(cls):
        """Close all connections in all pools"""
        for pid in list(cls._pools.keys()):
            cls._pools[pid].shutdown()
        LOGGER.info('Shutdown complete, all pooled connections closed')

    @classmethod
    def size(cls, pid):
        """Return the number of connections in the pool

        :param str pid: The pool id
        :rtype: int

        """
        with cls._lock:
            cls._ensure_pool_exists(pid)
            return len(cls._pools[pid])

    @classmethod
    def report(cls):
        """Return the state of all of the registered pools.

        :rtype: dict

        """
        return {
            'timestamp': datetime.datetime.utcnow().isoformat(),
            'process': os.getpid(),
            'pools': dict([(i, p.report()) for i, p in cls._pools.items()])
        }

    @classmethod
    def _ensure_pool_exists(cls, pid):
        """Raise an exception if the pool has yet to be created or has been
        removed.

        :param str pid: The pool ID to check for
        :raises: KeyError

        """
        if pid not in cls._pools:
            raise KeyError('Pool %s has not been created' % pid)

    @classmethod
    def _maybe_remove_pool(cls, pid):
        """If the pool has no open connections, remove it

        :param str pid: The pool id to clean

        """
        if not len(cls._pools[pid]):
            del cls._pools[pid]


class QueriesException(Exception):
    """Base Exception for all other Queries exceptions"""
    pass


class ConnectionException(QueriesException):
    def __init__(self, cid):
        self.cid = cid


class PoolException(QueriesException):
    def __init__(self, pid):
        self.pid = pid


class PoolConnectionException(PoolException):
    def __init__(self, pid, cid):
        self.pid = pid
        self.cid = cid


class ActivePoolError(PoolException):
    """Raised when removing a pool that has active connections"""
    def __str__(self):
        return 'Pool %s has at least one active connection' % self.pid


class ConnectionBusyError(ConnectionException):
    """Raised when trying to lock a connection that is already busy"""
    def __str__(self):
        return 'Connection %s is busy' % self.cid


class ConnectionNotFoundError(PoolConnectionException):
    """Raised if a specific connection is not found in the pool"""
    def __str__(self):
        return 'Connection %s not found in pool %s' % (self.cid, self.pid)


class NoIdleConnectionsError(PoolException):
    """Raised if a pool does not have any idle, open connections"""
    def __str__(self):
        return 'Pool %s has no idle connections' % self.pid


class PoolFullError(PoolException):
    """Raised when adding a connection to a pool that has hit max-size"""
    def __str__(self):
        return 'Pool %s is at its maximum capacity' % self.pid

<MSG> Add support for the QUERIES_MAX_POOL_SIZE env var <DFF>
@@ -3,6 +3,7 @@ Connection Pooling
 """
 import logging
+import os
 import threading
 import time
 import weakref
@@ -10,7 +11,7 @@ import weakref
 LOGGER = logging.getLogger(__name__)
 
 DEFAULT_IDLE_TTL = 60
-DEFAULT_MAX_SIZE = 1
+DEFAULT_MAX_SIZE = os.environ.get('QUERIES_MAX_POOL_SIZE', 1)
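One subtlety in the diff above: `os.environ.get` returns a string whenever the variable is actually set, so `DEFAULT_MAX_SIZE` ends up as `'25'` rather than `25` under the environment variable. Code that compares the value numerically usually coerces it. A small sketch of the coercing variant (the helper function here is illustrative, not part of the commit; only the variable name is taken from the diff):

```python
import os


def max_pool_size(default=1):
    """Read QUERIES_MAX_POOL_SIZE, coercing to int since environment
    variables are always strings when present."""
    return int(os.environ.get('QUERIES_MAX_POOL_SIZE', default))


os.environ['QUERIES_MAX_POOL_SIZE'] = '25'
print(max_pool_size())  # 25, as an int rather than the raw string '25'
```

Without the `int(...)` call, a later comparison such as `len(self.connections) == self.max_size` would silently never match the string value.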
2
Add support for the QUERIES_MAX_POOL_SIZE env var
1
.py
py
bsd-3-clause
gmr/queries
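The `Connection.lock` method in the pool.py entry above stores only `weakref.ref(session)`, so a connection stops reporting busy once the owning session is garbage-collected. A minimal stdlib sketch of that idea (the `Holder`/`Owner` names are illustrative, not from the library; immediate collection after `del` assumes CPython's reference counting):

```python
import weakref


class Holder(object):
    """Tracks an owner without keeping it alive, like Connection.used_by."""

    def __init__(self):
        self.used_by = None

    def lock(self, owner):
        # Store a weak reference so the holder never prolongs the owner's life
        self.used_by = weakref.ref(owner)

    @property
    def busy(self):
        # Busy only while the weakly-referenced owner still exists
        return self.used_by is not None and self.used_by() is not None


class Owner(object):
    pass


holder = Holder()
owner = Owner()
holder.lock(owner)
print(holder.busy)  # True while the owner is alive

del owner  # drop the only strong reference
print(holder.busy)  # False on CPython, which collects immediately
```

This is why the pool never needs an explicit unlock when a session is simply dropped: the weakref dereferences to `None` on its own.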
1217
<NME> session.py <BEF> """PostgreSQL Class module

"""
import logging

import psycopg2
from psycopg2 import extensions
from psycopg2 import extras

from queries import pool, PYPY

LOGGER = logging.getLogger(__name__)


class Session(object):
    """Core queries

    Uses a module level cache of connections to reduce overhead.

    """
    _from_pool = False

    # Connection status constants
    BEGIN = extensions.STATUS_BEGIN
    INTRANS = extensions.STATUS_IN_TRANSACTION
    PREPARED = extensions.STATUS_PREPARED
    READY = extensions.STATUS_READY

    # Transaction status constants
    TX_ACTIVE = extensions.TRANSACTION_STATUS_ACTIVE
    TX_IDLE = extensions.TRANSACTION_STATUS_IDLE
    TX_INERROR = extensions.TRANSACTION_STATUS_INERROR
    TX_INTRANS = extensions.TRANSACTION_STATUS_INTRANS
    TX_UNKNOWN = extensions.TRANSACTION_STATUS_UNKNOWN

    def __init__(self, uri,
                 cursor_factory=extras.RealDictCursor,
                 use_pool=True):
        """Connect to a PostgreSQL server using the module wide connection and
        set the isolation level.

        :param str uri: PostgreSQL connection URI
        :param psycopg2.extensions.cursor: The cursor type to use
        :param int pool_idle_ttl: How long idle pools keep connections open
        :param int pool_max_size: The maximum size of the pool to use

        """
        self._uri = uri
        self._use_pool = use_pool
        self._conn = self._connect()
        self._cursor = self._get_cursor(cursor_factory)
        self._autocommit()

        # Don't re-register unicode or uuid
        if not use_pool or not self._from_pool:
            self._register_unicode()
            self._register_uuid()

    def __del__(self):
        """When deleting the context, ensure the instance is removed from
        caches, etc.

        """
        self._cleanup()

    def __enter__(self):
        """For use as a context manager, return a handle to this object

        :rtype: Session

        """
        return self

    def __exit__(self, exc_type, exc_val, exc_tb):
        """When leaving the context, ensure the instance is removed from
        caches, etc.
""" self._cleanup() @property def backend_pid(self): return self._conn.get_backend_pid() @property def connection(self): """Returns the psycopg2 PostgreSQL connection instance raise psycopg2.InterfaceError('Connection not open') LOGGER.info('Closing connection %r in %s', self._conn, self.pid) self._pool_manager.free(self.pid, self._conn) self._pool_manager.remove_connection(self.pid, self._conn) # Un-assign the connection and cursor self._conn, self._cursor = None, None @property def connection(self): """Return the current open connection to PostgreSQL. """ return self._cursor def cancel(self): self._conn.cancel() def commit(self): self._conn.commit() def listen(self, channel): pass @property def encoding(self): return self._conn.encoding def set_encoding(self, value='UTF-8'): self._conn.set_client_encoding(value) def rollback(self): self._conn.rollback() def create_transaction(self): self._conn.autocommit = False self._cursor.execute('') @property def notices(self): return self._conn.notices @property def status(self): return self._conn.status @property def tx_status(self): """Return the transaction status for the current connection. connection = self._pool_manager.get(self.pid, self) LOGGER.debug("Re-using connection for %s", self.pid) except pool.NoIdleConnectionsError: if self._pool_manager.is_full(self.pid): raise # Create a new PostgreSQL connection kwargs = utils.uri_to_kwargs(self._uri) LOGGER.debug("Creating a new connection for %s", self.pid) connection = self._psycopg2_connect(kwargs) """ return self._conn.get_transaction_status() def close(self): """Explicitly close the connection and remove it from the connection cache. 

        :raises: AssertionError

        """
        if not self._conn:
            raise AssertionError('Connection not open')
        self._conn.close()
        if self._use_pool:
            pool.remove_connection(self._uri)
        self._conn = None
        self._cursor = None

    def _autocommit(self):
        """Set the isolation level automatically to commit after every query"""
        self._conn.autocommit = True

    def _connect(self):
        """Connect to PostgreSQL, either returning an existing connection
        from the pool or creating a new connection, adding it to the pool.

        :rtype: psycopg2.extensions.connection

        """
        # Attempt to get a cached connection from the connection pool
        if self._use_pool:
            connection = pool.get_connection(self._uri)
            if connection:
                self._from_pool = True
                return connection

        # Create a new PostgreSQL connection
        connection = psycopg2.connect(self._uri)

        # Add it to the pool, if pooling is enabled
        if self._use_pool:
            pool.add_connection(self._uri, connection)

        if PYPY:
            connection.reset()

        return connection

    def _get_cursor(self, cursor_factory):
        """Return a cursor for the given cursor_factory.

        :param psycopg2.extensions.cursor cursor_factory: The cursor type
        :rtype: psycopg2.extensions.cursor

        """
        return self._conn.cursor(cursor_factory=cursor_factory)

    def _register_unicode(self):
        """Register the cursor to be able to receive Unicode string.

        :param psycopg2.cursor: The cursor to add unicode support to

        """
        psycopg2.extensions.register_type(psycopg2.extensions.UNICODE,
                                          self._cursor)
        psycopg2.extensions.register_type(psycopg2.extensions.UNICODEARRAY,
                                          self._cursor)

    def _register_uuid(self):
        """Register the UUID extension from psycopg2"""
        psycopg2.extras.register_uuid(self._conn)
<MSG> Updated documentation and class functions <DFF>
@@ -1,29 +1,61 @@
-"""PostgreSQL Class module
+"""The Session class allows for a unified (and simplified) view of
+interfacing with a PostgreSQL database server.
+Connection details are passed in as a PostgreSQL URI and connections are pooled
+by default, allowing for reuse of connections across modules in the Python
+runtime without having to pass around the object handle.
+
+While you can still access the raw `psycopg2` connection and cursor objects to
+provide ultimate flexibility in how you use the queries.Session object, there
+are convenience methods designed to simplify the interaction with PostgreSQL.
+For `psycopg2` functionality outside of what is exposed in Session, simply
+use the Session.connection or Session.cursor properties to gain access to
+either object just as you would in a program using psycopg2 directly.
 
 """
+import contextlib
 import logging
 
 import psycopg2
 from psycopg2 import extensions
 from psycopg2 import extras
 
-from queries import pool, PYPY
+from queries import pool
+from queries import utils
 
 LOGGER = logging.getLogger(__name__)
 
+from queries import DEFAULT_URI
+from queries import PYPY
+
+DEFAULT_ENCODING = 'UTF8'
+
+
 class Session(object):
-    """Core queries
+    """The Session class allows for a unified (and simplified) view of
+    interfacing with a PostgreSQL database server. The Session object can
+    act as a context manager, providing automated cleanup and simple, pythoic
+    way of interacting with the object:
+
+    .. code:: python
+
+        import queries
+
+        with queries.Session('pgsql://postgres@localhost/postgres') as session:
+            for row in session.Query('SELECT * FROM table'):
+                print row
 
-    Uses a module level cache of connections to reduce overhead.
+    :param str uri: PostgreSQL connection URI
+    :param psycopg2.cursor: The cursor type to use
+    :param bool use_pool: Use the connection pool
 
     """
     _from_pool = False
+    _tpc_id = None
 
     # Connection status constants
-    BEGIN = extensions.STATUS_BEGIN
     INTRANS = extensions.STATUS_IN_TRANSACTION
     PREPARED = extensions.STATUS_PREPARED
     READY = extensions.STATUS_READY
@@ -35,7 +67,7 @@ class Session(object):
     TX_INTRANS = extensions.TRANSACTION_STATUS_INTRANS
     TX_UNKNOWN = extensions.TRANSACTION_STATUS_UNKNOWN
 
-    def __init__(self, uri,
+    def __init__(self, uri=DEFAULT_URI,
                  cursor_factory=extras.RealDictCursor,
                  use_pool=True):
         """Connect to a PostgreSQL server using the module wide connection and
@@ -52,17 +84,13 @@ class Session(object):
         self._cursor = self._get_cursor(cursor_factory)
         self._autocommit()
 
-        # Don't re-register unicode or uuid
-        if not use_pool or not self._from_pool:
-            self._register_unicode()
-            self._register_uuid()
-
     def __del__(self):
         """When deleting the context, ensure the instance is removed from
         caches, etc.
 
         """
-        self._cleanup()
+        #self._cleanup()
+        pass
 
     def __enter__(self):
         """For use as a context manager, return a handle to this object
@@ -78,12 +106,34 @@ class Session(object):
         caches, etc.
 
         """
-        self._cleanup()
+        #self._cleanup()
+        pass
 
     @property
     def backend_pid(self):
+        """Return the backend process ID of the PostgreSQL server that this
+        session is connected to.
+
+        :rtype: int
+
+        """
         return self._conn.get_backend_pid()
 
+    def close(self):
+        """Explicitly close the connection and remove it from the connection
+        pool if pooling is enabled.
+
+        :raises: AssertionError
+
+        """
+        if not self._conn:
+            raise AssertionError('Connection not open')
+        self._conn.close()
+        if self._use_pool:
+            pool.remove_connection(self._uri)
+        self._conn = None
+        self._cursor = None
+
     @property
     def connection(self):
         """Returns the psycopg2 PostgreSQL connection instance
@@ -102,37 +152,135 @@ class Session(object):
         """
         return self._cursor
 
-    def cancel(self):
-        self._conn.cancel()
-
-    def commit(self):
-        self._conn.commit()
-
-    def listen(self, channel):
-        pass
-
     @property
     def encoding(self):
-        return self._conn.encoding
-
-    def set_encoding(self, value='UTF-8'):
-        self._conn.set_client_encoding(value)
+        """The current client encoding value.
 
-    def rollback(self):
-        self._conn.rollback()
+        :rtype: str
 
-    def create_transaction(self):
-        self._conn.autocommit = False
-        self._cursor.execute('')
+        """
+        return self._conn.encoding
 
     @property
     def notices(self):
+        """A list of up to the last 50 server notices sent to the client.
+
+        :rtype: list
+
+        """
         return self._conn.notices
 
+    def set_encoding(self, value=DEFAULT_ENCODING):
+        """Set the client encoding for the session if the value specified
+        is different than the current client encoding.
+
+        :param str value: The encoding value to use
+
+        """
+        if self._conn.encoding != value:
+            self._conn.set_client_encoding(value)
+
     @property
     def status(self):
+ + The status should match one of the following constants: + + - queries.Session.INTRANS: Connection established, in transaction + - queries.Session.PREPARED: Prepared for second phase of transaction + - queries.Session.READY: Connected, no active transaction + + :rtype: int + + """ + if self._conn.status == psycopg2.extensions.STATUS_BEGIN: + return self.READY return self._conn.status + # Querying, executing, copying, etc + + def callproc(self, name, parameters=None): + """Call a stored procedure on the server and return an iterator of the + result set for easy access to the data. + + .. code:: python + + for row in session.callproc('now'): + print row + + :param str name: The procedure name + :param list parameters: The list of parameters to pass in + :return: iterator + + """ + self._cursor.callproc(name, parameters) + for record in self._cursor: + yield record + + def callproc_all(self, name, parameters=None): + """Call a stored procedure on the server returning all of the rows + returned by the server. + + :rtype: list + + """ + self._cursor.callproc(name, parameters) + return self._cursor.fetchall() + + def query(self, sql, parameters=None): + """A generator to issue a query on the server, mogrifying the + parameters against the sql statement and returning the results as an + iterator. + + .. code:: python + + for row in session.query('SELECT * FROM foo WHERE bar=%(bar)s', + {'bar': 'baz'}): + print row + + :param str sql: The SQL statement + :param dict parameters: A dictionary of query parameters + :rtype: iterator + + """ + self._cursor.execute(sql, parameters) + for record in self._cursor: + yield record + + def query_all(self, sql, parameters=None): + """Issue a query on the server mogrifying the parameters against the + sql statement and returning the entire result set as a list. 
+
+        :param str sql: The SQL statement
+        :param dict parameters: A dictionary of query parameters
+        :rtype: list
+
+        """
+        self._cursor.execute(sql, parameters or {})
+        return self._cursor.fetchall()
+
+    # Listen Notify
+
+    def listen(self, channel, callback=None):
+        pass
+
+    def notifications(self):
+        pass
+
+    # TPC Transaction Functionality
+
+    def tx_begin(self):
+        """Begin a new transaction"""
+        # Ensure that auto-commit is off
+        if self._conn.autocommit:
+            self._conn.autocommit = False
+
+    def tx_commit(self):
+        self._conn.commit()
+
+    def tx_rollback(self):
+        self._conn.rollback()
+
     @property
     def tx_status(self):
         """Return the transaction status for the current connection.
@@ -150,20 +298,7 @@ class Session(object):
         """
         return self._conn.get_transaction_status()
 
-    def close(self):
-        """Explicitly close the connection and remove it from the connection
-        cache.
-
-        :raises: AssertionError
-
-        """
-        if not self._conn:
-            raise AssertionError('Connection not open')
-        self._conn.close()
-        if self._use_pool:
-            pool.remove_connection(self._uri)
-        self._conn = None
-        self._cursor = None
+    # Internal methods
 
     def _autocommit(self):
         """Set the isolation level automatically to commit after every query"""
@@ -193,7 +328,8 @@ class Session(object):
                 return connection
 
         # Create a new PostgreSQL connection
-        connection = psycopg2.connect(self._uri)
+        LOGGER.debug('Connection KWARGS: %r', utils.uri_to_kwargs(self._uri))
+        connection = psycopg2.connect(**utils.uri_to_kwargs(self._uri))
 
         # Add it to the pool, if pooling is enabled
         if self._use_pool:
@@ -205,6 +341,11 @@ class Session(object):
         if PYPY:
             connection.reset()
 
+        # Register the custom data types
+        self._register_json(connection)
+        self._register_unicode(connection)
+        self._register_uuid(connection)
+
         return connection
 
     def _get_cursor(self, cursor_factory):
@@ -216,17 +357,32 @@ class Session(object):
         """
         return self._conn.cursor(cursor_factory=cursor_factory)
 
-    def _register_unicode(self):
+    @staticmethod
+    def _register_json(connection):
+        """Register the JSON extension from the psycopg2.extras module
+
+        :param psycopg2.connection connection: The connection to register on
+
+        """
+        psycopg2.extras.register_json(conn_or_curs=connection)
+
+    @staticmethod
+    def _register_unicode(connection):
         """Register the cursor to be able to receive Unicode string.
 
-        :param psycopg2.cursor: The cursor to add unicode support to
+        :param psycopg2.connection connection: The connection to register on
 
         """
         psycopg2.extensions.register_type(psycopg2.extensions.UNICODE,
-                                          self._cursor)
+                                          connection)
         psycopg2.extensions.register_type(psycopg2.extensions.UNICODEARRAY,
-                                          self._cursor)
+                                          connection)
 
-    def _register_uuid(self):
-        """Register the UUID extension from psycopg2"""
-        psycopg2.extras.register_uuid(self._conn)
+    @staticmethod
+    def _register_uuid(connection):
+        """Register the UUID extension from the psycopg2.extra module
+
+        :param psycopg2.connection connection: The connection to register on
+
+        """
+        psycopg2.extras.register_uuid(conn_or_curs=connection)
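The updated class docstring in the diff above advertises `Session` as a context manager (`with queries.Session(...) as session:`); the protocol behind that is just `__enter__` returning the instance and cleanup in `__exit__`. A minimal stdlib sketch of the same pattern, independent of psycopg2 (the class name and `closed` bookkeeping here are illustrative, not taken from the library):

```python
class ManagedResource(object):
    """Illustrative stand-in for a Session-like context manager."""

    def __init__(self):
        self.closed = False

    def __enter__(self):
        # Return the instance so `with ... as x:` binds it
        return self

    def __exit__(self, exc_type, exc_val, exc_tb):
        # Always runs on block exit, even if the block raised
        self.closed = True
        return False  # do not suppress exceptions


with ManagedResource() as resource:
    assert not resource.closed
# After the block, __exit__ has marked the resource closed
print(resource.closed)  # True
```

Returning `False` from `__exit__` is the conventional choice here: it lets exceptions from the `with` block propagate after cleanup has run.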
209
Updated documentation and class functions
53
.py
py
bsd-3-clause
gmr/queries
1218
code:: python + + import queries + + with queries.Session('pgsql://postgres@localhost/postgres') as session: + for row in session.Query('SELECT * FROM table'): + print row - Uses a module level cache of connections to reduce overhead. + :param str uri: PostgreSQL connection URI + :param psycopg2.cursor: The cursor type to use + :param bool use_pool: Use the connection pool """ _from_pool = False + _tpc_id = None # Connection status constants - BEGIN = extensions.STATUS_BEGIN INTRANS = extensions.STATUS_IN_TRANSACTION PREPARED = extensions.STATUS_PREPARED READY = extensions.STATUS_READY @@ -35,7 +67,7 @@ class Session(object): TX_INTRANS = extensions.TRANSACTION_STATUS_INTRANS TX_UNKNOWN = extensions.TRANSACTION_STATUS_UNKNOWN - def __init__(self, uri, + def __init__(self, uri=DEFAULT_URI, cursor_factory=extras.RealDictCursor, use_pool=True): """Connect to a PostgreSQL server using the module wide connection and @@ -52,17 +84,13 @@ class Session(object): self._cursor = self._get_cursor(cursor_factory) self._autocommit() - # Don't re-register unicode or uuid - if not use_pool or not self._from_pool: - self._register_unicode() - self._register_uuid() - def __del__(self): """When deleting the context, ensure the instance is removed from caches, etc. """ - self._cleanup() + #self._cleanup() + pass def __enter__(self): """For use as a context manager, return a handle to this object @@ -78,12 +106,34 @@ class Session(object): caches, etc. """ - self._cleanup() + #self._cleanup() + pass @property def backend_pid(self): + """Return the backend process ID of the PostgreSQL server that this + session is connected to. + + :rtype: int + + """ return self._conn.get_backend_pid() + def close(self): + """Explicitly close the connection and remove it from the connection + pool if pooling is enabled. 
+ + :raises: AssertionError + + """ + if not self._conn: + raise AssertionError('Connection not open') + self._conn.close() + if self._use_pool: + pool.remove_connection(self._uri) + self._conn = None + self._cursor = None + @property def connection(self): """Returns the psycopg2 PostgreSQL connection instance @@ -102,37 +152,135 @@ class Session(object): """ return self._cursor - def cancel(self): - self._conn.cancel() - - def commit(self): - self._conn.commit() - - def listen(self, channel): - pass - @property def encoding(self): - return self._conn.encoding - - def set_encoding(self, value='UTF-8'): - self._conn.set_client_encoding(value) + """The current client encoding value. - def rollback(self): - self._conn.rollback() + :rtype: str - def create_transaction(self): - self._conn.autocommit = False - self._cursor.execute('') + """ + return self._conn.encoding @property def notices(self): + """A list of up to the last 50 server notices sent to the client. + + :rtype: list + + """ return self._conn.notices + def set_encoding(self, value=DEFAULT_ENCODING): + """Set the client encoding for the session if the value specified + is different than the current client encoding. + + :param str value: The encoding value to use + + """ + if self._conn.encoding != value: + self._conn.set_client_encoding(value) + @property def status(self): + """Return the current connection status as an integer value. 
+ + The status should match one of the following constants: + + - queries.Session.INTRANS: Connection established, in transaction + - queries.Session.PREPARED: Prepared for second phase of transaction + - queries.Session.READY: Connected, no active transaction + + :rtype: int + + """ + if self._conn.status == psycopg2.extensions.STATUS_BEGIN: + return self.READY return self._conn.status + # Querying, executing, copying, etc + + def callproc(self, name, parameters=None): + """Call a stored procedure on the server and return an iterator of the + result set for easy access to the data. + + .. code:: python + + for row in session.callproc('now'): + print row + + :param str name: The procedure name + :param list parameters: The list of parameters to pass in + :return: iterator + + """ + self._cursor.callproc(name, parameters) + for record in self._cursor: + yield record + + def callproc_all(self, name, parameters=None): + """Call a stored procedure on the server returning all of the rows + returned by the server. + + :rtype: list + + """ + self._cursor.callproc(name, parameters) + return self._cursor.fetchall() + + def query(self, sql, parameters=None): + """A generator to issue a query on the server, mogrifying the + parameters against the sql statement and returning the results as an + iterator. + + .. code:: python + + for row in session.query('SELECT * FROM foo WHERE bar=%(bar)s', + {'bar': 'baz'}): + print row + + :param str sql: The SQL statement + :param dict parameters: A dictionary of query parameters + :rtype: iterator + + """ + self._cursor.execute(sql, parameters) + for record in self._cursor: + yield record + + def query_all(self, sql, parameters=None): + """Issue a query on the server mogrifying the parameters against the + sql statement and returning the entire result set as a list. 
+ + :param str sql: The SQL statement + :param dict parameters: A dictionary of query parameters + :rtype: list + + """ + self._cursor.execute(sql, parameters or {}) + return self._cursor.fetchall() + + # Listen Notify + + def listen(self, channel, callback=None): + pass + + def notifications(self): + pass + + # TPC Transaction Functionality + + def tx_begin(self): + """Begin a new transaction""" + # Ensure that auto-commit is off + if self._conn.autocommit: + self._conn.autocommit = False + + def tx_commit(self): + self._conn.commit() + + def tx_rollback(self): + self._conn.rollback() + @property def tx_status(self): """Return the transaction status for the current connection. @@ -150,20 +298,7 @@ class Session(object): """ return self._conn.get_transaction_status() - def close(self): - """Explicitly close the connection and remove it from the connection - cache. - - :raises: AssertionError - - """ - if not self._conn: - raise AssertionError('Connection not open') - self._conn.close() - if self._use_pool: - pool.remove_connection(self._uri) - self._conn = None - self._cursor = None + # Internal methods def _autocommit(self): """Set the isolation level automatically to commit after every query""" @@ -193,7 +328,8 @@ class Session(object): return connection # Create a new PostgreSQL connection - connection = psycopg2.connect(self._uri) + LOGGER.debug('Connection KWARGS: %r', utils.uri_to_kwargs(self._uri)) + connection = psycopg2.connect(**utils.uri_to_kwargs(self._uri)) # Add it to the pool, if pooling is enabled if self._use_pool: @@ -205,6 +341,11 @@ class Session(object): if PYPY: connection.reset() + # Register the custom data types + self._register_json(connection) + self._register_unicode(connection) + self._register_uuid(connection) + return connection def _get_cursor(self, cursor_factory): @@ -216,17 +357,32 @@ class Session(object): """ return self._conn.cursor(cursor_factory=cursor_factory) - def _register_unicode(self): + @staticmethod + def 
_register_json(connection): + """Register the JSON extension from the psycopg2.extras module + + :param psycopg2.connection connection: The connection to register on + + """ + psycopg2.extras.register_json(conn_or_curs=connection) + + @staticmethod + def _register_unicode(connection): """Register the cursor to be able to receive Unicode string. - :param psycopg2.cursor: The cursor to add unicode support to + :param psycopg2.connection connection: The connection to register on """ psycopg2.extensions.register_type(psycopg2.extensions.UNICODE, - self._cursor) + connection) psycopg2.extensions.register_type(psycopg2.extensions.UNICODEARRAY, - self._cursor) + connection) - def _register_uuid(self): - """Register the UUID extension from psycopg2""" - psycopg2.extras.register_uuid(self._conn) + @staticmethod + def _register_uuid(connection): + """Register the UUID extension from the psycopg2.extra module + + :param psycopg2.connection connection: The connection to register on + + """ + psycopg2.extras.register_uuid(conn_or_curs=connection)
209
Updated documentation and class functions
53
.py
py
bsd-3-clause
gmr/queries
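The session.py diff above connects using a PostgreSQL URI, building keyword arguments via `utils.uri_to_kwargs(self._uri)` before calling `psycopg2.connect(**kwargs)`. As a rough stdlib-only illustration of that mapping — not the library's actual implementation, which also handles defaults, query-string options, and encoded passwords — a URI can be decomposed like this:

```python
from urllib.parse import urlparse

def uri_to_kwargs_sketch(uri):
    """Roughly decompose a PostgreSQL URI into psycopg2-style kwargs.

    Illustrative sketch only; queries.utils.uri_to_kwargs does more
    (defaults, query-string options, percent-decoded passwords).
    """
    parts = urlparse(uri)
    return {
        'host': parts.hostname,
        'port': parts.port or 5432,          # PostgreSQL default port
        'user': parts.username,
        'password': parts.password,
        'dbname': parts.path.lstrip('/') or None,
    }

kwargs = uri_to_kwargs_sketch('postgresql://postgres:secret@localhost:5432/postgres')
print(kwargs['host'], kwargs['port'], kwargs['dbname'])
```

These kwargs would then be passed straight to `psycopg2.connect(**kwargs)`, as the `_connect` method in the diff does with the real helper.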
1219
<NME> setup.cfg <BEF> ADDFILE <MSG> Add deployment via travis, add wheel distribution <DFF> @@ -0,0 +1,2 @@ +[bdist_wheel] +universal = 1 \ No newline at end of file
2
Add deployment via travis, add wheel distribution
0
.cfg
cfg
bsd-3-clause
gmr/queries
1221
<NME> history.rst <BEF> Version History =============== - 1.9.1 2016-10-25 - Handle exceptions raised when creating the connection - 1.9.0 2016-07-01 - Handle a potential race condition in TornadoSession when too many simultaneous new connections are made and a pool fills up - Increase logging in various places to be more informative 2.0.0 2018-01-29 ----------------- - REMOVED support for Python 2.6 - 1.8.9 2015-11-11 - Move to psycopg2cffi for PyPy support - 1.7.5 2015-09-03 - Dont let Session and TornadoSession share connections - 1.7.1 2015-03-25 - Fix TornadoSession's use of cleanup (#8) - Fix by Oren Itamar - 1.7.0 2015-01-13 - Implement ``Results.__bool__`` to be explicit about Python 3 support. - Catch any exception raised when using TornadoSession and invoking the execute function in psycopg2 for exceptions raised prior to sending the query to Postgres. This could be psycopg2.Error, IndexError, KeyError, or who knows, it's not documented in psycopg2. 1.10.3 2017-11-01 ----------------- - Remove the functionality from ``TornadoSession.validate`` and make it raise a ``DeprecationWarning`` - Catch the ``KeyError`` raised when ``PoolManager.clean()`` is invoked for a pool that doesn't exist 1.10.2 2017-10-26 ----------------- - Ensure the pool exists when executing a query in TornadoSession, the new timeout behavior prevented that from happening. 1.10.1 2017-10-24 ----------------- - Use an absolute time in the call to ``add_timeout`` 1.10.0 2017-09-27 ----------------- - Free when tornado_session.Result is ``__del__``'d without ``free`` being called. 
- Auto-clean the pool after Results.free TTL+1 in tornado_session.TornadoSession - Don't raise NotImplementedError in Results.free for synchronous use, just treat as a noop 1.9.1 2016-10-25 ---------------- - Add better exception handling around connections and getting the logged in user 1.9.0 2016-07-01 ---------------- - Handle a potential race condition in TornadoSession when too many simultaneous new connections are made and a pool fills up - Increase logging in various places to be more informative - Restructure queries specific exceptions to all extend off of a base QueriesException - Trivial code cleanup 1.8.10 2016-06-14 ----------------- - Propagate PoolManager exceptions from TornadoSession (#20) - Fix by Dave Shawley 1.8.9 2015-11-11 ---------------- - Move to psycopg2cffi for PyPy support 1.7.5 2015-09-03 ---------------- - Don't let Session and TornadoSession share connections 1.7.1 2015-03-25 ---------------- - Fix TornadoSession's use of cleanup (#8) - Fix by Oren Itamar 1.7.0 2015-01-13 ---------------- - Implement :py:meth:`Pool.shutdown <queries.pool.Pool.shutdown>` and :py:meth:`PoolManager.shutdown <queries.pool.PoolManager.shutdown>` to cleanly shutdown all open, non-executing connections across a Pool or all pools. Update locks in Pool operations to ensure atomicity. 
1.6.1 2015-01-09 ---------------- - Fixes an iteration error when closing a pool (#7) - Fix by Chris McGuire 1.6.0 2014-11-20 ----------------- - Handle URI encoded password values properly 1.5.0 2014-10-07 ---------------- - Handle empty query results in the iterator (#4) - Fix by Den Teresh 1.4.0 2014-09-04 ---------------- - Address exception handling in tornado_session <MSG> Update history <DFF> @@ -1,7 +1,7 @@ Version History =============== - 1.9.1 2016-10-25 - - Handle exceptions raised when creating the connection + - Add better exception handling around connections and getting the logged in user - 1.9.0 2016-07-01 - Handle a potential race condition in TornadoSession when too many simultaneous new connections are made and a pool fills up - Increase logging in various places to be more informative @@ -12,7 +12,7 @@ Version History - 1.8.9 2015-11-11 - Move to psycopg2cffi for PyPy support - 1.7.5 2015-09-03 - - Dont let Session and TornadoSession share connections + - Don't let Session and TornadoSession share connections - 1.7.1 2015-03-25 - Fix TornadoSession's use of cleanup (#8) - Fix by Oren Itamar - 1.7.0 2015-01-13
2
Update history
2
.rst
rst
bsd-3-clause
gmr/queries
1223
<NME> .travis.yml <BEF> sudo: false language: python dist: xenial env: global: - PATH=$HOME/.local/bin:$PATH - AWS_DEFAULT_REGION=us-east-1 - 3.4 install: - if [[ $TRAVIS_PYTHON_VERSION == 'pypy' ]]; then pip install psycopg2ct; fi - if [[ $TRAVIS_PYTHON_VERSION == '2.6' ]]; then pip install psycopg2 unittest2; fi - if [[ $TRAVIS_PYTHON_VERSION == '2.7' ]]; then pip install psycopg2; fi - if [[ $TRAVIS_PYTHON_VERSION == '3.2' ]]; then pip install psycopg2; fi - if [[ $TRAVIS_PYTHON_VERSION == '3.3' ]]; then pip install psycopg2; fi - if [[ $TRAVIS_PYTHON_VERSION == '3.4' ]]; then pip install psycopg2; fi script: nosetests install: - pip install awscli - pip install -r requires/testing.txt - python setup.py develop script: nosetests after_success: - aws s3 cp .coverage "s3://com-gavinroy-travis/queries/$TRAVIS_BUILD_NUMBER/.coverage.${TRAVIS_PYTHON_VERSION}" jobs: include: - python: 2.7 - python: 3.4 - python: 3.5 - python: 3.6 - python: 3.7 - python: 3.8 - stage: coverage if: repo = gmr/queries services: [] python: 3.7 install: - pip install awscli coverage codecov script: - mkdir coverage - aws s3 cp --recursive s3://com-gavinroy-travis/queries/$TRAVIS_BUILD_NUMBER/ coverage - cd coverage - coverage combine - cd .. - mv coverage/.coverage . 
- coverage report after_success: codecov - stage: deploy if: repo = gmr/queries python: 3.6 services: [] install: true script: true after_success: true deploy: distributions: sdist bdist_wheel provider: pypi user: crad on: tags: true all_branches: true password: secure: UWQWui+QhAL1cz6oW/vqjEEp6/EPn1YOlItNJcWHNOO/WMMOlaTVYVUuXp+y+m52B+8PtYZZCTHwKCUKe97Grh291FLxgd0RJCawA40f4v1gmOFYLNKyZFBGfbC69/amxvGCcDvOPtpChHAlTIeokS5EQneVcAhXg2jXct0HTfI= <MSG> Require tornado for testing <DFF> @@ -9,11 +9,11 @@ python: - 3.4 install: - - if [[ $TRAVIS_PYTHON_VERSION == 'pypy' ]]; then pip install psycopg2ct; fi - - if [[ $TRAVIS_PYTHON_VERSION == '2.6' ]]; then pip install psycopg2 unittest2; fi - - if [[ $TRAVIS_PYTHON_VERSION == '2.7' ]]; then pip install psycopg2; fi - - if [[ $TRAVIS_PYTHON_VERSION == '3.2' ]]; then pip install psycopg2; fi - - if [[ $TRAVIS_PYTHON_VERSION == '3.3' ]]; then pip install psycopg2; fi - - if [[ $TRAVIS_PYTHON_VERSION == '3.4' ]]; then pip install psycopg2; fi + - if [[ $TRAVIS_PYTHON_VERSION == 'pypy' ]]; then pip install psycopg2ct tornado; fi + - if [[ $TRAVIS_PYTHON_VERSION == '2.6' ]]; then pip install psycopg2 unittest2 tornado; fi + - if [[ $TRAVIS_PYTHON_VERSION == '2.7' ]]; then pip install psycopg2 tornado; fi + - if [[ $TRAVIS_PYTHON_VERSION == '3.2' ]]; then pip install psycopg2 tornado; fi + - if [[ $TRAVIS_PYTHON_VERSION == '3.3' ]]; then pip install psycopg2 tornado; fi + - if [[ $TRAVIS_PYTHON_VERSION == '3.4' ]]; then pip install psycopg2 tornado; fi script: nosetests
6
Require tornado for testing
6
.yml
travis
bsd-3-clause
gmr/queries
1225
<NME> session.py <BEF> """The Session class allows for a unified (and simplified) view of interfacing with a PostgreSQL database server. Connection details are passed in as a PostgreSQL URI and connections are pooled by default, allowing for reuse of connections across modules in the Python runtime without having to pass around the object handle. While you can still access the raw `psycopg2` connection and cursor objects to provide ultimate flexibility in how you use the queries.Session object, there are convenience methods designed to simplify the interaction with PostgreSQL. For `psycopg2` functionality outside of what is exposed in Session, simply use the Session.connection or Session.cursor properties to gain access to either object just as you would in a program using psycopg2 directly. Example usage: .. code:: python import queries with queries.Session('pgsql://postgres@localhost/postgres') as session: for row in session.Query('SELECT * FROM table'): print row """ import hashlib import logging import psycopg2 from psycopg2 import extensions, extras from queries import pool, results, utils LOGGER = logging.getLogger(__name__) DEFAULT_ENCODING = 'UTF8' DEFAULT_URI = 'postgresql://localhost:5432' class Session(object): """The Session class allows for a unified (and simplified) view of interfacing with a PostgreSQL database server. The Session object can act as a context manager, providing automated cleanup and simple, Pythonic way of interacting with the object. 
:param str uri: PostgreSQL connection URI :param psycopg2.extensions.cursor: The cursor type to use :param int pool_idle_ttl: How long idle pools keep connections open :param int pool_max_size: The maximum size of the pool to use """ _conn = None _cursor = None _tpc_id = None _uri = None # Connection status constants INTRANS = extensions.STATUS_IN_TRANSACTION PREPARED = extensions.STATUS_PREPARED READY = extensions.STATUS_READY SETUP = extensions.STATUS_SETUP # Transaction status constants TX_ACTIVE = extensions.TRANSACTION_STATUS_ACTIVE TX_IDLE = extensions.TRANSACTION_STATUS_IDLE TX_INERROR = extensions.TRANSACTION_STATUS_INERROR TX_INTRANS = extensions.TRANSACTION_STATUS_INTRANS TX_UNKNOWN = extensions.TRANSACTION_STATUS_UNKNOWN def __init__(self, uri=DEFAULT_URI, cursor_factory=extras.RealDictCursor, pool_idle_ttl=pool.DEFAULT_IDLE_TTL, pool_max_size=pool.DEFAULT_MAX_SIZE, autocommit=True): """Connect to a PostgreSQL server using the module wide connection and set the isolation level. :param str uri: PostgreSQL connection URI :param psycopg2.extensions.cursor: The cursor type to use :param int pool_idle_ttl: How long idle pools keep connections open :param int pool_max_size: The maximum size of the pool to use """ self._pool_manager = pool.PoolManager.instance() self._uri = uri # Ensure the pool exists in the pool manager if self.pid not in self._pool_manager: self._pool_manager.create(self.pid, pool_idle_ttl, pool_max_size) self._conn = self._connect() self._cursor_factory = cursor_factory self._cursor = self._get_cursor(self._conn) self._autocommit(autocommit) @property def backend_pid(self): """Return the backend process ID of the PostgreSQL server that this session is connected to. :rtype: int """ return self._conn.get_backend_pid() def callproc(self, name, args=None): """Call a stored procedure on the server, returning the results in a :py:class:`queries.Results` instance. 
:param str name: The procedure name :param list args: The list of arguments to pass in :rtype: queries.Results :raises: queries.DataError :raises: queries.DatabaseError :raises: queries.IntegrityError :raises: queries.InternalError :raises: queries.InterfaceError :raises: queries.NotSupportedError :raises: queries.OperationalError :raises: queries.ProgrammingError """ try: self._cursor.callproc(name, args) except psycopg2.Error as err: self._incr_exceptions() raise err finally: self._incr_executions() return results.Results(self._cursor) def close(self): """Explicitly close the connection and remove it from the connection pool if pooling is enabled. If the connection is already closed :raises: psycopg2.InterfaceError """ @property def connection(self): """Returns the psycopg2 PostgreSQL connection instance :rtype: psycopg2.extensions.connection self._conn, self._cursor = None, None @property def cursor(self): """Returns the cursor instance :rtype: psycopg2.extensions.cursor return self._conn @property def cursor(self): """Return the current, active cursor for the open connection. :rtype: psycopg2.extensions.cursor """ return self._cursor @property def encoding(self): """Return the current client encoding value. :rtype: str """ return self._conn.encoding @property def notices(self): """Return a list of up to the last 50 server notices sent to the client. :rtype: list """ return self._conn.notices @property def pid(self): """Return the pool ID used for connection pooling. :rtype: str """ return hashlib.md5(':'.join([self.__class__.__name__, self._uri]).encode('utf-8')).hexdigest() def query(self, sql, parameters=None): """A generator to issue a query on the server, mogrifying the parameters against the sql statement. Results are returned as a :py:class:`queries.Results` object which can act as an iterator and has multiple ways to access the result data. 
:param str sql: The SQL statement :param dict parameters: A dictionary of query parameters :rtype: queries.Results :raises: queries.DataError :raises: queries.DatabaseError :raises: queries.IntegrityError :raises: queries.InternalError :raises: queries.InterfaceError :raises: queries.NotSupportedError :raises: queries.OperationalError :raises: queries.ProgrammingError """ try: self._cursor.execute(sql, parameters) except psycopg2.Error as err: self._incr_exceptions() raise err finally: self._incr_executions() return results.Results(self._cursor) def set_encoding(self, value=DEFAULT_ENCODING): """Set the client encoding for the session if the value specified is different than the current client encoding. :param str value: The encoding value to use """ if self._conn.encoding != value: self._conn.set_client_encoding(value) def __del__(self): """When deleting the context, ensure the instance is removed from caches, etc. """ self._cleanup() def __enter__(self): """For use as a context manager, return a handle to this object instance. :rtype: Session """ return self def __exit__(self, exc_type, exc_val, exc_tb): """When leaving the context, ensure the instance is removed from caches, etc. """ self._cleanup() def _autocommit(self, autocommit): """Set the isolation level automatically to commit or not after every query :param autocommit: Boolean (Default - True) """ self._conn.autocommit = autocommit def _cleanup(self): """Remove the connection from the stack, closing out the cursor""" if self._cursor: LOGGER.debug('Closing the cursor on %s', self.pid) self._cursor.close() self._cursor = None if self._conn: LOGGER.debug('Freeing %s in the pool', self.pid) try: pool.PoolManager.instance().free(self.pid, self._conn) except pool.ConnectionNotFoundError: pass self._conn = None def _connect(self): """Connect to PostgreSQL, either by reusing a connection from the pool if possible, or by creating the new connection. 
:rtype: psycopg2.extensions.connection :raises: pool.NoIdleConnectionsError """ # Attempt to get a cached connection from the connection pool try: connection = self._pool_manager.get(self.pid, self) LOGGER.debug("Re-using connection for %s", self.pid) except pool.NoIdleConnectionsError: if self._pool_manager.is_full(self.pid): raise # Create a new PostgreSQL connection kwargs = utils.uri_to_kwargs(self._uri) LOGGER.debug("Creating a new connection for %s", self.pid) connection = self._psycopg2_connect(kwargs) self._pool_manager.add(self.pid, connection) self._pool_manager.lock(self.pid, connection, self) # Added in because psycopg2ct connects and leaves the connection in # a weird state: consts.STATUS_DATESTYLE, returning from # Connection._setup without setting the state as const.STATUS_OK if utils.PYPY: connection.reset() # Register the custom data types self._register_unicode(connection) self._register_uuid(connection) return connection def _get_cursor(self, connection, name=None): """Return a cursor for the given cursor_factory. Specify a name to use server-side cursors. :param connection: The connection to create a cursor on :type connection: psycopg2.extensions.connection :param str name: A cursor name for a server side cursor :rtype: psycopg2.extensions.cursor """ cursor = connection.cursor(name=name, cursor_factory=self._cursor_factory) if name is not None: cursor.scrollable = True cursor.withhold = True return cursor def _incr_exceptions(self): """Increment the number of exceptions for the current connection.""" self._pool_manager.get_connection(self.pid, self._conn).exceptions += 1 def _incr_executions(self): """Increment the number of executions for the current connection.""" self._pool_manager.get_connection(self.pid, self._conn).executions += 1 def _psycopg2_connect(self, kwargs): """Return a psycopg2 connection for the specified kwargs. Extend for use in async session adapters. 
:param dict kwargs: Keyword connection args :rtype: psycopg2.extensions.connection """ return psycopg2.connect(**kwargs) @staticmethod def _register_unicode(connection): """Register the cursor to be able to receive Unicode string. :type connection: psycopg2.extensions.connection :param connection: Where to register things """ psycopg2.extensions.register_type(psycopg2.extensions.UNICODE, connection) psycopg2.extensions.register_type(psycopg2.extensions.UNICODEARRAY, connection) @staticmethod def _register_uuid(connection): """Register the UUID extension from the psycopg2.extra module :type connection: psycopg2.extensions.connection :param connection: Where to register things """ psycopg2.extras.register_uuid(conn_or_curs=connection) @property def _status(self): """Return the current connection status as an integer value. The status should match one of the following constants: - queries.Session.INTRANS: Connection established, in transaction - queries.Session.PREPARED: Prepared for second phase of transaction - queries.Session.READY: Connected, no active transaction :rtype: int """ if self._conn.status == psycopg2.extensions.STATUS_BEGIN: return self.READY return self._conn.status <MSG> Update the documentaiton [ci skip] <DFF> @@ -139,7 +139,7 @@ class Session(object): @property def connection(self): - """Returns the psycopg2 PostgreSQL connection instance + """The current open connection to PostgreSQL. :rtype: psycopg2.extensions.connection @@ -148,7 +148,7 @@ class Session(object): @property def cursor(self): - """Returns the cursor instance + """The current, active cursor for the open connection. :rtype: psycopg2.extensions.cursor
2
Update the documentaiton
2
.py
py
bsd-3-clause
gmr/queries
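The `Session.pid` property in the record above keys connection pooling on an MD5 hash of the session class name and connection URI, so two sessions built from the same URI share one pool. The sketch below reproduces that keying outside the library so the behavior can be checked in isolation; the helper name `pool_id` is ours, not part of the queries API.

```python
import hashlib


def pool_id(class_name, uri):
    """Mirror of Session.pid: pools are keyed by an MD5 hex digest of
    '<class name>:<connection URI>'."""
    return hashlib.md5(
        ':'.join([class_name, uri]).encode('utf-8')).hexdigest()


# Same class + same URI -> same pool; a different URI gets its own pool.
a = pool_id('Session', 'postgresql://postgres@localhost:5432/postgres')
b = pool_id('Session', 'postgresql://postgres@localhost:5432/postgres')
c = pool_id('Session', 'postgresql://postgres@localhost:5432/other')
```

Because the digest is deterministic, every `Session` pointed at the same URI in a process resolves to the same pool entry without any shared registration step.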
1227
<NME> pool_manager_tests.py
<BEF> """
Tests for Manager class in the pool module

"""
import unittest
import uuid

import mock

from queries import pool


def mock_connection():
    conn = mock.MagicMock('psycopg2.extensions.connection')
    conn.close = mock.Mock()
    conn.closed = True
    conn.isexecuting = mock.Mock(return_value=False)
    return conn


class ManagerTests(unittest.TestCase):

    def setUp(self):
        self.manager = pool.PoolManager.instance()

    def tearDown(self):
        self.manager.shutdown()

    def test_singleton_behavior(self):
        self.assertEqual(pool.PoolManager.instance(), self.manager)

    def test_has_pool_false(self):
        self.assertNotIn(mock.Mock(), self.manager)

    def test_adding_to_pool(self):
        pid = str(uuid.uuid4())
        psycopg2_conn = mock.Mock()
        self.manager.add(pid, psycopg2_conn)
        self.assertIn(psycopg2_conn, self.manager._pools[pid])

    def test_adding_to_pool_ensures_pool_exists(self):
        pid = str(uuid.uuid4())
        psycopg2_conn = mock.Mock()
        self.assertRaises(KeyError, self.manager.add, pid, psycopg2_conn)

    def test_ensures_pool_exists_raises_key_error(self):
        pid = str(uuid.uuid4())
        self.assertRaises(KeyError, self.manager._ensure_pool_exists, pid)

    def test_clean_ensures_pool_exists_catches_key_error(self):
        pid = str(uuid.uuid4())
        self.assertIsNone(self.manager.clean(pid))

    def test_clean_invokes_pool_clean(self):
        pid = str(uuid.uuid4())
        with mock.patch('queries.pool.Pool') as Pool:
            self.manager._pools[pid] = Pool()
            self.manager._pools[pid].clean = clean = mock.Mock()
            self.manager.clean(pid)
            clean.assert_called_once_with()

    def test_clean_removes_pool(self):
        pid = str(uuid.uuid4())
        with mock.patch('queries.pool.Pool') as Pool:
            self.manager._pools[pid] = Pool()
            self.manager.clean(pid)
            self.assertNotIn(pid, self.manager._pools)

    def test_create_prevents_duplicate_pool_id(self):
        pid = str(uuid.uuid4())
        with mock.patch('queries.pool.Pool'):
            self.manager.create(pid, 10, 10)
            self.assertRaises(KeyError, self.manager.create, pid, 10, 10)

    def test_create_passes_in_idle_ttl(self):
        pid = str(uuid.uuid4())
        self.manager.create(pid, 12)
        self.assertEqual(self.manager._pools[pid].idle_ttl, 12)

    def test_create_passes_in_max_size(self):
        pid = str(uuid.uuid4())
        self.manager.create(pid, 10, 16)
        self.assertEqual(self.manager._pools[pid].max_size, 16)

    def test_get_ensures_pool_exists(self):
        pid = str(uuid.uuid4())
        session = mock.Mock()
        self.assertRaises(KeyError, self.manager.get, pid, session)

    def test_get_invokes_pool_get(self):
        pid = str(uuid.uuid4())
        session = mock.Mock()
        self.manager.create(pid)
        self.manager._pools[pid].get = get = mock.Mock()
        self.manager.get(pid, session)
        get.assert_called_once_with(session)

    def test_free_ensures_pool_exists(self):
        pid = str(uuid.uuid4())
        psycopg2_conn = mock_connection()
        self.assertRaises(KeyError, self.manager.free, pid, psycopg2_conn)

    def test_free_invokes_pool_free(self):
        pid = str(uuid.uuid4())
        psycopg2_conn = mock_connection()
        self.manager.create(pid)
        self.manager._pools[pid].free = free = mock.Mock()
        self.manager.free(pid, psycopg2_conn)
        free.assert_called_once_with(psycopg2_conn)

    def test_has_connection_ensures_pool_exists(self):
        pid = str(uuid.uuid4())
        self.assertRaises(KeyError, self.manager.has_connection, pid, None)

    def test_has_idle_connection_ensures_pool_exists(self):
        pid = str(uuid.uuid4())
        self.assertRaises(KeyError, self.manager.has_idle_connection, pid)

    def test_has_connection_returns_false(self):
        pid = str(uuid.uuid4())
        self.manager.create(pid)
        self.assertFalse(self.manager.has_connection(pid, mock.Mock()))

    def test_has_connection_returns_true(self):
        pid = str(uuid.uuid4())
        self.manager.create(pid)
        psycopg2_conn = mock_connection()
        self.manager.add(pid, psycopg2_conn)
        self.assertTrue(self.manager.has_connection(pid, psycopg2_conn))
        self.manager.remove(pid)

    def test_has_idle_connection_returns_false(self):
        pid = str(uuid.uuid4())
        self.manager.create(pid)
        with mock.patch('queries.pool.Pool.idle_connections',
                        new_callable=mock.PropertyMock) as idle_connections:
            idle_connections.return_value = 0
            self.assertFalse(self.manager.has_idle_connection(pid))

    def test_has_idle_connection_returns_true(self):
        pid = str(uuid.uuid4())
        self.manager.create(pid)
        with mock.patch('queries.pool.Pool.idle_connections',
                        new_callable=mock.PropertyMock) as idle_connections:
            idle_connections.return_value = 5
            self.assertTrue(self.manager.has_idle_connection(pid))

    def test_is_full_ensures_pool_exists(self):
        pid = str(uuid.uuid4())
        self.assertRaises(KeyError, self.manager.is_full, pid)

    def test_is_full_invokes_pool_is_full(self):
        pid = str(uuid.uuid4())
        self.manager.create(pid)
        with mock.patch('queries.pool.Pool.is_full',
                        new_callable=mock.PropertyMock) as is_full:
            self.manager.is_full(pid)
            is_full.assert_called_once_with()

    def test_lock_ensures_pool_exists(self):
        pid = str(uuid.uuid4())
        self.assertRaises(KeyError, self.manager.lock, pid, None, None)

    def test_lock_invokes_pool_lock(self):
        pid = str(uuid.uuid4())
        self.manager.create(pid)
        self.manager._pools[pid].lock = lock = mock.Mock()
        psycopg2_conn = mock.Mock()
        session = mock.Mock()
        self.manager.lock(pid, psycopg2_conn, session)
        lock.assert_called_once_with(psycopg2_conn, session)

    def test_remove_ensures_pool_exists(self):
        pid = str(uuid.uuid4())
        self.assertRaises(KeyError, self.manager.remove, pid)

    def test_remove_invokes_pool_close(self):
        pid = str(uuid.uuid4())
        self.manager.create(pid)
        self.manager._pools[pid].close = method = mock.Mock()
        self.manager.remove(pid)
        method.assert_called_once_with()

    def test_remove_deletes_pool(self):
        pid = str(uuid.uuid4())
        self.manager.create(pid)
        self.manager._pools[pid].close = mock.Mock()
        self.manager.remove(pid)
        self.assertNotIn(pid, self.manager._pools)

    def test_remove_connection_ensures_pool_exists(self):
        pid = str(uuid.uuid4())
        self.assertRaises(KeyError, self.manager.remove_connection, pid, None)

    def test_remove_connection_invokes_pool_remove(self):
        pid = str(uuid.uuid4())
        self.manager.create(pid)
        self.manager._pools[pid].remove = remove = mock.Mock()
        psycopg2_conn = mock.Mock()
        self.manager.remove_connection(pid, psycopg2_conn)
        remove.assert_called_once_with(psycopg2_conn)

    def test_size_ensures_pool_exists(self):
        pid = str(uuid.uuid4())
        self.assertRaises(KeyError, self.manager.size, pid)

    def test_size_returns_pool_length(self):
        pid = str(uuid.uuid4())
        self.manager.create(pid)
        self.assertEqual(self.manager.size(pid), len(self.manager._pools[pid]))

    def test_set_idle_ttl_ensures_pool_exists(self):
        pid = str(uuid.uuid4())
        self.assertRaises(KeyError, self.manager.set_idle_ttl, pid, None)

    def test_set_idle_ttl_invokes_pool_set_idle_ttl(self):
        pid = str(uuid.uuid4())
        self.manager.create(pid)
        self.manager._pools[pid].set_idle_ttl = set_idle_ttl = mock.Mock()
        self.manager.set_idle_ttl(pid, 256)
        set_idle_ttl.assert_called_once_with(256)

    def test_set_max_size_ensures_pool_exists(self):
        pid = str(uuid.uuid4())
        self.assertRaises(KeyError, self.manager.set_idle_ttl, pid, None)

    def test_set_max_size_invokes_pool_set_max_size(self):
        pid = str(uuid.uuid4())
        self.manager.create(pid)
        self.manager._pools[pid].set_max_size = set_max_size = mock.Mock()
        self.manager.set_max_size(pid, 128)
        set_max_size.assert_called_once_with(128)

    def test_shutdown_closes_all(self):
        pid1, pid2 = str(uuid.uuid4()), str(uuid.uuid4())
        self.manager.create(pid1)
        self.manager._pools[pid1].shutdown = method1 = mock.Mock()
        self.manager.create(pid2)
        self.manager._pools[pid2].shutdown = method2 = mock.Mock()
        self.manager.shutdown()
        method1.assert_called_once_with()
        method2.assert_called_once_with()

<MSG> A few more pool manager tests

<DFF> @@ -35,3 +35,13 @@ class ManagerTests(unittest.TestCase):
         psycopg2_conn = mock.Mock()
         self.manager.add(pid, psycopg2_conn)
         self.assertIn(psycopg2_conn, self.manager._pools[pid])
+
+    def test_adding_to_pool_ensures_pool_exists(self):
+        pid = str(uuid.uuid4())
+        psycopg2_conn = mock.Mock()
+        self.assertRaises(KeyError, self.manager.add, pid, psycopg2_conn)
+
+    def test_clean_ensures_pool_exists(self):
+        pid = str(uuid.uuid4())
+        psycopg2_conn = mock.Mock()
+        self.assertRaises(KeyError, self.manager.clean, pid)
10
A few more pool manager tests
0
.py
py
bsd-3-clause
gmr/queries
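The `test_singleton_behavior` case above asserts that `pool.PoolManager.instance()` always hands back the same object. The snippet below is a minimal, standalone sketch of that class-level singleton access pattern, not the library's actual implementation; the `_pools` attribute is included only to mirror the tests' use of `self.manager._pools`.

```python
class PoolManager(object):
    """Sketch of the singleton access pattern the tests exercise:
    every caller of instance() shares one manager object."""

    _instance = None

    def __init__(self):
        # Shared registry of pools, keyed by pool id
        self._pools = {}

    @classmethod
    def instance(cls):
        # Lazily create the shared instance on first access, then reuse it
        if cls._instance is None:
            cls._instance = cls()
        return cls._instance


m1 = PoolManager.instance()
m2 = PoolManager.instance()
```

Keeping the instance on the class is what lets independent modules pool connections without passing a manager handle around, which is the behavior the session tests later rely on.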
1229
<NME> session_tests.py <BEF> """ Tests for functionality in the session module """ import hashlib import logging import unittest import mock from psycopg2 import extras import psycopg2 # Out of order import to ensure psycopg2cffi is registered from queries import pool, results, session, utils LOGGER = logging.getLogger(__name__) class SessionTestCase(unittest.TestCase): URI = 'postgresql://foo:bar@localhost:5432/foo' @mock.patch('psycopg2.connect') @mock.patch('psycopg2.extensions.register_type') @mock.patch('psycopg2.extras.register_uuid') @mock.patch('queries.utils.uri_to_kwargs') def setUp(self, uri_to_kwargs, register_uuid, register_type, connect): self.conn = mock.Mock() self.conn.autocommit = False self.conn.closed = False self.conn.cursor = mock.Mock() self.conn.isexecuting = mock.Mock(return_value=False) self.conn.reset = mock.Mock() self.conn.status = psycopg2.extensions.STATUS_BEGIN self.psycopg2_connect = connect self.psycopg2_connect.return_value = self.conn self.psycopg2_register_type = register_type self.psycopg2_register_uuid = register_uuid self.uri_to_kwargs = uri_to_kwargs self.uri_to_kwargs.return_value = {'host': 'localhost', 'port': 5432, 'user': 'foo', 'password': 'bar', 'dbname': 'foo'} self.obj = session.Session(self.URI, pool_max_size=100) def test_init_sets_uri(self): self.assertEqual(self.obj._uri, self.URI) def test_init_creates_new_pool(self): self.assertIn(self.obj.pid, self.obj._pool_manager) def test_init_creates_connection(self): conns = \ [value.handle for key, value in self.obj._pool_manager._pools[self.obj.pid].connections.items()] self.assertIn(self.conn, conns) def test_init_sets_cursorfactory(self): self.assertEqual(self.obj._cursor_factory, extras.RealDictCursor) def test_init_gets_cursor(self): self.conn.cursor.assert_called_once_with( name=None, cursor_factory=extras.RealDictCursor) def test_init_sets_autocommit(self): self.assertTrue(self.conn.autocommit) def test_backend_pid_invokes_conn_backend_pid(self): 
self.conn.get_backend_pid = get_backend_pid = mock.Mock() LOGGER.debug('ValueL %s', self.obj.backend_pid) get_backend_pid.assert_called_once_with() def test_callproc_invokes_cursor_callproc(self): self.obj._cursor.callproc = mock.Mock() args = ('foo', ['bar', 'baz']) self.obj.callproc(*args) self.obj._cursor.callproc.assert_called_once_with(*args) def test_callproc_returns_results(self): self.obj._cursor.callproc = mock.Mock() args = ('foo', ['bar', 'baz']) self.assertIsInstance(self.obj.callproc(*args), results.Results) def test_close_raises_exception(self): self.obj._conn = None self.assertRaises(psycopg2.InterfaceError, self.obj.close) def test_close_removes_connection(self): self.obj.close() self.assertNotIn(self.conn, self.obj._pool_manager._pools[self.obj.pid]) def test_close_unassigns_connection(self): self.obj.close() self.assertIsNone(self.obj._conn) def test_close_unassigns_cursor(self): self.obj.close() self.assertIsNone(self.obj._cursor) def test_connection_property_returns_correct_value(self): self.assertEqual(self.obj.connection, self.conn) def test_cursor_property_returns_correct_value(self): self.assertEqual(self.obj.cursor, self.obj._cursor) def test_encoding_property_value(self): self.conn.encoding = 'UTF-8' self.assertEqual(self.obj.encoding, 'UTF-8') def test_notices_value(self): self.conn.notices = [1, 2, 3] self.assertListEqual(self.obj.notices, [1, 2, 3]) def test_pid_value(self): expectation = hashlib.md5( ':'.join([self.obj.__class__.__name__, self.URI]).encode('utf-8')).hexdigest() self.assertEqual(self.obj.pid, expectation) def test_query_invokes_cursor_execute(self): self.obj._cursor.callproc = mock.Mock() args = ('SELECT * FROM foo', ['bar', 'baz']) self.obj.query(*args) self.obj._cursor.execute.assert_called_once_with(*args) def test_set_encoding_sets_encoding_if_different(self): self.conn.encoding = 'LATIN-1' self.conn.set_client_encoding = set_client_encoding = mock.Mock() self.obj.set_encoding('UTF-8') 
set_client_encoding.assert_called_once_with('UTF-8') def test_set_encoding_does_not_set_encoding_if_same(self): self.conn.encoding = 'UTF-8' self.conn.set_client_encoding = set_client_encoding = mock.Mock() self.obj.set_encoding('UTF-8') self.assertFalse(set_client_encoding.called) @unittest.skipIf(utils.PYPY, 'PYPY does not invoke object.__del__ synchronously') def test_del_invokes_cleanup(self): cleanup = mock.Mock() with mock.patch.multiple('queries.session.Session', _cleanup=cleanup, _connect=mock.Mock(), _get_cursor=mock.Mock(), _autocommit=mock.Mock()): obj = session.Session(self.URI) del obj cleanup.assert_called_once_with() def test_exit_invokes_cleanup(self): cleanup = mock.Mock() with mock.patch.multiple('queries.session.Session', _cleanup=cleanup, _connect=mock.Mock(), _get_cursor=mock.Mock(), _autocommit=mock.Mock()): with session.Session(self.URI): pass self.assertTrue(cleanup.called) def test_autocommit_sets_attribute(self): self.conn.autocommit = False self.obj._autocommit() self.assertTrue(self.conn.autocommit) def test_cleanup_closes_cursor(self): self.obj._cursor.close = closeit = mock.Mock() self.conn = None self.obj._cleanup() closeit.assert_called_once_with() def test_cleanup_sets_cursor_to_none(self): self.obj._cursor.close = mock.Mock() self.conn = None self.obj._cleanup() self.assertIsNone(self.obj._cursor) def test_cleanup_frees_connection(self): with mock.patch.object(self.obj._pool_manager, 'free') as free: conn = self.obj._conn self.obj._cleanup() free.assert_called_once_with(self.obj.pid, conn) def test_cleanup_sets_connect_to_none(self): self.obj._cleanup() self.assertIsNone(self.obj._conn) def test_connect_invokes_pool_manager_get(self): with mock.patch.object(self.obj._pool_manager, 'get') as get: self.obj._connect() get.assert_called_once_with(self.obj.pid, self.obj) def test_connect_raises_noidleconnectionserror(self): with mock.patch.object(self.obj._pool_manager, 'get') as get: with mock.patch.object(self.obj._pool_manager, 
'is_full') as full: get.side_effect = pool.NoIdleConnectionsError(self.obj.pid) full.return_value = True self.assertRaises(pool.NoIdleConnectionsError, self.obj._connect) def test_connect_invokes_uri_to_kwargs(self): self.uri_to_kwargs.assert_called_once_with(self.URI) def test_connect_returned_the_proper_value(self): self.assertEqual(self.obj.connection, self.conn) def test_status_is_ready_by_default(self): self.assertEqual(self.obj._status, self.obj.READY) def test_status_when_not_ready(self): self.conn.status = self.obj.SETUP self.assertEqual(self.obj._status, self.obj.SETUP) def test_get_named_cursor_sets_scrollable(self): result = self.obj._get_cursor(self.obj._conn, 'test1') self.assertTrue(result.scrollable) def test_get_named_cursor_sets_withhold(self): result = self.obj._get_cursor(self.obj._conn, 'test2') self.assertTrue(result.withhhold) @unittest.skipUnless(utils.PYPY, 'connection.reset is PYPY only behavior') def test_connection_reset_in_pypy(self): self.conn.reset.assert_called_once_with() <MSG> Merge pull request #38 from tanveerg/master Add ability to override autocommit option for the session. <DFF> @@ -165,7 +165,7 @@ class SessionTestCase(unittest.TestCase): def test_autocommit_sets_attribute(self): self.conn.autocommit = False - self.obj._autocommit() + self.obj._autocommit(True) self.assertTrue(self.conn.autocommit) def test_cleanup_closes_cursor(self):
1
Merge pull request #38 from tanveerg/master
1
.py
py
bsd-3-clause
gmr/queries
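The `<DFF>` block in the record above swaps `self.obj._autocommit()` for `self.obj._autocommit(True)`, matching the commit subject about making autocommit overridable. A hypothetical, self-contained sketch of the behavior that test exercises — the `Session` stand-in here is invented for illustration and is not the library's real class:

```python
from unittest import mock

class Session:
    """Invented stand-in for queries.Session, reduced to the one
    method the diff above exercises."""

    def __init__(self, conn):
        self._conn = conn

    def _autocommit(self, autocommit):
        # Forward the explicit flag to the underlying
        # psycopg2-style connection, as the patched test expects.
        self._conn.autocommit = autocommit

conn = mock.Mock()
conn.autocommit = False          # mimic the test fixture's initial state
Session(conn)._autocommit(True)
print(conn.autocommit)           # → True
```

The `mock.Mock()` stands in for a psycopg2 connection, so — exactly as in the record's test — the assertion only checks that the flag was forwarded, not that a real transaction mode changed.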
1231
<NME> README.rst <BEF> Queries: PostgreSQL Simplified ============================== *Queries* is a BSD licensed opinionated wrapper of the psycopg2_ library for interacting with PostgreSQL. The popular psycopg2_ package is a full-featured python client. Unfortunately as a developer, you're often repeating the same steps to get started with your applications that use it. Queries aims to reduce the complexity of psycopg2 while adding additional features to make writing PostgreSQL client applications both fast and easy. Check out the `Usage`_ section below to see how easy it can be. Key features include: - Simplified API - Support of Python 2.7+ and 3.4+ - PyPy support via psycopg2cffi_ - Asynchronous support for Tornado_ - Connection information provided by URI - Query results delivered as a generator based iterators - Automatically registered data-type support for UUIDs, Unicode and Unicode Arrays - Ability to directly access psycopg2 ``connection`` and ``cursor`` objects - Internal connection pooling |Version| |Status| |Coverage| |License| Documentation ------------- Documentation is available at https://queries.readthedocs.org Installation ------------ Queries is available via pypi_ and can be installed with easy_install or pip: .. code:: bash pip install queries Usage ----- Queries provides a session based API for interacting with PostgreSQL. Simply pass in the URI_ of the PostgreSQL server to connect to when creating a session: .. code:: python session = queries.Session("postgresql://postgres@localhost:5432/postgres") Queries built-in connection pooling will re-use connections when possible, lowering the overhead of connecting and reconnecting. When specifying a URI, if you omit the username and database name to connect with, Queries will use the current OS username for both. You can also omit the URI when connecting to connect to localhost on port 5432 as the current OS user, connecting to a database named for the current user. 
For example, if your username is ``fred`` and you omit the URI when issuing ``queries.query`` the URI that is constructed would be ``postgresql://fred@localhost:5432/fred``. If you'd rather use individual values for the connection, the queries.uri() method provides a quick and easy way to create a URI to pass into the various methods. .. code:: python >>> queries.uri("server-name", 5432, "dbname", "user", "pass") 'postgresql://user:pass@server-name:5432/dbname' Environment Variables ^^^^^^^^^^^^^^^^^^^^^ Currently Queries uses the following environment variables for tweaking various configuration values. The supported ones are: In addition to both the ``queries.Session.query`` and ``queries.Session.callproc`` methods that are similar to the simple API methods, the ``queries.Session`` class provides access to the psycopg2 connection and cursor objects. It also provides methods for managing transactions and to the `LISTEN/NOTIFY <http://www.postgresql.org/docs/9.3/static/sql-listen.html>`_ functionality provided by PostgreSQL. **Using queries.Session.query** more information on the ``with`` keyword and context managers, see PEP343_. In addition to both the ``queries.Session.query`` and ``queries.Session.callproc`` methods that are similar to the simple API methods, the ``queries.Session`` class provides access to the psycopg2 connection and cursor objects. **Using queries.Session.query** The following example shows how a ``queries.Session`` object can be used as a context manager to query the database table: .. code:: python >>> import pprint >>> import queries >>> >>> with queries.Session() as session: ... for row in session.query('SELECT * FROM names'): ... pprint.pprint(row) ... {'id': 1, 'name': u'Jacob'} {'id': 2, 'name': u'Mason'} {'id': 3, 'name': u'Ethan'} **Using queries.Session.callproc** This example uses ``queries.Session.callproc`` to execute a stored procedure and then pretty-prints the single row results as a dictionary: .. 
code:: python >>> import pprint >>> import queries >>> with queries.Session() as session: ... results = session.callproc('chr', [65]) ... pprint.pprint(results.as_dict()) ... {'chr': u'A'} **Asynchronous Queries with Tornado** In addition to providing a Pythonic, synchronous client API for PostgreSQL, Queries provides a very similar asynchronous API for use with Tornado. The only major difference API difference between ``queries.TornadoSession`` and ``queries.Session`` is the ``TornadoSession.query`` and ``TornadoSession.callproc`` methods return the entire result set instead of acting as an iterator over the results. The following example uses ``TornadoSession.query`` in an asynchronous Tornado_ web application to send a JSON payload with the query result set. .. code:: python from tornado import gen, ioloop, web import queries class MainHandler(web.RequestHandler): def initialize(self): self.session = queries.TornadoSession() @gen.coroutine def get(self): results = yield self.session.query('SELECT * FROM names') self.finish({'data': results.items()}) results.free() application = web.Application([ (r"/", MainHandler), ]) if __name__ == "__main__": application.listen(8888) ioloop.IOLoop.instance().start() Inspiration ----------- Queries is inspired by `Kenneth Reitz's <https://github.com/kennethreitz/>`_ awesome work on `requests <http://docs.python-requests.org/en/latest/>`_. History ------- Queries is a fork and enhancement of pgsql_wrapper_, which can be found in the main GitHub repository of Queries as tags prior to version 1.2.0. .. _pypi: https://pypi.python.org/pypi/queries .. _psycopg2: https://pypi.python.org/pypi/psycopg2 .. _documentation: https://queries.readthedocs.org .. _URI: http://www.postgresql.org/docs/9.3/static/libpq-connect.html#LIBPQ-CONNSTRING .. _pgsql_wrapper: https://pypi.python.org/pypi/pgsql_wrapper .. _Tornado: http://tornadoweb.org .. _PEP343: http://legacy.python.org/dev/peps/pep-0343/ :target: https://pypi.python.org/pypi/queries .. 
|Coverage| image:: https://img.shields.io/coveralls/gmr/queries.svg? :target: https://coveralls.io/r/gmr/queries .. |Status| image:: https://img.shields.io/travis/gmr/queries.svg? :target: https://travis-ci.org/gmr/queries .. |Coverage| image:: https://img.shields.io/codecov/c/github/gmr/queries.svg? :target: https://codecov.io/github/gmr/queries?branch=master .. |License| image:: https://img.shields.io/github/license/gmr/queries.svg? :target: https://github.com/gmr/queries <MSG> Remove no longer accurate info about LISTEN/NOTIFY [skip ci] <DFF> @@ -74,10 +74,7 @@ more information on the ``with`` keyword and context managers, see PEP343_. In addition to both the ``queries.Session.query`` and ``queries.Session.callproc`` methods that are similar to the simple API methods, the ``queries.Session`` class -provides access to the psycopg2 connection and cursor objects. It also provides -methods for managing transactions and to the -`LISTEN/NOTIFY <http://www.postgresql.org/docs/9.3/static/sql-listen.html>`_ -functionality provided by PostgreSQL. +provides access to the psycopg2 connection and cursor objects. **Using queries.Session.query** @@ -174,4 +171,4 @@ main GitHub repository of Queries as tags prior to version 1.2.0. :target: https://pypi.python.org/pypi/queries .. |Coverage| image:: https://img.shields.io/coveralls/gmr/queries.svg? - :target: https://coveralls.io/r/gmr/queries \ No newline at end of file + :target: https://coveralls.io/r/gmr/queries
2
Remove no longer accurate info about LISTEN/NOTIFY [skip ci]
5
.rst
rst
bsd-3-clause
gmr/queries
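Each record's `<DFF>` field is a unified diff whose hunks open with headers such as `@@ -74,10 +74,7 @@` (old start line, old length, new start line, new length; a length defaults to 1 when omitted). A small sketch of parsing that header format, useful when post-processing these rows:

```python
import re

# Unified-diff hunk header: "@@ -a,b +c,d @@" with optional lengths.
HUNK_RE = re.compile(r'^@@ -(\d+)(?:,(\d+))? \+(\d+)(?:,(\d+))? @@')

def parse_hunk_header(line):
    """Return (old_start, old_len, new_start, new_len) for a hunk header."""
    m = HUNK_RE.match(line)
    if not m:
        raise ValueError('not a hunk header: %r' % line)
    old_start, old_len, new_start, new_len = m.groups()
    return (int(old_start), int(old_len or 1),
            int(new_start), int(new_len or 1))

print(parse_hunk_header('@@ -74,10 +74,7 @@'))  # → (74, 10, 74, 7)
```

The header from the README record above is consistent: its first hunk removes 5 lines and adds 2, taking a 10-line span down to 7.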
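The `addition_count` and `deletion_count` columns attached to each row can be recomputed from its `<DFF>` field by counting `+`/`-` content lines while skipping hunk headers and `+++`/`---` file headers. A sketch, run against a diff body abridged from the README record above (which reports 2 additions and 5 deletions):

```python
# Diff body abridged from the README record's <DFF> field above.
DIFF = """\
@@ -74,10 +74,7 @@ more information on the ``with`` keyword
 methods, the ``queries.Session`` class
-provides access to the psycopg2 connection and cursor objects. It also provides
-methods for managing transactions and to the
-`LISTEN/NOTIFY <http://www.postgresql.org/docs/9.3/static/sql-listen.html>`_
-functionality provided by PostgreSQL.
+provides access to the psycopg2 connection and cursor objects.
@@ -174,4 +171,4 @@ main GitHub repository of Queries as tags
 .. |Coverage| image:: https://img.shields.io/coveralls/gmr/queries.svg?
-    :target: https://coveralls.io/r/gmr/queries
\\ No newline at end of file
+    :target: https://coveralls.io/r/gmr/queries
"""

def count_changes(diff_text):
    """Count (additions, deletions) in a unified diff body."""
    additions = deletions = 0
    for line in diff_text.splitlines():
        if line.startswith(('+++', '---')):
            continue  # file-name headers, not content changes
        if line.startswith('+'):
            additions += 1
        elif line.startswith('-'):
            deletions += 1
    return additions, deletions

print(count_changes(DIFF))  # → (2, 5), matching the record's counts
```

Context lines (leading space), hunk headers (`@@`), and the `\ No newline at end of file` marker all fall through both branches, so only real content changes are tallied.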
1233
<NME> tornado_session.py <BEF> """ Tornado Session Adapter Use Queries asynchronously within the Tornado framework. Example Use: .. code:: python class NameListHandler(web.RequestHandler): def initialize(self): self.session = queries.TornadoSession(pool_max_size=60) @gen.coroutine def get(self): data = yield self.session.query('SELECT * FROM names') if data: self.finish({'names': data.items()}) data.free() else: self.set_status(500, 'Error querying the data') """ import logging import socket import warnings from tornado import concurrent, ioloop from psycopg2 import extras, extensions import psycopg2 from queries import pool, results, session, utils LOGGER = logging.getLogger(__name__) DEFAULT_MAX_POOL_SIZE = 25 class Results(results.Results): """A TornadoSession specific :py:class:`queries.Results` class that adds the :py:meth:`Results.free <queries.tornado_session.Results.free>` method. The :py:meth:`Results.free <queries.tornado_session.Results.free>` method **must** be called to free the connection that the results were generated on. `Results` objects that are not freed will cause the connections to remain locked and your application will eventually run out of connections in the pool. The following examples illustrate the various behaviors that the ::py:class:`queries.Results <queries.tornado_session.Requests>` class implements: **Using Results as an Iterator** .. code:: python results = yield session.query('SELECT * FROM foo') for row in results print row results.free() **Accessing an individual row by index** .. code:: python results = yield session.query('SELECT * FROM foo') print results[1] # Access the second row of the results results.free() **Casting single row results as a dict** .. code:: python results = yield session.query('SELECT * FROM foo LIMIT 1') print results.as_dict() results.free() **Checking to see if a query was successful** .. 
1234
<NME> tornado_session.py <BEF> """ Tornado Session Adapter Use Queries asynchronously within the Tornado framework. Example Use: .. code:: python class NameListHandler(web.RequestHandler): def initialize(self): self.session = queries.TornadoSession(pool_max_size=60) @gen.coroutine def get(self): data = yield self.session.query('SELECT * FROM names') if data: self.finish({'names': data.items()}) data.free() else: self.set_status(500, 'Error querying the data') """ import logging import socket import warnings from tornado import concurrent, ioloop from psycopg2 import extras, extensions import psycopg2 from queries import pool, results, session, utils LOGGER = logging.getLogger(__name__) DEFAULT_MAX_POOL_SIZE = 25 class Results(results.Results): """A TornadoSession specific :py:class:`queries.Results` class that adds the :py:meth:`Results.free <queries.tornado_session.Results.free>` method. The :py:meth:`Results.free <queries.tornado_session.Results.free>` method **must** be called to free the connection that the results were generated on. `Results` objects that are not freed will cause the connections to remain locked and your application will eventually run out of connections in the pool. The following examples illustrate the various behaviors that the ::py:class:`queries.Results <queries.tornado_session.Requests>` class implements: **Using Results as an Iterator** .. code:: python results = yield session.query('SELECT * FROM foo') for row in results print row results.free() **Accessing an individual row by index** .. code:: python results = yield session.query('SELECT * FROM foo') print results[1] # Access the second row of the results results.free() **Casting single row results as a dict** .. code:: python results = yield session.query('SELECT * FROM foo LIMIT 1') print results.as_dict() results.free() **Checking to see if a query was successful** .. 
code:: python sql = "UPDATE foo SET bar='baz' WHERE qux='corgie'" results = yield session.query(sql) if results: print 'Success' results.free() **Checking the number of rows by using len(Results)** .. code:: python results = yield session.query('SELECT * FROM foo') print '%i rows' % len(results) results.free() """ def __init__(self, cursor, cleanup, fd): self.cursor = cursor self._cleanup = cleanup self._fd = fd self._freed = False def free(self): """Release the results and connection lock from the TornadoSession object. This **must** be called after you finish processing the results from :py:meth:`TornadoSession.query <queries.TornadoSession.query>` or :py:meth:`TornadoSession.callproc <queries.TornadoSession.callproc>` or the connection will not be able to be reused by other asynchronous requests. """ self._freed = True self._cleanup(self.cursor, self._fd) def __del__(self): if not self._freed: LOGGER.warning('Auto-freeing result on deletion') self.free() class TornadoSession(session.Session): """Session class for Tornado asynchronous applications. Uses :py:func:`tornado.gen.coroutine` to wrap API methods for use in Tornado. Utilizes connection pooling to ensure that multiple concurrent asynchronous queries do not block each other. Heavily trafficked services will require a higher ``max_pool_size`` to allow for greater connection concurrency. 
:py:meth:`TornadoSession.query <queries.TornadoSession.query>` and :py:meth:`TornadoSession.callproc <queries.TornadoSession.callproc>` must call :py:meth:`Results.free <queries.tornado_session.Results.free>` :param str uri: PostgreSQL connection URI :param psycopg2.extensions.cursor: The cursor type to use :param int pool_idle_ttl: How long idle pools keep connections open :param int pool_max_size: The maximum size of the pool to use """ def __init__(self, uri=session.DEFAULT_URI, cursor_factory=extras.RealDictCursor, pool_idle_ttl=pool.DEFAULT_IDLE_TTL, pool_max_size=DEFAULT_MAX_POOL_SIZE, io_loop=None): """Connect to a PostgreSQL server using the module wide connection and set the isolation level. :param str uri: PostgreSQL connection URI :param psycopg2.extensions.cursor: The cursor type to use :param int pool_idle_ttl: How long idle pools keep connections open :param int pool_max_size: The maximum size of the pool to use :param tornado.ioloop.IOLoop io_loop: IOLoop instance to use """ self._connections = dict() self._cleanup_callback = None self._cursor_factory = cursor_factory self._futures = dict() self._ioloop = io_loop or ioloop.IOLoop.current() self._pool_manager = pool.PoolManager.instance() self._pool_max_size = pool_max_size self._pool_idle_ttl = pool_idle_ttl self._uri = uri self._ensure_pool_exists() def _ensure_pool_exists(self): """Create the pool in the pool manager if it does not exist.""" if self.pid not in self._pool_manager: self._pool_manager.create(self.pid, self._pool_idle_ttl, self._pool_max_size, self._ioloop.time) @property def connection(self): """Do not use this directly with Tornado applications :return: """ return None @property def cursor(self): return None def callproc(self, name, args=None): """Call a stored procedure asynchronously on the server, passing in the arguments to be passed to the stored procedure, yielding the results as a :py:class:`Results <queries.tornado_session.Results>` object. 
You **must** free the results that are returned by this method to unlock the connection used to perform the query. Failure to do so will cause your Tornado application to run out of connections. :param str name: The stored procedure name :param list args: An optional list of procedure arguments :rtype: Results :raises: queries.DataError :raises: queries.DatabaseError :raises: queries.IntegrityError :raises: queries.InternalError :raises: queries.InterfaceError :raises: queries.NotSupportedError :raises: queries.OperationalError :raises: queries.ProgrammingError """ return self._execute('callproc', name, args) def query(self, sql, parameters=None): """Issue a query asynchronously on the server, mogrifying the parameters against the sql statement and yielding the results as a :py:class:`Results <queries.tornado_session.Results>` object. You **must** free the results that are returned by this method to unlock the connection used to perform the query. Failure to do so will cause your Tornado application to run out of connections. :param str sql: The SQL statement :param dict parameters: A dictionary of query parameters :rtype: Results :raises: queries.DataError :raises: queries.DatabaseError :raises: queries.IntegrityError :raises: queries.InternalError :raises: queries.InterfaceError :raises: queries.NotSupportedError :raises: queries.OperationalError :raises: queries.ProgrammingError """ return self._execute('execute', sql, parameters) def validate(self): """Validate the session can connect or has open connections to PostgreSQL. As of ``1.10.3`` .. deprecated:: 1.10.3 As of 1.10.3, this method only warns about Deprecation :rtype: bool """ warnings.warn( 'All functionality removed from this method', DeprecationWarning) def _connect(self): """Connect to PostgreSQL, either by reusing a connection from the pool if possible, or by creating the new connection. 
:rtype: psycopg2.extensions.connection :raises: pool.NoIdleConnectionsError """ future = concurrent.Future() # Attempt to get a cached connection from the connection pool try: connection = self._pool_manager.get(self.pid, self) self._connections[connection.fileno()] = connection future.set_result(connection) # Add the connection to the IOLoop self._ioloop.add_handler(connection.fileno(), self._on_io_events, ioloop.IOLoop.WRITE) except pool.NoIdleConnectionsError: self._create_connection(future) return future def _create_connection(self, future): """Create a new PostgreSQL connection :param tornado.concurrent.Future future: future for new conn result """ LOGGER.debug('Creating a new connection for %s', self.pid) # Create a new PostgreSQL connection kwargs = utils.uri_to_kwargs(self._uri) try: connection = self._psycopg2_connect(kwargs) except (psycopg2.Error, OSError, socket.error) as error: future.set_exception(error) return # Add the connection for use in _poll_connection fd = connection.fileno() self._connections[fd] = connection def on_connected(cf): """Invoked by the IOLoop when the future is complete for the connection :param Future cf: The future for the initial connection """ if cf.exception(): self._cleanup_fd(fd, True) future.set_exception(cf.exception()) else: try: # Add the connection to the pool LOGGER.debug('Connection established for %s', self.pid) self._pool_manager.add(self.pid, connection) except (ValueError, pool.PoolException) as err: LOGGER.exception('Failed to add %r to the pool', self.pid) self._cleanup_fd(fd) future.set_exception(err) return self._pool_manager.lock(self.pid, connection, self) # Added in because psycopg2cffi connects and leaves the # connection in a weird state: consts.STATUS_DATESTYLE, # returning from Connection._setup without setting the state # as const.STATUS_OK if utils.PYPY: connection.status = extensions.STATUS_READY # Register the custom data types self._register_unicode(connection) self._register_uuid(connection) # 
Set the future result future.set_result(connection) # Add a future that fires once connected self._futures[fd] = concurrent.Future() self._ioloop.add_future(self._futures[fd], on_connected) # Add the connection to the IOLoop self._ioloop.add_handler(connection.fileno(), self._on_io_events, ioloop.IOLoop.WRITE) def _execute(self, method, query, parameters=None): """Issue a query asynchronously on the server, mogrifying the parameters against the sql statement and yielding the results as a :py:class:`Results <queries.tornado_session.Results>` object. This function reduces duplicate code for callproc and query by getting the class attribute for the method passed in as the function to call. :param str method: The method attribute to use :param str query: The SQL statement or Stored Procedure name :param list|dict parameters: A dictionary of query parameters :rtype: Results :raises: queries.DataError :raises: queries.DatabaseError :raises: queries.IntegrityError :raises: queries.InternalError :raises: queries.InterfaceError :raises: queries.NotSupportedError :raises: queries.OperationalError :raises: queries.ProgrammingError """ future = concurrent.Future() def on_connected(cf): """Invoked by the future returned by self._connect""" if cf.exception(): future.set_exception(cf.exception()) return # Get the psycopg2 connection object and cursor conn = cf.result() cursor = self._get_cursor(conn) def completed(qf): """Invoked by the IOLoop when the future has completed""" if qf.exception(): self._incr_exceptions(conn) err = qf.exception() LOGGER.debug('Cleaning cursor due to exception: %r', err) self._exec_cleanup(cursor, conn.fileno()) future.set_exception(err) else: self._incr_executions(conn) value = Results(cursor, self._exec_cleanup, conn.fileno()) future.set_result(value) # Setup a callback to wait on the query result self._futures[conn.fileno()] = concurrent.Future() # Add the future to the IOLoop self._ioloop.add_future(self._futures[conn.fileno()], completed) # Get 
the cursor, execute the query func = getattr(cursor, method) try: func(query, parameters) except Exception as error: future.set_exception(error) # Ensure the pool exists for the connection self._ensure_pool_exists() # Grab a connection to PostgreSQL self._ioloop.add_future(self._connect(), on_connected) # Return the future for the query result return future def _exec_cleanup(self, cursor, fd): """Close the cursor, remove any references to the fd in internal state and remove the fd from the ioloop. :param psycopg2.extensions.cursor cursor: The cursor to close :param int fd: The connection file descriptor """ LOGGER.debug('Closing cursor and cleaning %s', fd) cursor.close() self._pool_manager.free(self.pid, self._connections[fd]) self._ioloop.remove_handler(fd) if fd in self._connections: del self._connections[fd] if fd in self._futures: del self._futures[fd] def _incr_exceptions(self, conn): """Increment the number of exceptions for the current connection.
:param psycopg2.extensions.connection conn: the psycopg2 connection """ self._pool_manager.get_connection(self.pid, conn).exceptions += 1 def _incr_executions(self, conn): """Increment the number of executions for the current connection. :param psycopg2.extensions.connection conn: the psycopg2 connection """ self._pool_manager.get_connection(self.pid, conn).executions += 1 def _on_io_events(self, fd=None, _events=None): """Invoked by Tornado's IOLoop when there are events for the fd :param int fd: The file descriptor for the event :param int _events: The events raised """ if fd not in self._connections: LOGGER.warning('Received IO event for non-existing connection') return self._poll_connection(fd) def _poll_connection(self, fd): """Check with psycopg2 to see what action to take. If the state is POLL_OK, we should have a pending callback for that fd. :param int fd: The socket fd for the postgresql connection """ try: state = self._connections[fd].poll() except OSError as error: self._ioloop.remove_handler(fd) if fd in self._futures and not self._futures[fd].done(): self._futures[fd].set_exception( psycopg2.OperationalError('Connection error (%s)' % error) ) except (psycopg2.Error, psycopg2.Warning) as error: if fd in self._futures and not self._futures[fd].done(): self._futures[fd].set_exception(error) else: if state == extensions.POLL_OK: if fd in self._futures and not self._futures[fd].done(): self._futures[fd].set_result(True) elif state == extensions.POLL_WRITE: self._ioloop.update_handler(fd, ioloop.IOLoop.WRITE) elif state == extensions.POLL_READ: self._ioloop.update_handler(fd, ioloop.IOLoop.READ) elif state == extensions.POLL_ERROR: self._ioloop.remove_handler(fd) if fd in self._futures and not self._futures[fd].done(): self._futures[fd].set_exception( psycopg2.Error('Poll Error')) def _psycopg2_connect(self, kwargs): """Return a psycopg2 connection for the specified kwargs. Extend for use in async session adapters.
:param dict kwargs: Keyword connection args :rtype: psycopg2.extensions.connection """ kwargs['async'] = True return psycopg2.connect(**kwargs) <MSG> More exception handling <DFF> @@ -424,7 +424,11 @@ class TornadoSession(session.Session): """ LOGGER.debug('Closing cursor and cleaning %s', fd) - cursor.close() + try: + cursor.close() + except (psycopg2.Error, psycopg2.Warning) as error: + LOGGER.debug('Error closing the cursor: %s', error) + self._pool_manager.free(self.pid, self._connections[fd]) self._ioloop.remove_handler(fd) @@ -454,7 +458,7 @@ class TornadoSession(session.Session): """ try: state = self._connections[fd].poll() - except OSError as error: + except (OSError, socket.error) as error: self._ioloop.remove_handler(fd) if fd in self._futures and not self._futures[fd].done(): self._futures[fd].set_exception(
6
More exception handling
2
.py
py
bsd-3-clause
gmr/queries
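The "More exception handling" diff in the record above wraps `cursor.close()` in a try/except so that a failing close cannot abort the pool and IOLoop cleanup that follows it. A minimal, self-contained sketch of that pattern (the exception class and callbacks are illustrative stand-ins for the psycopg2 and Tornado objects, so it runs without a database):

```python
import logging

LOGGER = logging.getLogger(__name__)


class FakeCursorError(Exception):
    """Stand-in for psycopg2.Error / psycopg2.Warning (illustrative only)."""


class FailingCursor:
    """A cursor whose close() always raises, to exercise the guard."""

    def close(self):
        raise FakeCursorError('cursor already closed')


def exec_cleanup(cursor, free_connection, remove_handler, fd):
    """Close the cursor, then always run the pool/IOLoop cleanup steps."""
    try:
        cursor.close()
    except FakeCursorError as error:
        # A failed close is logged, not raised, so cleanup still completes
        LOGGER.debug('Error closing the cursor: %s', error)
    free_connection(fd)
    remove_handler(fd)


freed, removed = [], []
exec_cleanup(FailingCursor(), freed.append, removed.append, fd=7)
print(freed, removed)  # both cleanup steps ran despite the close failure
```

Without the guard, the first raising `close()` would leave the connection locked in the pool and its fd registered with the IOLoop.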
1235
<NME> utils.py
<BEF> """
Utility functions for access to OS level info and URI parsing

"""
import collections
import getpass
import logging
import os
import platform
try:
    import pwd
except ImportError:
    pwd = None
try:
    import urllib.parse as _urlparse
except ImportError:
    import urlparse as _urlparse

Parsed = collections.namedtuple('Parsed',
                                'scheme,netloc,path,params,query,fragment,'
                                'username,password,hostname,port')

DEFAULT_HOSTNAME = 'localhost'
DEFAULT_PORT = 5432
DEFAULT_DBNAME = 'postgres'
DEFAULT_USERNAME = 'postgres'


def get_current_user():
    """Return the current username for the logged in user

    :rtype: str

    """
    if pwd is None:
        return getpass.getuser()
    else:
        return pwd.getpwuid(os.getuid())[0]


def parse_qs(query_string):
    return _urlparse.parse_qs(query_string)


def uri_to_kwargs(uri):
    """Return a URI as kwargs for connecting to PostgreSQL with psycopg2,
    applying default values for non-specified areas of the URI.

    :param str uri: The connection URI
    :rtype: dict

    """
    parsed = urlparse(uri)
    default_user = get_current_user()
    return {'host': parsed.hostname or DEFAULT_HOSTNAME,
            'port': parsed.port or DEFAULT_PORT,
            'dbname': parsed.path[1:] or default_user,
            'user': parsed.username or default_user,
            'password': parsed.password}


def urlparse(url):
    value = 'http%s' % url[5:] if url[:5] == 'pgsql' else url
    parsed = _urlparse.urlparse(value)
    return Parsed(parsed.scheme.replace('http', 'pgsql'), parsed.netloc,
                  parsed.path, parsed.params, parsed.query, parsed.fragment,
                  parsed.username, parsed.password, parsed.hostname,
                  parsed.port)
logged in user @@ -29,6 +49,12 @@ def get_current_user(): def parse_qs(query_string): + """Return the parsed query string in a python2/3 agnostic fashion + + :param str query_string: The URI query string + :rtype: dict + + """ return _urlparse.parse_qs(query_string) @@ -42,17 +68,29 @@ def uri_to_kwargs(uri): """ parsed = urlparse(uri) default_user = get_current_user() - return {'host': parsed.hostname or DEFAULT_HOSTNAME, - 'port': parsed.port or DEFAULT_PORT, - 'dbname': parsed.path[1:] or default_user, - 'user': parsed.username or default_user, - 'password': parsed.password} + kwargs = {'host': parsed.hostname or DEFAULT_HOSTNAME, + 'port': parsed.port or DEFAULT_PORT, + 'dbname': parsed.path[1:] or default_user, + 'user': parsed.username or default_user, + 'password': parsed.password} + values = parse_qs(parsed.query) + for k in [k for k in values if k in KEYWORDS]: + kwargs[k] = values[k][0] if len(values[k]) == 1 else values[k] + if kwargs[k].isdigit(): + kwargs[k] = int(kwargs[k]) + return kwargs def urlparse(url): + """Parse the URL in a Python2/3 independent fashion. + + :param str url: The URL to parse + :rtype: Parsed + + """ value = 'http%s' % url[5:] if url[:5] == 'pgsql' else url parsed = _urlparse.urlparse(value) - return Parsed(parsed.scheme.replace('http', 'pgsql'), parsed.netloc, + return PARSED(parsed.scheme.replace('http', 'pgsql'), parsed.netloc, parsed.path, parsed.params, parsed.query, parsed.fragment, parsed.username, parsed.password, parsed.hostname, - parsed.port) \ No newline at end of file + parsed.port)
48
Add supported query string keyword values
10
.py
py
bsd-3-clause
gmr/queries
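The utils.py diff above introduces a `KEYWORDS` whitelist so that libpq options in the URI query string (``connect_timeout``, ``sslmode``, and so on) are passed through as connection kwargs, with numeric values coerced to ints. A self-contained sketch of that behavior using only Python 3's standard library (the keyword subset and defaults here are illustrative, not the actual queries API; Python 3's ``urlparse`` splits the netloc for any scheme, so the http scheme swap from the original is unnecessary):

```python
from urllib.parse import urlparse, parse_qs

# Illustrative subset of the libpq keywords the diff above whitelists
KEYWORDS = ['connect_timeout', 'client_encoding', 'application_name',
            'sslmode', 'keepalives', 'service']


def uri_to_kwargs(uri):
    """Turn a PostgreSQL URI into psycopg2-style connection kwargs."""
    parsed = urlparse(uri)
    kwargs = {'host': parsed.hostname or 'localhost',
              'port': parsed.port or 5432,
              'dbname': parsed.path[1:],
              'user': parsed.username,
              'password': parsed.password}
    for key, values in parse_qs(parsed.query).items():
        if key in KEYWORDS:
            value = values[0] if len(values) == 1 else values
            # Coerce purely numeric values (e.g. connect_timeout=10) to int
            if isinstance(value, str) and value.isdigit():
                value = int(value)
            kwargs[key] = value
    return kwargs


print(uri_to_kwargs(
    'postgresql://foo:bar@db.example.com:5433/prod'
    '?connect_timeout=10&sslmode=require'))
```

Whitelisting the keywords keeps arbitrary query-string noise out of the kwargs handed to ``psycopg2.connect``.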
1237
<NME> history.rst <BEF> Version History =============== - Next Release - Log a warning when a tornado_session.Result is ``__del__'d`` without ``free`` being called. - 1.9.1 2016-10-25 - Add better exception handling around connections and getting the logged in user - 1.9.0 2016-07-01 ----------------- - REMOVED support for Python 2.6 - FIXED CPU Pegging bug: Cleanup IOLoop and internal stack in ``TornadoSession`` on connection error. In the case of a connection error, the failure to do this caused CPU to peg @ 100% utilization looping on a non-existent file descriptor. Thanks to `cknave <https://github.com/cknave>`_ for his work on identifying the issue, proposing a fix, and writing a working test case. - Move the integration tests to use a local docker development environment - Added new methods ``queries.pool.Pool.report`` and ``queries.pool.PoolManager.Report`` for reporting pool status. - Added new methods to ``queries.pool.Pool`` for returning a list of busy, closed, executing, and locked connections. 1.10.4 2018-01-10 ----------------- - Implement ``Results.__bool__`` to be explicit about Python 3 support. - Catch any exception raised when using TornadoSession and invoking the execute function in psycopg2 for exceptions raised prior to sending the query to Postgres. This could be psycopg2.Error, IndexError, KeyError, or who knows, it's not documented in psycopg2. 1.10.3 2017-11-01 ----------------- - Remove the functionality from ``TornadoSession.validate`` and make it raise a ``DeprecationWarning`` - Catch the ``KeyError`` raised when ``PoolManager.clean()`` is invoked for a pool that doesn't exist 1.10.2 2017-10-26 ----------------- - Ensure the pool exists when executing a query in TornadoSession, the new timeout behavior prevented that from happening. 
1.10.1 2017-10-24 ----------------- - Use an absolute time in the call to ``add_timeout`` 1.10.0 2017-09-27 ----------------- - Free when tornado_session.Result is ``__del__``'d without ``free`` being called. - Auto-clean the pool after Results.free TTL+1 in tornado_session.TornadoSession - Don't raise NotImplementedError in Results.free for synchronous use, just treat as a noop 1.9.1 2016-10-25 ---------------- - Add better exception handling around connections and getting the logged in user 1.9.0 2016-07-01 ---------------- - Handle a potential race condition in TornadoSession when too many simultaneous new connections are made and a pool fills up - Increase logging in various places to be more informative - Restructure queries specific exceptions to all extend off of a base QueriesException - Trivial code cleanup 1.8.10 2016-06-14 ----------------- - Propagate PoolManager exceptions from TornadoSession (#20) - Fix by Dave Shawley 1.8.9 2015-11-11 ---------------- - Move to psycopg2cffi for PyPy support 1.7.5 2015-09-03 ---------------- - Don't let Session and TornadoSession share connections 1.7.1 2015-03-25 ---------------- - Fix TornadoSession's use of cleanup (#8) - Fix by Oren Itamar 1.7.0 2015-01-13 ---------------- - Implement :py:meth:`Pool.shutdown <queries.pool.Pool.shutdown>` and :py:meth:`PoolManager.shutdown <queries.pool.PoolManager.shutdown>` to cleanly shutdown all open, non-executing connections across a Pool or all pools. Update locks in Pool operations to ensure atomicity. 
1.6.1 2015-01-09 ---------------- - Fixes an iteration error when closing a pool (#7) - Fix by Chris McGuire 1.6.0 2014-11-20 ----------------- - Handle URI encoded password values properly 1.5.0 2014-10-07 ---------------- - Handle empty query results in the iterator (#4) - Fix by Den Teresh 1.4.0 2014-09-04 ---------------- - Address exception handling in tornado_session <MSG> Update doc history <DFF> @@ -1,7 +1,9 @@ Version History =============== - Next Release - - Log a warning when a tornado_session.Result is ``__del__'d`` without ``free`` being called. + - Free when tornado_session.Result is ``__del__'d`` without ``free`` being called. + - Auto-clean the pool after Results.free TTL+1 in tornado_session.TornadoSession + - Dont raise NotImplementedError in Results.free for synchronous use, just treat as a noop - 1.9.1 2016-10-25 - Add better exception handling around connections and getting the logged in user - 1.9.0 2016-07-01
3
Update doc history
1
.rst
rst
bsd-3-clause
gmr/queries
1239
<NME> session_tests.py <BEF> """ Tests for the core Queries class """ import mock import logging import unittest import mock from psycopg2 import extras import psycopg2 from psycopg2 import extensions from queries import core from queries import pool from queries import PYPY class PostgresTests(unittest.TestCase): @mock.patch('psycopg2.connect') @mock.patch('psycopg2.extensions.register_type') @mock.patch('psycopg2.connect') @mock.patch('psycopg2.extensions.register_type') @mock.patch('psycopg2.extras.register_uuid') @mock.patch('queries.utils.uri_to_kwargs') def setUp(self, uri_to_kwargs, register_uuid, register_type, connect): self.conn = mock.Mock() self.uri = 'pgsql://[email protected]:5432/queries' if self.uri in pool.CONNECTIONS: del pool.CONNECTIONS[self.uri] self.client = core.Postgres(self.uri) def test_psycopg2_connection_invoked(self): """Ensure that psycopg2.connect was invoked""" self.psycopg2_connect.return_value = self.conn self.psycopg2_register_type = register_type self.psycopg2_register_uuid = register_uuid self.uri_to_kwargs = uri_to_kwargs self.uri_to_kwargs.return_value = {'host': 'localhost', 'port': 5432, 'user': 'foo', 'password': 'bar', 'dbname': 'foo'} self.obj = session.Session(self.URI, pool_max_size=100) def test_init_sets_uri(self): self.assertEqual(self.obj._uri, self.URI) def test_init_creates_new_pool(self): self.assertIn(self.obj.pid, self.obj._pool_manager) self.assertTrue(self.client._conn.autocommit) def test_connection_property(self): """Test value of Postgres.connection property""" self.assertEqual(self.client._conn, self.client.connection) def test_cursor_property(self): """Test value of Postgres.connection property""" self.assertEqual(self.client._cursor, self.client.cursor) def test_connection_added_to_cache(self): self.conn.cursor.assert_called_once_with( name=None, cursor_factory=extras.RealDictCursor) def test_init_sets_autocommit(self): self.assertTrue(self.conn.autocommit) self.client._conn) def 
test_cleanup_removes_client_from_cache(self): """Ensure that Postgres._cleanup frees the client in the cache""" value = pool.CONNECTIONS[self.uri]['clients'] self.client._cleanup() self.assertEqual(pool.CONNECTIONS[self.uri]['clients'], value - 1) args = ('foo', ['bar', 'baz']) self.obj.callproc(*args) self.obj._cursor.callproc.assert_called_once_with(*args) def test_callproc_returns_results(self): @unittest.skipIf(PYPY, 'Not invoked in PyPy') def test_del_invokes_cleanup(self): """Deleting Postgres instance invokes Postgres._cleanup""" with mock.patch('queries.core.Postgres._cleanup') as cleanup: del self.client cleanup.assert_called_once_with() def test_close_removes_connection(self): @mock.patch('psycopg2.extras.register_uuid') def test_context_manager_creation(self, _reg_uuid, _reg_type): """Ensure context manager returns self""" with core.Postgres(self.uri) as conn: self.assertIsInstance(conn, core.Postgres) @mock.patch('psycopg2.extensions.register_type') @mock.patch('psycopg2.extras.register_uuid') def test_context_manager_cleanup(self, _reg_uuid, _reg_type): """Ensure context manager cleans up after self""" with mock.patch('queries.core.Postgres._cleanup') as cleanup: with core.Postgres(self.uri): pass cleanup.assert_called_with() def test_cursor_property_returns_correct_value(self): self.assertEqual(self.obj.cursor, self.obj._cursor) def test_close_removes_from_cache(self, _reg_uuid, _reg_type, _connect): """Ensure connection removed from cache on close""" uri = 'pgsql://foo@bar:9999/baz' pgsql = core.Postgres(uri) self.assertIn(uri, pool.CONNECTIONS) pgsql.close() self.assertNotIn(uri, pool.CONNECTIONS) def test_pid_value(self): expectation = hashlib.md5( @mock.patch('psycopg2.extras.register_uuid') def test_close_invokes_connection_close(self, _reg_uuid, _reg_type, connect): """Ensure close calls connection.close""" conn = core.Postgres('pgsql://foo@bar:9999/baz') close_mock = mock.Mock() conn._conn.close = close_mock conn.close() close_mock 
.assert_called_once_with() @mock.patch('psycopg2.connect') @mock.patch('psycopg2.extensions.register_type') @mock.patch('psycopg2.extras.register_uuid') def test_close_sets_conn_to_none(self, _reg_uuid, _reg_type, connect): """Ensure Postgres._conn is None after close""" conn = core.Postgres('pgsql://foo@bar:9999/baz') conn.close() self.assertIsNone(conn._conn) @mock.patch('psycopg2.connect') @mock.patch('psycopg2.extensions.register_type') @mock.patch('psycopg2.extras.register_uuid') def test_close_sets_cursor_to_none(self, _reg_uuid, _reg_type, connect): """Ensure Postgres._cursor is None after close""" conn = core.Postgres('pgsql://foo@bar:9999/baz') conn.close() self.assertIsNone(conn._cursor) @mock.patch('psycopg2.connect') @mock.patch('psycopg2.extensions.register_type') @mock.patch('psycopg2.extras.register_uuid') def test_close_raises_when_closed(self, _reg_uuid, _reg_type, _conn): """Ensure Postgres._cursor is None after close""" conn = core.Postgres('pgsql://foo@bar:9999/baz') conn.close() self.assertRaises(AssertionError, conn.close) def test_exit_invokes_cleanup(self): cleanup = mock.Mock() with mock.patch.multiple('queries.session.Session', _cleanup=cleanup, _connect=mock.Mock(), _get_cursor=mock.Mock(), _autocommit=mock.Mock()): with session.Session(self.URI): pass self.assertTrue(cleanup.called) def test_autocommit_sets_attribute(self): self.conn.autocommit = False self.obj._autocommit(True) self.assertTrue(self.conn.autocommit) def test_cleanup_closes_cursor(self): self.obj._cursor.close = closeit = mock.Mock() self.conn = None self.obj._cleanup() closeit.assert_called_once_with() def test_cleanup_sets_cursor_to_none(self): self.obj._cursor.close = mock.Mock() self.conn = None self.obj._cleanup() self.assertIsNone(self.obj._cursor) def test_cleanup_frees_connection(self): with mock.patch.object(self.obj._pool_manager, 'free') as free: conn = self.obj._conn self.obj._cleanup() free.assert_called_once_with(self.obj.pid, conn) def 
test_cleanup_sets_connect_to_none(self): self.obj._cleanup() self.assertIsNone(self.obj._conn) def test_connect_invokes_pool_manager_get(self): with mock.patch.object(self.obj._pool_manager, 'get') as get: self.obj._connect() get.assert_called_once_with(self.obj.pid, self.obj) def test_connect_raises_noidleconnectionserror(self): with mock.patch.object(self.obj._pool_manager, 'get') as get: with mock.patch.object(self.obj._pool_manager, 'is_full') as full: get.side_effect = pool.NoIdleConnectionsError(self.obj.pid) full.return_value = True self.assertRaises(pool.NoIdleConnectionsError, self.obj._connect) def test_connect_invokes_uri_to_kwargs(self): self.uri_to_kwargs.assert_called_once_with(self.URI) def test_connect_returned_the_proper_value(self): self.assertEqual(self.obj.connection, self.conn) def test_status_is_ready_by_default(self): self.assertEqual(self.obj._status, self.obj.READY) def test_status_when_not_ready(self): self.conn.status = self.obj.SETUP self.assertEqual(self.obj._status, self.obj.SETUP) def test_get_named_cursor_sets_scrollable(self): result = self.obj._get_cursor(self.obj._conn, 'test1') self.assertTrue(result.scrollable) def test_get_named_cursor_sets_withhold(self): result = self.obj._get_cursor(self.obj._conn, 'test2') self.assertTrue(result.withhhold) @unittest.skipUnless(utils.PYPY, 'connection.reset is PYPY only behavior') def test_connection_reset_in_pypy(self): self.conn.reset.assert_called_once_with() <MSG> Refactor core.Postgres -> session.Session <DFF> @@ -1,5 +1,5 @@ """ -Tests for the core Queries class +Tests for the session.Session class """ import mock @@ -11,12 +11,12 @@ except ImportError: import psycopg2 from psycopg2 import extensions -from queries import core +from queries import session from queries import pool from queries import PYPY -class PostgresTests(unittest.TestCase): +class SessionTests(unittest.TestCase): @mock.patch('psycopg2.connect') @mock.patch('psycopg2.extensions.register_type') @@ -29,7 +29,7 @@ class 
PostgresTests(unittest.TestCase): self.uri = 'pgsql://[email protected]:5432/queries' if self.uri in pool.CONNECTIONS: del pool.CONNECTIONS[self.uri] - self.client = core.Postgres(self.uri) + self.client = session.Session(self.uri) def test_psycopg2_connection_invoked(self): """Ensure that psycopg2.connect was invoked""" @@ -54,11 +54,11 @@ class PostgresTests(unittest.TestCase): self.assertTrue(self.client._conn.autocommit) def test_connection_property(self): - """Test value of Postgres.connection property""" + """Test value of Session.connection property""" self.assertEqual(self.client._conn, self.client.connection) def test_cursor_property(self): - """Test value of Postgres.connection property""" + """Test value of Session.connection property""" self.assertEqual(self.client._cursor, self.client.cursor) def test_connection_added_to_cache(self): @@ -71,7 +71,7 @@ class PostgresTests(unittest.TestCase): self.client._conn) def test_cleanup_removes_client_from_cache(self): - """Ensure that Postgres._cleanup frees the client in the cache""" + """Ensure that Session._cleanup frees the client in the cache""" value = pool.CONNECTIONS[self.uri]['clients'] self.client._cleanup() self.assertEqual(pool.CONNECTIONS[self.uri]['clients'], value - 1) @@ -83,8 +83,8 @@ class PostgresTests(unittest.TestCase): @unittest.skipIf(PYPY, 'Not invoked in PyPy') def test_del_invokes_cleanup(self): - """Deleting Postgres instance invokes Postgres._cleanup""" - with mock.patch('queries.core.Postgres._cleanup') as cleanup: + """Deleting Session instance invokes Session._cleanup""" + with mock.patch('queries.session.Session._cleanup') as cleanup: del self.client cleanup.assert_called_once_with() @@ -92,15 +92,15 @@ class PostgresTests(unittest.TestCase): @mock.patch('psycopg2.extras.register_uuid') def test_context_manager_creation(self, _reg_uuid, _reg_type): """Ensure context manager returns self""" - with core.Postgres(self.uri) as conn: - self.assertIsInstance(conn, core.Postgres) + with 
session.Session(self.uri) as conn: + self.assertIsInstance(conn, session.Session) @mock.patch('psycopg2.extensions.register_type') @mock.patch('psycopg2.extras.register_uuid') def test_context_manager_cleanup(self, _reg_uuid, _reg_type): """Ensure context manager cleans up after self""" - with mock.patch('queries.core.Postgres._cleanup') as cleanup: - with core.Postgres(self.uri): + with mock.patch('queries.session.Session._cleanup') as cleanup: + with session.Session(self.uri): pass cleanup.assert_called_with() @@ -110,7 +110,7 @@ class PostgresTests(unittest.TestCase): def test_close_removes_from_cache(self, _reg_uuid, _reg_type, _connect): """Ensure connection removed from cache on close""" uri = 'pgsql://foo@bar:9999/baz' - pgsql = core.Postgres(uri) + pgsql = session.Session(uri) self.assertIn(uri, pool.CONNECTIONS) pgsql.close() self.assertNotIn(uri, pool.CONNECTIONS) @@ -120,35 +120,35 @@ class PostgresTests(unittest.TestCase): @mock.patch('psycopg2.extras.register_uuid') def test_close_invokes_connection_close(self, _reg_uuid, _reg_type, connect): """Ensure close calls connection.close""" - conn = core.Postgres('pgsql://foo@bar:9999/baz') + sess = session.Session('pgsql://foo@bar:9999/baz') close_mock = mock.Mock() - conn._conn.close = close_mock - conn.close() + sess._conn.close = close_mock + sess.close() close_mock .assert_called_once_with() @mock.patch('psycopg2.connect') @mock.patch('psycopg2.extensions.register_type') @mock.patch('psycopg2.extras.register_uuid') def test_close_sets_conn_to_none(self, _reg_uuid, _reg_type, connect): - """Ensure Postgres._conn is None after close""" - conn = core.Postgres('pgsql://foo@bar:9999/baz') - conn.close() - self.assertIsNone(conn._conn) + """Ensure Session._conn is None after close""" + sess = session.Session('pgsql://foo@bar:9999/baz') + sess.close() + self.assertIsNone(sess._conn) @mock.patch('psycopg2.connect') @mock.patch('psycopg2.extensions.register_type') @mock.patch('psycopg2.extras.register_uuid') def 
test_close_sets_cursor_to_none(self, _reg_uuid, _reg_type, connect): - """Ensure Postgres._cursor is None after close""" - conn = core.Postgres('pgsql://foo@bar:9999/baz') - conn.close() - self.assertIsNone(conn._cursor) + """Ensure Session._cursor is None after close""" + sess = session.Session('pgsql://foo@bar:9999/baz') + sess.close() + self.assertIsNone(sess._cursor) @mock.patch('psycopg2.connect') @mock.patch('psycopg2.extensions.register_type') @mock.patch('psycopg2.extras.register_uuid') def test_close_raises_when_closed(self, _reg_uuid, _reg_type, _conn): - """Ensure Postgres._cursor is None after close""" - conn = core.Postgres('pgsql://foo@bar:9999/baz') - conn.close() - self.assertRaises(AssertionError, conn.close) + """Ensure Session._cursor is None after close""" + sess = session.Session('pgsql://foo@bar:9999/baz') + sess.close() + self.assertRaises(AssertionError, sess.close)
29
Refactor core.Postgres -> session.Session
29
.py
py
bsd-3-clause
gmr/queries
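The record above verifies the Session context-manager contract with `mock.patch`. A minimal, self-contained sketch of that testing pattern — the `Session` stand-in below is invented for illustration and is not the library's implementation; only the patch-and-assert idiom mirrors the dataset's tests:

```python
import unittest
from unittest import mock


class Session(object):
    """Toy stand-in mirroring the context-manager contract under test."""

    def __init__(self, uri):
        self._uri = uri

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc_val, exc_tb):
        self._cleanup()

    def _cleanup(self):
        """Free pooled resources; patched out in the test below."""


class ContextManagerTests(unittest.TestCase):

    def test_context_manager_returns_self(self):
        # Entering the context manager should yield the session itself
        with Session('pgsql://foo@bar:9999/baz') as sess:
            self.assertIsInstance(sess, Session)

    def test_context_manager_invokes_cleanup(self):
        # Patch the class attribute so the exit path hits the mock
        with mock.patch.object(Session, '_cleanup') as cleanup:
            with Session('pgsql://foo@bar:9999/baz'):
                pass
            cleanup.assert_called_once_with()
```

Patching the class attribute (rather than an instance) is what lets `__exit__`'s `self._cleanup()` call land on the mock, which is the same trick the record's `mock.patch('queries.session.Session._cleanup')` relies on.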
session.Session(self.uri) as conn: + self.assertIsInstance(conn, session.Session) @mock.patch('psycopg2.extensions.register_type') @mock.patch('psycopg2.extras.register_uuid') def test_context_manager_cleanup(self, _reg_uuid, _reg_type): """Ensure context manager cleans up after self""" - with mock.patch('queries.core.Postgres._cleanup') as cleanup: - with core.Postgres(self.uri): + with mock.patch('queries.session.Session._cleanup') as cleanup: + with session.Session(self.uri): pass cleanup.assert_called_with() @@ -110,7 +110,7 @@ class PostgresTests(unittest.TestCase): def test_close_removes_from_cache(self, _reg_uuid, _reg_type, _connect): """Ensure connection removed from cache on close""" uri = 'pgsql://foo@bar:9999/baz' - pgsql = core.Postgres(uri) + pgsql = session.Session(uri) self.assertIn(uri, pool.CONNECTIONS) pgsql.close() self.assertNotIn(uri, pool.CONNECTIONS) @@ -120,35 +120,35 @@ class PostgresTests(unittest.TestCase): @mock.patch('psycopg2.extras.register_uuid') def test_close_invokes_connection_close(self, _reg_uuid, _reg_type, connect): """Ensure close calls connection.close""" - conn = core.Postgres('pgsql://foo@bar:9999/baz') + sess = session.Session('pgsql://foo@bar:9999/baz') close_mock = mock.Mock() - conn._conn.close = close_mock - conn.close() + sess._conn.close = close_mock + sess.close() close_mock .assert_called_once_with() @mock.patch('psycopg2.connect') @mock.patch('psycopg2.extensions.register_type') @mock.patch('psycopg2.extras.register_uuid') def test_close_sets_conn_to_none(self, _reg_uuid, _reg_type, connect): - """Ensure Postgres._conn is None after close""" - conn = core.Postgres('pgsql://foo@bar:9999/baz') - conn.close() - self.assertIsNone(conn._conn) + """Ensure Session._conn is None after close""" + sess = session.Session('pgsql://foo@bar:9999/baz') + sess.close() + self.assertIsNone(sess._conn) @mock.patch('psycopg2.connect') @mock.patch('psycopg2.extensions.register_type') @mock.patch('psycopg2.extras.register_uuid') def 
test_close_sets_cursor_to_none(self, _reg_uuid, _reg_type, connect): - """Ensure Postgres._cursor is None after close""" - conn = core.Postgres('pgsql://foo@bar:9999/baz') - conn.close() - self.assertIsNone(conn._cursor) + """Ensure Session._cursor is None after close""" + sess = session.Session('pgsql://foo@bar:9999/baz') + sess.close() + self.assertIsNone(sess._cursor) @mock.patch('psycopg2.connect') @mock.patch('psycopg2.extensions.register_type') @mock.patch('psycopg2.extras.register_uuid') def test_close_raises_when_closed(self, _reg_uuid, _reg_type, _conn): - """Ensure Postgres._cursor is None after close""" - conn = core.Postgres('pgsql://foo@bar:9999/baz') - conn.close() - self.assertRaises(AssertionError, conn.close) + """Ensure Session._cursor is None after close""" + sess = session.Session('pgsql://foo@bar:9999/baz') + sess.close() + self.assertRaises(AssertionError, sess.close)
29
Refactor core.Postgres -> session.Session
29
.py
py
bsd-3-clause
gmr/queries
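The record above refactors tests that lean heavily on `unittest.mock` patching and call assertions (`mock.patch('queries.session.Session._cleanup')`, `cleanup.assert_called_once_with()`). A minimal stdlib sketch of that same pattern, using a hypothetical `Client` class rather than the real `queries.Session`:

```python
from unittest import mock


class Client:
    """Hypothetical stand-in for queries.Session."""

    def close(self):
        self._cleanup()

    def _cleanup(self):
        self._conn = None


# Patch _cleanup on the class so we can assert that close() delegates
# to it, mirroring test_del_invokes_cleanup in the record above.
with mock.patch.object(Client, '_cleanup') as cleanup:
    Client().close()
    cleanup.assert_called_once_with()
print(cleanup.call_count)  # → 1
```

The same `assert_called_once_with` idiom is what the refactored tests use to verify the `Session` wiring without ever touching a real PostgreSQL connection.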
1241
<NME> __init__.py <BEF> """ Queries: PostgreSQL database access simplified Queries is an opinionated wrapper for interfacing with PostgreSQL that offers caching of connections and support for PyPy via psycopg2ct. The core `queries.Queries` class will automatically register support for UUIDs, Unicode and Unicode arrays. """ import logging import sys try: import psycopg2cffi import psycopg2cffi.extras import psycopg2cffi.extensions except ImportError: pass else: sys.modules['psycopg2'] = psycopg2cffi sys.modules['psycopg2.extras'] = psycopg2cffi.extras sys.modules['psycopg2.extensions'] = psycopg2cffi.extensions from queries.results import Results from queries.session import Session try: from queries.tornado_session import TornadoSession except ImportError: # pragma: nocover TornadoSession = None from queries.utils import uri # For ease of access to different cursor types from psycopg2.extras import DictCursor from psycopg2.extras import NamedTupleCursor from psycopg2.extras import RealDictCursor from psycopg2.extras import LoggingCursor from psycopg2.extras import MinTimeLoggingCursor # Expose exceptions so clients do not need to import psycopg2 too from psycopg2 import Warning from psycopg2 import Error from psycopg2 import DataError # Mappings to queries classes and methods from queries.session import Session from queries.simple import callproc from queries.simple import query from queries.simple import uri # For ease of access to different cursor types __version__ = '2.1.0' version = __version__ # Add a Null logging handler to prevent logging output when un-configured logging.getLogger('queries').addHandler(logging.NullHandler()) <MSG> Add query_all and callproc_all to queries.simple <DFF> @@ -44,7 +44,9 @@ DEFAULT_URI = 'pgsql://localhost:5432' # Mappings to queries classes and methods from queries.session import Session from queries.simple import callproc +from queries.simple import callproc_all from queries.simple import query +from queries.simple import 
query_all from queries.simple import uri # For ease of access to different cursor types
2
Add query_all and callproc_all to queries.simple
0
.py
py
bsd-3-clause
gmr/queries
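The diff in this record only shows the new imports being re-exported from `queries.simple`. Assuming `query_all` mirrors `query` but materializes the full result set instead of yielding rows lazily (the generator-vs-list distinction the package draws elsewhere), the idea can be sketched in plain Python — the row data and SQL here are invented:

```python
def query(sql):
    """Stand-in for the generator-based query: yields rows lazily."""
    for row in ({'id': 1, 'name': 'Jacob'}, {'id': 2, 'name': 'Mason'}):
        yield row


def query_all(sql):
    """Hypothetical *_all variant: drain the iterator into a list."""
    return list(query(sql))


print(query_all('SELECT * FROM names'))
# → [{'id': 1, 'name': 'Jacob'}, {'id': 2, 'name': 'Mason'}]
```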
1243
<NME> __init__.py <BEF> """ Queries: PostgreSQL database access simplified Queries is an opinionated wrapper for interfacing with PostgreSQL that offers caching of connections and support for PyPy via psycopg2ct. The core `queries.Queries` class will automatically register support for UUIDs, Unicode and Unicode arrays. """ import logging import sys try: import psycopg2cffi import psycopg2cffi.extras import psycopg2cffi.extensions except ImportError: pass else: sys.modules['psycopg2'] = psycopg2cffi sys.modules['psycopg2.extras'] = psycopg2cffi.extras sys.modules['psycopg2.extensions'] = psycopg2cffi.extensions from queries.results import Results from queries.session import Session try: from queries.tornado_session import TornadoSession except ImportError: # pragma: nocover TornadoSession = None from queries.utils import uri # For ease of access to different cursor types from psycopg2.extras import DictCursor from psycopg2.extras import NamedTupleCursor from psycopg2.extras import RealDictCursor from psycopg2.extras import LoggingCursor from psycopg2.extras import MinTimeLoggingCursor # Expose exceptions so clients do not need to import psycopg2 too from psycopg2 import Warning from psycopg2 import Error from psycopg2 import DataError from psycopg2 import DatabaseError from psycopg2 import IntegrityError from psycopg2 import InterfaceError from psycopg2 import InternalError from psycopg2 import NotSupportedError from psycopg2 import OperationalError from psycopg2 import ProgrammingError from psycopg2.extensions import QueryCanceledError from psycopg2.extensions import TransactionRollbackError __version__ = '2.1.0' version = __version__ # Add a Null logging handler to prevent logging output when un-configured logging.getLogger('queries').addHandler(logging.NullHandler()) from psycopg2.extras import DictCursor from psycopg2.extras import NamedTupleCursor from psycopg2.extras import RealDictCursor # Expose exceptions so clients do not need to import psycopg2 too from 
psycopg2 import DataError <MSG> Add additional cursor types <DFF> @@ -72,6 +72,8 @@ def uri(host='localhost', port='5432', dbname='postgres', user='postgres', from psycopg2.extras import DictCursor from psycopg2.extras import NamedTupleCursor from psycopg2.extras import RealDictCursor +from psycopg2.extras import LoggingCursor +from psycopg2.extras import MinTimeLoggingCursor # Expose exceptions so clients do not need to import psycopg2 too from psycopg2 import DataError
2
Add additional cursor types
0
.py
py
bsd-3-clause
gmr/queries
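The commit above re-exports two more of psycopg2's cursor classes (`LoggingCursor`, `MinTimeLoggingCursor`). What a cursor factory such as `DictCursor` fundamentally changes is the row shape returned to the caller; a rough stdlib illustration of tuple rows versus dict rows (the column names and data here are invented, and this is only an analogy, not psycopg2's implementation):

```python
def dict_rows(columns, rows):
    """Roughly what a dict-style cursor does: pair column names with values."""
    return [dict(zip(columns, row)) for row in rows]


columns = ('id', 'name')
rows = [(1, 'Jacob'), (2, 'Mason')]
print(dict_rows(columns, rows))
# → [{'id': 1, 'name': 'Jacob'}, {'id': 2, 'name': 'Mason'}]
```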
1245
<NME> .gitignore <BEF> .DS_Store .idea *.pyc build dist *.egg-info atlassian-ide-plugin.xml docs/_build env <MSG> Ignore .coverage <DFF> @@ -6,3 +6,4 @@ dist *.egg-info atlassian-ide-plugin.xml docs/_build +.coverage
1
Ignore .coverage
0
gitignore
bsd-3-clause
gmr/queries
1247
<NME> README.rst <BEF> Queries: PostgreSQL Simplified ============================== *Queries* is a BSD licensed opinionated wrapper of the psycopg2_ library for interacting with PostgreSQL. The popular psycopg2_ package is a full-featured python client. Unfortunately as a developer, you're often repeating the same steps to get started with your applications that use it. Queries aims to reduce the complexity of psycopg2 while adding additional features to make writing PostgreSQL client applications both fast and easy. Check out the `Usage`_ section below to see how easy it can be. Key features include: - Simplified API - Support of Python 2.7+ and 3.4+ - PyPy support via psycopg2cffi_ - Asynchronous support for Tornado_ - Connection information provided by URI - Query results delivered as a generator based iterators - Automatically registered data-type support for UUIDs, Unicode and Unicode Arrays - Ability to directly access psycopg2 ``connection`` and ``cursor`` objects - Internal connection pooling |Version| |Status| |Coverage| |License| Documentation ------------- Documentation is available at https://queries.readthedocs.org Installation ------------ Queries is available via pypi_ and can be installed with easy_install or pip: .. code:: bash pip install queries Usage ----- Queries provides a session based API for interacting with PostgreSQL. Simply pass in the URI_ of the PostgreSQL server to connect to when creating a session: .. code:: python session = queries.Session("postgresql://postgres@localhost:5432/postgres") Queries built-in connection pooling will re-use connections when possible, lowering the overhead of connecting and reconnecting. When specifying a URI, if you omit the username and database name to connect with, Queries will use the current OS username for both. You can also omit the URI when connecting to connect to localhost on port 5432 as the current OS user, connecting to a database named for the current user. 
For example, if your username is ``fred`` and you omit the URI when issuing ``queries.query`` the URI that is constructed would be ``postgresql://fred@localhost:5432/fred``. If you'd rather use individual values for the connection, the queries.uri() method provides a quick and easy way to create a URI to pass into the various methods. .. code:: python >>> queries.uri("server-name", 5432, "dbname", "user", "pass") 'postgresql://user:pass@server-name:5432/dbname' Environment Variables ^^^^^^^^^^^^^^^^^^^^^ Currently Queries uses the following environment variables for tweaking various configuration values. The supported ones are: * ``QUERIES_MAX_POOL_SIZE`` - Modify the maximum size of the connection pool (default: 1) Using the queries.Session class ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ To execute queries or call stored procedures, you start by creating an instance of the ``queries.Session`` class. It can act as a context manager, meaning you can use it with the ``with`` keyword and it will take care of cleaning up after itself. For **Using queries.Session.query** The following example shows how a :py:class:`queries.Session` object can be used as a context manager to query the database table: .. code:: python The following example shows how a ``queries.Session`` object can be used as a context manager to query the database table: .. code:: python >>> import pprint >>> import queries >>> >>> with queries.Session() as session: ... for row in session.query('SELECT * FROM names'): ... pprint.pprint(row) ... {'id': 1, 'name': u'Jacob'} {'id': 2, 'name': u'Mason'} {'id': 3, 'name': u'Ethan'} **Using queries.Session.callproc** This example uses ``queries.Session.callproc`` to execute a stored procedure and then pretty-prints the single row results as a dictionary: .. code:: python >>> import pprint >>> import queries >>> with queries.Session() as session: ... results = session.callproc('chr', [65]) ... pprint.pprint(results.as_dict()) ... 
{'chr': u'A'} **Asynchronous Queries with Tornado** In addition to providing a Pythonic, synchronous client API for PostgreSQL, Queries provides a very similar asynchronous API for use with Tornado. The only major difference API difference between ``queries.TornadoSession`` and ``queries.Session`` is the ``TornadoSession.query`` and ``TornadoSession.callproc`` methods return the entire result set instead of acting as an iterator over the results. The following example uses ``TornadoSession.query`` in an asynchronous Tornado_ web application to send a JSON payload with the query result set. .. code:: python from tornado import gen, ioloop, web import queries class MainHandler(web.RequestHandler): def initialize(self): self.session = queries.TornadoSession() @gen.coroutine def get(self): results = yield self.session.query('SELECT * FROM names') self.finish({'data': results.items()}) results.free() application = web.Application([ (r"/", MainHandler), ]) if __name__ == "__main__": application.listen(8888) ioloop.IOLoop.instance().start() Inspiration ----------- Queries is inspired by `Kenneth Reitz's <https://github.com/kennethreitz/>`_ awesome work on `requests <http://docs.python-requests.org/en/latest/>`_. History ------- Queries is a fork and enhancement of pgsql_wrapper_, which can be found in the main GitHub repository of Queries as tags prior to version 1.2.0. .. _pypi: https://pypi.python.org/pypi/queries .. _psycopg2: https://pypi.python.org/pypi/psycopg2 .. _documentation: https://queries.readthedocs.org .. _URI: http://www.postgresql.org/docs/9.3/static/libpq-connect.html#LIBPQ-CONNSTRING .. _pgsql_wrapper: https://pypi.python.org/pypi/pgsql_wrapper .. _Tornado: http://tornadoweb.org .. _PEP343: http://legacy.python.org/dev/peps/pep-0343/ .. _psycopg2cffi: https://pypi.python.org/pypi/psycopg2cffi .. |Version| image:: https://img.shields.io/pypi/v/queries.svg? :target: https://pypi.python.org/pypi/queries .. 
|Status| image:: https://img.shields.io/travis/gmr/queries.svg? :target: https://travis-ci.org/gmr/queries .. |Coverage| image:: https://img.shields.io/codecov/c/github/gmr/queries.svg? :target: https://codecov.io/github/gmr/queries?branch=master .. |License| image:: https://img.shields.io/github/license/gmr/queries.svg? :target: https://github.com/gmr/queries <MSG> Additional README updates <DFF> @@ -81,7 +81,7 @@ functionality provided by PostgreSQL. **Using queries.Session.query** -The following example shows how a :py:class:`queries.Session` object can be used +The following example shows how a ``queries.Session`` object can be used as a context manager to query the database table: .. code:: python
1
Additional README updates
1
.rst
rst
bsd-3-clause
gmr/queries
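The README in this record shows `queries.uri("server-name", 5432, "dbname", "user", "pass")` returning `'postgresql://user:pass@server-name:5432/dbname'`. A hedged re-implementation of that helper, reconstructed only from the example output and the defaults the README describes (the real `queries.uri` may differ in signature and edge cases):

```python
def uri(host='localhost', port=5432, dbname='postgres', user='postgres',
        password=None):
    """Sketch of queries.uri() inferred from the README example."""
    auth = user if password is None else '{}:{}'.format(user, password)
    return 'postgresql://{}@{}:{}/{}'.format(auth, host, port, dbname)


print(uri('server-name', 5432, 'dbname', 'user', 'pass'))
# → postgresql://user:pass@server-name:5432/dbname
```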
1249
<NME> tornado_session.py <BEF> """ Tornado Session Adapter Use Queries asynchronously within the Tornado framework. Example Use: .. code:: python class NameListHandler(web.RequestHandler): def initialize(self): self.session = queries.TornadoSession(pool_max_size=60) @gen.coroutine def get(self): data = yield self.session.query('SELECT * FROM names') self.finish({'data': data}) """ import logging """ import logging import socket import warnings from tornado import concurrent, ioloop from psycopg2 import extras, extensions import psycopg2 from queries import pool, results, session, utils LOGGER = logging.getLogger(__name__) LOGGER = logging.getLogger(__name__) class Results(results.Results): """Class that is created for each query that allows for the use of query results... """ def __init__(self, cursor, cleanup, fd): self._cleanup = cleanup self._fd = fd self._fd = fd @gen.coroutine def release(self): yield self._cleanup(self.cursor, self._fd) class TornadoSession(session.Session): LOGGER.warning('Auto-freeing result on deletion') self.free() queries do not block each other. Heavily trafficked services will require a higher ``max_pool_size`` to allow for greater connection concurrency. .. Note:: Unlike :py:meth:`Session.query <queries.Session.query>` and :py:meth:`Session.callproc <queries.Session.callproc>`, the :py:meth:`TornadoSession.query <queries.TornadoSession.query>` and :py:meth:`TornadoSession.callproc <queries.TornadoSession.callproc>` methods are not iterators and will return the full result set using :py:meth:`cursor.fetchall`. 
:param str uri: PostgreSQL connection URI :param psycopg2.extensions.cursor: The cursor type to use :py:meth:`TornadoSession.callproc <queries.TornadoSession.callproc>` must call :py:meth:`Results.free <queries.tornado_session.Results.free>` :param str uri: PostgreSQL connection URI :param psycopg2.extensions.cursor: The cursor type to use :param int pool_idle_ttl: How long idle pools keep connections open :param int pool_max_size: The maximum size of the pool to use """ def __init__(self, uri=session.DEFAULT_URI, cursor_factory=extras.RealDictCursor, pool_idle_ttl=pool.DEFAULT_IDLE_TTL, pool_max_size=DEFAULT_MAX_POOL_SIZE, io_loop=None): """Connect to a PostgreSQL server using the module wide connection and set the isolation level. :param str uri: PostgreSQL connection URI :param psycopg2.extensions.cursor: The cursor type to use :param int pool_idle_ttl: How long idle pools keep connections open :param int pool_max_size: The maximum size of the pool to use :param tornado.ioloop.IOLoop io_loop: IOLoop instance to use """ self._connections = dict() self._cleanup_callback = None self._cursor_factory = cursor_factory self._futures = dict() self._ioloop = io_loop or ioloop.IOLoop.current() self._pool_manager = pool.PoolManager.instance() self._pool_max_size = pool_max_size @gen.coroutine def callproc(self, name, args=None): """Call a stored procedure asynchronously on the server, passing in the arguments to be passed to the stored procedure, returning the results as a tuple of row count and result set. 
:param str name: The stored procedure name :param list args: An optional list of procedure arguments :return tuple: int, list :raises: queries.DataError :raises: queries.DatabaseError :raises: queries.IntegrityError return None @property def cursor(self): return None def callproc(self, name, args=None): """Call a stored procedure asynchronously on the server, passing in the arguments to be passed to the stored procedure, yielding the results as a :py:class:`Results <queries.tornado_session.Results>` object. You **must** free the results that are returned by this method to unlock the connection used to perform the query. Failure to do so will cause your Tornado application to run out of connections. :param str name: The stored procedure name :param list args: An optional list of procedure arguments :rtype: Results :raises: queries.DataError :raises: queries.DatabaseError :raises: queries.IntegrityError :raises: queries.InternalError :raises: queries.InterfaceError :raises: queries.NotSupportedError :raises: queries.OperationalError :raises: queries.ProgrammingError """ return self._execute('callproc', name, args) def query(self, sql, parameters=None): """Issue a query asynchronously on the server, mogrifying the parameters against the sql statement and yielding the results as a :py:class:`Results <queries.tornado_session.Results>` object. You **must** free the results that are returned by this method to unlock the connection used to perform the query. Failure to do so will cause your Tornado application to run out of connections. 
:param str sql: The SQL statement :param dict parameters: A dictionary of query parameters :rtype: Results :raises: queries.DataError :raises: queries.DatabaseError :raises: queries.IntegrityError :raises: queries.InternalError :raises: queries.InterfaceError :raises: queries.NotSupportedError :raises: queries.OperationalError :raises: queries.ProgrammingError """ return self._execute('execute', sql, parameters) def validate(self): """Validate the session can connect or has open connections to PostgreSQL. As of ``1.10.3`` .. deprecated:: 1.10.3 As of 1.10.3, this method only warns about Deprecation :rtype: bool """ warnings.warn( @gen.coroutine def query(self, sql, parameters=None): """Issue a query asynchronously on the server, mogrifying the parameters against the sql statement and yielding the results as a tuple of row count and result set. :param str sql: The SQL statement :param dict parameters: A dictionary of query parameters :return tuple: int, list :raises: queries.DataError :raises: queries.DatabaseError :raises: queries.IntegrityError future.set_result(connection) # Add the connection to the IOLoop self._ioloop.add_handler(connection.fileno(), self._on_io_events, ioloop.IOLoop.WRITE) except pool.NoIdleConnectionsError: self._create_connection(future) return future def _create_connection(self, future): """Create a new PostgreSQL connection :param tornado.concurrent.Future future: future for new conn result """ LOGGER.debug('Creating a new connection for %s', self.pid) # Create a new PostgreSQL connection kwargs = utils.uri_to_kwargs(self._uri) try: connection = self._psycopg2_connect(kwargs) except (psycopg2.Error, OSError, socket.error) as error: future.set_exception(error) return # Add the connection for use in _poll_connection fd = connection.fileno() self._connections[fd] = connection def on_connected(cf): """Invoked by the IOLoop when the future is complete for the connection :param Future cf: The future for the initial connection """ if 
cf.exception(): self._cleanup_fd(fd, True) future.set_exception(cf.exception()) else: try: # Add the connection to the pool LOGGER.debug('Connection established for %s', self.pid) self._pool_manager.add(self.pid, connection) except (ValueError, pool.PoolException) as err: LOGGER.exception('Failed to add %r to the pool', self.pid) self._cleanup_fd(fd) future.set_exception(err) return self._pool_manager.lock(self.pid, connection, self) # Added in because psycopg2cffi connects and leaves the # connection in a weird state: consts.STATUS_DATESTYLE, # returning from Connection._setup without setting the state # as const.STATUS_OK if utils.PYPY: connection.status = extensions.STATUS_READY # Register the custom data types self._register_unicode(connection) self._register_uuid(connection) # Set the future result future.set_result(connection) # Add a future that fires once connected self._futures[fd] = concurrent.Future() self._ioloop.add_future(self._futures[fd], on_connected) # Add the connection to the IOLoop self._ioloop.add_handler(connection.fileno(), self._on_io_events, ioloop.IOLoop.WRITE) def _execute(self, method, query, parameters=None): """Issue a query asynchronously on the server, mogrifying the parameters against the sql statement and yielding the results as a :py:class:`Results <queries.tornado_session.Results>` object. This function reduces duplicate code for callproc and query by getting the class attribute for the method passed in as the function to call. 
:param str method: The method attribute to use :param str query: The SQL statement or Stored Procedure name :param list|dict parameters: A dictionary of query parameters :rtype: Results :raises: queries.DataError :raises: queries.DatabaseError :raises: queries.IntegrityError :raises: queries.InternalError :raises: queries.InterfaceError :raises: queries.NotSupportedError :raises: queries.OperationalError :raises: queries.ProgrammingError """ future = concurrent.Future() def on_connected(cf): """Invoked by the future returned by self._connect""" if cf.exception(): future.set_exception(cf.exception()) return # Get the psycopg2 connection object and cursor conn = cf.result() cursor = self._get_cursor(conn) def completed(qf): """Invoked by the IOLoop when the future has completed""" if qf.exception(): self._incr_exceptions(conn) err = qf.exception() LOGGER.debug('Cleaning cursor due to exception: %r', err) self._exec_cleanup(cursor, conn.fileno()) future.set_exception(err) else: self._incr_executions(conn) value = Results(cursor, self._exec_cleanup, conn.fileno()) future.set_result(value) # Setup a callback to wait on the query result self._futures[conn.fileno()] = concurrent.Future() # Add the future to the IOLoop self._ioloop.add_future(self._futures[conn.fileno()], completed) # Get the cursor, execute the query func = getattr(cursor, method) try: func(query, parameters) except Exception as error: future.set_exception(error) # Ensure the pool exists for the connection self._ensure_pool_exists() # Grab a connection to PostgreSQL self._ioloop.add_future(self._connect(), on_connected) # Return the future for the query result return future def _exec_cleanup(self, cursor, fd): """Close the cursor, remove any references to the fd in internal state and remove the fd from the ioloop. 
:param psycopg2.extensions.cursor cursor: The cursor to close :param int fd: The connection file descriptor """ LOGGER.debug('Closing cursor and cleaning %s', fd) try: cursor.close() except (psycopg2.Error, psycopg2.Warning) as error: LOGGER.debug('Error closing the cursor: %s', error) self._cleanup_fd(fd) # If the cleanup callback exists, remove it if self._cleanup_callback: self._ioloop.remove_timeout(self._cleanup_callback) # Create a new cleanup callback to clean the pool of idle connections self._cleanup_callback = self._ioloop.add_timeout( self._ioloop.time() + self._pool_idle_ttl + 1, self._pool_manager.clean, self.pid) def _cleanup_fd(self, fd, close=False): """Ensure the socket socket is removed from the IOLoop, the @property def connection(self): """The connection property is not supported in :py:class:`~queries.TornadoSession`. :rtype: None """ return None @property def cursor(self): """The cursor property is not supported in :py:class:`~queries.TornadoSession`. :rtype: None """ return None def _psycopg2_connect(self, kwargs): pass if close: self._connections[fd].close() del self._connections[fd] if fd in self._futures: del self._futures[fd] def _incr_exceptions(self, conn): """Increment the number of exceptions for the current connection. :param psycopg2.extensions.connection conn: the psycopg2 connection """ self._pool_manager.get_connection(self.pid, conn).exceptions += 1 def _incr_executions(self, conn): """Increment the number of executions for the current connection. 
:param psycopg2.extensions.connection conn: the psycopg2 connection """ self._pool_manager.get_connection(self.pid, conn).executions += 1 def _on_io_events(self, fd=None, _events=None): """Invoked by Tornado's IOLoop when there are events for the fd :param int fd: The file descriptor for the event :param int _events: The events raised """ if fd not in self._connections: LOGGER.warning('Received IO event for non-existing connection') return self._poll_connection(fd) def _poll_connection(self, fd): """Check with psycopg2 to see what action to take. If the state is POLL_OK, we should have a pending callback for that fd. :param int fd: The socket fd for the postgresql connection """ try: state = self._connections[fd].poll() except (OSError, socket.error) as error: self._ioloop.remove_handler(fd) if fd in self._futures and not self._futures[fd].done(): self._futures[fd].set_exception( psycopg2.OperationalError('Connection error (%s)' % error) ) except (psycopg2.Error, psycopg2.Warning) as error: if fd in self._futures and not self._futures[fd].done(): self._futures[fd].set_exception(error) else: if state == extensions.POLL_OK: if fd in self._futures and not self._futures[fd].done(): self._futures[fd].set_result(True) elif state == extensions.POLL_WRITE: self._ioloop.update_handler(fd, ioloop.IOLoop.WRITE) elif state == extensions.POLL_READ: self._ioloop.update_handler(fd, ioloop.IOLoop.READ) elif state == extensions.POLL_ERROR: self._ioloop.remove_handler(fd) if fd in self._futures and not self._futures[fd].done(): self._futures[fd].set_exception( psycopg2.Error('Poll Error')) def _psycopg2_connect(self, kwargs): """Return a psycopg2 connection for the specified kwargs. Extend for use in async session adapters. 
:param dict kwargs: Keyword connection args :rtype: psycopg2.extensions.connection """ kwargs['async'] = True return psycopg2.connect(**kwargs) <MSG> Documentation updates <DFF> @@ -14,8 +14,9 @@ Example Use: @gen.coroutine def get(self): - data = yield self.session.query('SELECT * FROM names') - self.finish({'data': data}) + results = yield self.session.query('SELECT * FROM names') + self.finish({'data': results.items()}) + results.free() """ import logging @@ -36,9 +37,62 @@ from queries import PYPY LOGGER = logging.getLogger(__name__) + class Results(results.Results): - """Class that is created for each query that allows for the use of query - results... + """A TornadoSession specific :py:class:`queries.Results` class that adds + the :py:meth:`Results.free <queries.tornado_session.Results.free>` method. + The :py:meth:`Results.free <queries.tornado_session.Results.free>` method + **must** be called to free the connection that the results were generated + on. `Results` objects that are not freed will cause the connections to + remain locked and your application will eventually run out of connections + in the pool. + + The following examples illustrate the various behaviors that the + ::py:class:`queries.Results <queries.tornado_session.Requests>` class + implements: + + **Using Results as an Iterator** + + .. code:: python + + results = yield session.query('SELECT * FROM foo') + for row in results + print row + results.free() + + **Accessing an individual row by index** + + .. code:: python + + results = yield session.query('SELECT * FROM foo') + print results[1] # Access the second row of the results + results.free() + + **Casting single row results as a dict** + + .. code:: python + + results = yield session.query('SELECT * FROM foo LIMIT 1') + print results.as_dict() + results.free() + + **Checking to see if a query was successful** + + .. 
code:: python + + sql = "UPDATE foo SET bar='baz' WHERE qux='corgie'" + results = yield session.query(sql) + if results: + print 'Success' + results.free() + + **Checking the number of rows by using len(Results)** + + .. code:: python + + results = yield session.query('SELECT * FROM foo') + print '%i rows' % len(results) + results.free() """ def __init__(self, cursor, cleanup, fd): @@ -47,9 +101,16 @@ class Results(results.Results): self._fd = fd @gen.coroutine - def release(self): - yield self._cleanup(self.cursor, self._fd) + def free(self): + """Release the results and connection lock from the TornadoSession + object. This **must** be called after you finish processing the results + from :py:meth:`TornadoSession.query <queries.TornadoSession.query>` or + :py:meth:`TornadoSession.callproc <queries.TornadoSession.callproc>` + or the connection will not be able to be reused by other asynchronous + requests. + """ + yield self._cleanup(self.cursor, self._fd) class TornadoSession(session.Session): @@ -60,12 +121,9 @@ class TornadoSession(session.Session): queries do not block each other. Heavily trafficked services will require a higher ``max_pool_size`` to allow for greater connection concurrency. - .. Note:: Unlike :py:meth:`Session.query <queries.Session.query>` and - :py:meth:`Session.callproc <queries.Session.callproc>`, the - :py:meth:`TornadoSession.query <queries.TornadoSession.query>` and - :py:meth:`TornadoSession.callproc <queries.TornadoSession.callproc>` - methods are not iterators and will return the full result set using - :py:meth:`cursor.fetchall`. 
+ :py:meth:`TornadoSession.query <queries.TornadoSession.query>` and + :py:meth:`TornadoSession.callproc <queries.TornadoSession.callproc>` must + call :py:meth:`Results.free <queries.tornado_session.Results.free>` :param str uri: PostgreSQL connection URI :param psycopg2.extensions.cursor: The cursor type to use @@ -103,12 +161,16 @@ class TornadoSession(session.Session): @gen.coroutine def callproc(self, name, args=None): """Call a stored procedure asynchronously on the server, passing in the - arguments to be passed to the stored procedure, returning the results - as a tuple of row count and result set. + arguments to be passed to the stored procedure, yielding the results + as a :py:class:`Results <queries.tornado_session.Results>` object. + + You **must** free the results that are returned by this method to + unlock the connection used to perform the query. Failure to do so + will cause your Tornado application to run out of connections. :param str name: The stored procedure name :param list args: An optional list of procedure arguments - :return tuple: int, list + :rtype: Results :raises: queries.DataError :raises: queries.DatabaseError :raises: queries.IntegrityError @@ -180,12 +242,16 @@ class TornadoSession(session.Session): @gen.coroutine def query(self, sql, parameters=None): """Issue a query asynchronously on the server, mogrifying the - parameters against the sql statement and yielding the results as a - tuple of row count and result set. + parameters against the sql statement and yielding the results + as a :py:class:`Results <queries.tornado_session.Results>` object. + + You **must** free the results that are returned by this method to + unlock the connection used to perform the query. Failure to do so + will cause your Tornado application to run out of connections. 
:param str sql: The SQL statement :param dict parameters: A dictionary of query parameters - :return tuple: int, list + :rtype: Results :raises: queries.DataError :raises: queries.DatabaseError :raises: queries.IntegrityError @@ -369,22 +435,10 @@ class TornadoSession(session.Session): @property def connection(self): - """The connection property is not supported in - :py:class:`~queries.TornadoSession`. - - :rtype: None - - """ return None @property def cursor(self): - """The cursor property is not supported in - :py:class:`~queries.TornadoSession`. - - :rtype: None - - """ return None def _psycopg2_connect(self, kwargs):
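The `_poll_connection` code above drives psycopg2's asynchronous connections by repeatedly calling `poll()` and re-registering the file descriptor with Tornado's IOLoop according to the returned state. A minimal, dependency-free sketch of that polling loop follows; `StubConnection` and the bare state constants are stand-ins for illustration only, not psycopg2's real `extensions.POLL_*` API or a real connection:

```python
# Toy model of the psycopg2 async polling pattern used by _poll_connection.
POLL_OK, POLL_READ, POLL_WRITE = 0, 1, 2


class StubConnection:
    """Stand-in connection: reports WRITE, then READ, then OK,
    like a socket finishing its connection handshake."""

    def __init__(self):
        self._states = [POLL_WRITE, POLL_READ, POLL_OK]

    def poll(self):
        return self._states.pop(0)


def drive(conn):
    """Poll until the connection reports POLL_OK, recording each step."""
    steps = []
    while True:
        state = conn.poll()
        if state == POLL_OK:
            steps.append('ok')        # future would be resolved here
            return steps
        elif state == POLL_WRITE:
            steps.append('wait-write')  # IOLoop would watch for writability
        elif state == POLL_READ:
            steps.append('wait-read')   # IOLoop would watch for readability


print(drive(StubConnection()))  # → ['wait-write', 'wait-read', 'ok']
```

In the real adapter the "wait" branches call `ioloop.update_handler(fd, ...)` and return to the event loop instead of looping synchronously; the stub only shows the state-machine shape.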
addition_count: 84
commit_subject: Documentation updates
deletion_count: 30
file_extension: .py
lang: py
license: bsd-3-clause
repo_name: gmr/queries
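The diff in this record changes `query()` and `callproc()` to yield a `Results` object whose `free()` method must be called to unlock the connection for reuse. A small self-contained sketch of that contract (illustrative names only, not the real `queries.tornado_session` implementation):

```python
# Toy version of the "free the Results to release the connection" contract.
class ToyResults:
    def __init__(self, rows, cleanup):
        self._rows = rows
        self._cleanup = cleanup  # invoked once to release the connection lock
        self.freed = False

    def items(self):
        return list(self._rows)

    def free(self):
        # Idempotent: releasing twice must not double-decrement the pool.
        if not self.freed:
            self._cleanup()
            self.freed = True


locked = {'connections': 1}


def unlock():
    locked['connections'] -= 1


results = ToyResults([{'name': 'foo'}], unlock)
print(results.items())          # consume the rows first
results.free()                  # then release the connection to the pool
print(locked['connections'])    # → 0
```

Forgetting `free()` is exactly the failure mode the diff warns about: each unfreed result keeps one pooled connection locked until the application runs out.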
1250
<NME> tornado_session.py <BEF> """ Tornado Session Adapter Use Queries asynchronously within the Tornado framework. Example Use: .. code:: python class NameListHandler(web.RequestHandler): def initialize(self): self.session = queries.TornadoSession(pool_max_size=60) @gen.coroutine def get(self): data = yield self.session.query('SELECT * FROM names') self.finish({'data': data}) """ import logging """ import logging import socket import warnings from tornado import concurrent, ioloop from psycopg2 import extras, extensions import psycopg2 from queries import pool, results, session, utils LOGGER = logging.getLogger(__name__) LOGGER = logging.getLogger(__name__) class Results(results.Results): """Class that is created for each query that allows for the use of query results... """ def __init__(self, cursor, cleanup, fd): self._cleanup = cleanup self._fd = fd self._fd = fd @gen.coroutine def release(self): yield self._cleanup(self.cursor, self._fd) class TornadoSession(session.Session): LOGGER.warning('Auto-freeing result on deletion') self.free() queries do not block each other. Heavily trafficked services will require a higher ``max_pool_size`` to allow for greater connection concurrency. .. Note:: Unlike :py:meth:`Session.query <queries.Session.query>` and :py:meth:`Session.callproc <queries.Session.callproc>`, the :py:meth:`TornadoSession.query <queries.TornadoSession.query>` and :py:meth:`TornadoSession.callproc <queries.TornadoSession.callproc>` methods are not iterators and will return the full result set using :py:meth:`cursor.fetchall`. 
:param str uri: PostgreSQL connection URI :param psycopg2.extensions.cursor: The cursor type to use :py:meth:`TornadoSession.callproc <queries.TornadoSession.callproc>` must call :py:meth:`Results.free <queries.tornado_session.Results.free>` :param str uri: PostgreSQL connection URI :param psycopg2.extensions.cursor: The cursor type to use :param int pool_idle_ttl: How long idle pools keep connections open :param int pool_max_size: The maximum size of the pool to use """ def __init__(self, uri=session.DEFAULT_URI, cursor_factory=extras.RealDictCursor, pool_idle_ttl=pool.DEFAULT_IDLE_TTL, pool_max_size=DEFAULT_MAX_POOL_SIZE, io_loop=None): """Connect to a PostgreSQL server using the module wide connection and set the isolation level. :param str uri: PostgreSQL connection URI :param psycopg2.extensions.cursor: The cursor type to use :param int pool_idle_ttl: How long idle pools keep connections open :param int pool_max_size: The maximum size of the pool to use :param tornado.ioloop.IOLoop io_loop: IOLoop instance to use """ self._connections = dict() self._cleanup_callback = None self._cursor_factory = cursor_factory self._futures = dict() self._ioloop = io_loop or ioloop.IOLoop.current() self._pool_manager = pool.PoolManager.instance() self._pool_max_size = pool_max_size @gen.coroutine def callproc(self, name, args=None): """Call a stored procedure asynchronously on the server, passing in the arguments to be passed to the stored procedure, returning the results as a tuple of row count and result set. 
:param str name: The stored procedure name :param list args: An optional list of procedure arguments :return tuple: int, list :raises: queries.DataError :raises: queries.DatabaseError :raises: queries.IntegrityError return None @property def cursor(self): return None def callproc(self, name, args=None): """Call a stored procedure asynchronously on the server, passing in the arguments to be passed to the stored procedure, yielding the results as a :py:class:`Results <queries.tornado_session.Results>` object. You **must** free the results that are returned by this method to unlock the connection used to perform the query. Failure to do so will cause your Tornado application to run out of connections. :param str name: The stored procedure name :param list args: An optional list of procedure arguments :rtype: Results :raises: queries.DataError :raises: queries.DatabaseError :raises: queries.IntegrityError :raises: queries.InternalError :raises: queries.InterfaceError :raises: queries.NotSupportedError :raises: queries.OperationalError :raises: queries.ProgrammingError """ return self._execute('callproc', name, args) def query(self, sql, parameters=None): """Issue a query asynchronously on the server, mogrifying the parameters against the sql statement and yielding the results as a :py:class:`Results <queries.tornado_session.Results>` object. You **must** free the results that are returned by this method to unlock the connection used to perform the query. Failure to do so will cause your Tornado application to run out of connections. 
:param str sql: The SQL statement :param dict parameters: A dictionary of query parameters :rtype: Results :raises: queries.DataError :raises: queries.DatabaseError :raises: queries.IntegrityError :raises: queries.InternalError :raises: queries.InterfaceError :raises: queries.NotSupportedError :raises: queries.OperationalError :raises: queries.ProgrammingError """ return self._execute('execute', sql, parameters) def validate(self): """Validate the session can connect or has open connections to PostgreSQL. As of ``1.10.3`` .. deprecated:: 1.10.3 As of 1.10.3, this method only warns about Deprecation :rtype: bool """ warnings.warn( @gen.coroutine def query(self, sql, parameters=None): """Issue a query asynchronously on the server, mogrifying the parameters against the sql statement and yielding the results as a tuple of row count and result set. :param str sql: The SQL statement :param dict parameters: A dictionary of query parameters :return tuple: int, list :raises: queries.DataError :raises: queries.DatabaseError :raises: queries.IntegrityError future.set_result(connection) # Add the connection to the IOLoop self._ioloop.add_handler(connection.fileno(), self._on_io_events, ioloop.IOLoop.WRITE) except pool.NoIdleConnectionsError: self._create_connection(future) return future def _create_connection(self, future): """Create a new PostgreSQL connection :param tornado.concurrent.Future future: future for new conn result """ LOGGER.debug('Creating a new connection for %s', self.pid) # Create a new PostgreSQL connection kwargs = utils.uri_to_kwargs(self._uri) try: connection = self._psycopg2_connect(kwargs) except (psycopg2.Error, OSError, socket.error) as error: future.set_exception(error) return # Add the connection for use in _poll_connection fd = connection.fileno() self._connections[fd] = connection def on_connected(cf): """Invoked by the IOLoop when the future is complete for the connection :param Future cf: The future for the initial connection """ if 
cf.exception(): self._cleanup_fd(fd, True) future.set_exception(cf.exception()) else: try: # Add the connection to the pool LOGGER.debug('Connection established for %s', self.pid) self._pool_manager.add(self.pid, connection) except (ValueError, pool.PoolException) as err: LOGGER.exception('Failed to add %r to the pool', self.pid) self._cleanup_fd(fd) future.set_exception(err) return self._pool_manager.lock(self.pid, connection, self) # Added in because psycopg2cffi connects and leaves the # connection in a weird state: consts.STATUS_DATESTYLE, # returning from Connection._setup without setting the state # as const.STATUS_OK if utils.PYPY: connection.status = extensions.STATUS_READY # Register the custom data types self._register_unicode(connection) self._register_uuid(connection) # Set the future result future.set_result(connection) # Add a future that fires once connected self._futures[fd] = concurrent.Future() self._ioloop.add_future(self._futures[fd], on_connected) # Add the connection to the IOLoop self._ioloop.add_handler(connection.fileno(), self._on_io_events, ioloop.IOLoop.WRITE) def _execute(self, method, query, parameters=None): """Issue a query asynchronously on the server, mogrifying the parameters against the sql statement and yielding the results as a :py:class:`Results <queries.tornado_session.Results>` object. This function reduces duplicate code for callproc and query by getting the class attribute for the method passed in as the function to call. 
:param str method: The method attribute to use :param str query: The SQL statement or Stored Procedure name :param list|dict parameters: A dictionary of query parameters :rtype: Results :raises: queries.DataError :raises: queries.DatabaseError :raises: queries.IntegrityError :raises: queries.InternalError :raises: queries.InterfaceError :raises: queries.NotSupportedError :raises: queries.OperationalError :raises: queries.ProgrammingError """ future = concurrent.Future() def on_connected(cf): """Invoked by the future returned by self._connect""" if cf.exception(): future.set_exception(cf.exception()) return # Get the psycopg2 connection object and cursor conn = cf.result() cursor = self._get_cursor(conn) def completed(qf): """Invoked by the IOLoop when the future has completed""" if qf.exception(): self._incr_exceptions(conn) err = qf.exception() LOGGER.debug('Cleaning cursor due to exception: %r', err) self._exec_cleanup(cursor, conn.fileno()) future.set_exception(err) else: self._incr_executions(conn) value = Results(cursor, self._exec_cleanup, conn.fileno()) future.set_result(value) # Setup a callback to wait on the query result self._futures[conn.fileno()] = concurrent.Future() # Add the future to the IOLoop self._ioloop.add_future(self._futures[conn.fileno()], completed) # Get the cursor, execute the query func = getattr(cursor, method) try: func(query, parameters) except Exception as error: future.set_exception(error) # Ensure the pool exists for the connection self._ensure_pool_exists() # Grab a connection to PostgreSQL self._ioloop.add_future(self._connect(), on_connected) # Return the future for the query result return future def _exec_cleanup(self, cursor, fd): """Close the cursor, remove any references to the fd in internal state and remove the fd from the ioloop. 
:param psycopg2.extensions.cursor cursor: The cursor to close :param int fd: The connection file descriptor """ LOGGER.debug('Closing cursor and cleaning %s', fd) try: cursor.close() except (psycopg2.Error, psycopg2.Warning) as error: LOGGER.debug('Error closing the cursor: %s', error) self._cleanup_fd(fd) # If the cleanup callback exists, remove it if self._cleanup_callback: self._ioloop.remove_timeout(self._cleanup_callback) # Create a new cleanup callback to clean the pool of idle connections self._cleanup_callback = self._ioloop.add_timeout( self._ioloop.time() + self._pool_idle_ttl + 1, self._pool_manager.clean, self.pid) def _cleanup_fd(self, fd, close=False): """Ensure the socket socket is removed from the IOLoop, the @property def connection(self): """The connection property is not supported in :py:class:`~queries.TornadoSession`. :rtype: None """ return None @property def cursor(self): """The cursor property is not supported in :py:class:`~queries.TornadoSession`. :rtype: None """ return None def _psycopg2_connect(self, kwargs): pass if close: self._connections[fd].close() del self._connections[fd] if fd in self._futures: del self._futures[fd] def _incr_exceptions(self, conn): """Increment the number of exceptions for the current connection. :param psycopg2.extensions.connection conn: the psycopg2 connection """ self._pool_manager.get_connection(self.pid, conn).exceptions += 1 def _incr_executions(self, conn): """Increment the number of executions for the current connection. 
:param psycopg2.extensions.connection conn: the psycopg2 connection """ self._pool_manager.get_connection(self.pid, conn).executions += 1 def _on_io_events(self, fd=None, _events=None): """Invoked by Tornado's IOLoop when there are events for the fd :param int fd: The file descriptor for the event :param int _events: The events raised """ if fd not in self._connections: LOGGER.warning('Received IO event for non-existing connection') return self._poll_connection(fd) def _poll_connection(self, fd): """Check with psycopg2 to see what action to take. If the state is POLL_OK, we should have a pending callback for that fd. :param int fd: The socket fd for the postgresql connection """ try: state = self._connections[fd].poll() except (OSError, socket.error) as error: self._ioloop.remove_handler(fd) if fd in self._futures and not self._futures[fd].done(): self._futures[fd].set_exception( psycopg2.OperationalError('Connection error (%s)' % error) ) except (psycopg2.Error, psycopg2.Warning) as error: if fd in self._futures and not self._futures[fd].done(): self._futures[fd].set_exception(error) else: if state == extensions.POLL_OK: if fd in self._futures and not self._futures[fd].done(): self._futures[fd].set_result(True) elif state == extensions.POLL_WRITE: self._ioloop.update_handler(fd, ioloop.IOLoop.WRITE) elif state == extensions.POLL_READ: self._ioloop.update_handler(fd, ioloop.IOLoop.READ) elif state == extensions.POLL_ERROR: self._ioloop.remove_handler(fd) if fd in self._futures and not self._futures[fd].done(): self._futures[fd].set_exception( psycopg2.Error('Poll Error')) def _psycopg2_connect(self, kwargs): """Return a psycopg2 connection for the specified kwargs. Extend for use in async session adapters. 
:param dict kwargs: Keyword connection args :rtype: psycopg2.extensions.connection """ kwargs['async'] = True return psycopg2.connect(**kwargs) <MSG> Documentation updates <DFF> @@ -14,8 +14,9 @@ Example Use: @gen.coroutine def get(self): - data = yield self.session.query('SELECT * FROM names') - self.finish({'data': data}) + results = yield self.session.query('SELECT * FROM names') + self.finish({'data': results.items()}) + results.free() """ import logging @@ -36,9 +37,62 @@ from queries import PYPY LOGGER = logging.getLogger(__name__) + class Results(results.Results): - """Class that is created for each query that allows for the use of query - results... + """A TornadoSession specific :py:class:`queries.Results` class that adds + the :py:meth:`Results.free <queries.tornado_session.Results.free>` method. + The :py:meth:`Results.free <queries.tornado_session.Results.free>` method + **must** be called to free the connection that the results were generated + on. `Results` objects that are not freed will cause the connections to + remain locked and your application will eventually run out of connections + in the pool. + + The following examples illustrate the various behaviors that the + ::py:class:`queries.Results <queries.tornado_session.Requests>` class + implements: + + **Using Results as an Iterator** + + .. code:: python + + results = yield session.query('SELECT * FROM foo') + for row in results + print row + results.free() + + **Accessing an individual row by index** + + .. code:: python + + results = yield session.query('SELECT * FROM foo') + print results[1] # Access the second row of the results + results.free() + + **Casting single row results as a dict** + + .. code:: python + + results = yield session.query('SELECT * FROM foo LIMIT 1') + print results.as_dict() + results.free() + + **Checking to see if a query was successful** + + .. 
Update generator for Python 3.7 StopIteration is a runtime error in Python 3.7. This change eliminates that while preserving compatibility down to Python 2.7.
6
.py
py
bsd-3-clause
gmr/queries
1258
Key features include: + +- Simplified API +- Support of Python 2.6+ and 3.2+ +- PyPy support via psycopg2ct +- Internal connection pooling +- Asynchronous support for Tornado_ +- Automatic registration of UUIDs, Unicode and Unicode Arrays +- Ability to directly access psycopg2 `connection` and `cursor` objects +- Connection information provided by URI +- Query results delivered as a generator based iterators |Version| |Downloads| |Status| @@ -28,17 +29,34 @@ queries is available via pypi and can be installed with easy_install or pip: pip install queries -Requirements ------------- +Usage +----- +Queries provides both a session based API and a stripped-down simple API for +interacting with PostgreSQL. If you're writing applications that may only have +one or two queries, the simple API may be useful. Instead of creating a session +object when using the simple API methods (`queries.query()` and +`queries.callproc()`), this is done for you. Simply pass in your query and +the URIs_ of the PostgreSQL server to connect to: + +.. code:: python -- psycopg2 (for cpython support) -- psycopg2ct (for PyPy support) + queries.query("SELECT now()", "pgsql://postgres@localhost:5432/postgres") -Examples --------- +Queries built-in connection pooling will re-use connections when possible, +lowering the overhead of connecting and reconnecting. This is also true when +you're using Queries sessions in different parts of your application in the same +Python interpreter. -Executing a query and fetching data, connecting by default to `localhost` as -the current user with a database matching the username: +When specifying a URI, if you omit the username and database name to connect +with, Queries will use the current OS username for both. You can also omit the +URI when connecting to connect to localhost on port 5432 as the current OS user, +connecting to a database named for the current user. 
For example, if your +username is "fred" and you omit the URI when issuing `queries.query` the URI +that is constructed would be `pgsql://fred@localhost:5432/fred`. + +Here are a few examples of using the Queries simple API: + +1. Executing a query and fetching data using the default URI: .. code:: python @@ -52,7 +70,7 @@ the current user with a database matching the username: {'id': 2, 'name': u'Mason'} {'id': 3, 'name': u'Ethan'} -Calling a stored procedure, returning the iterator results as a list: +2. Calling a stored procedure, returning the iterator results as a list: .. code:: python @@ -63,7 +81,17 @@ Calling a stored procedure, returning the iterator results as a list: [{'now': datetime.datetime(2014, 4, 27, 15, 7, 18, 832480, tzinfo=psycopg2.tz.FixedOffsetTimezone(offset=-240, name=None))} -Using the Session object as a context manager: +If your application is going to be performing multiple operations, you should use +the `queries.Session` class. It can act as a context manager, meaning you can +use it with the `with` keyword and it will take care of cleaning up after itself. + +In addition to both the `Session.query()` and `Session.callproc()` methods that +are similar to the simple API methods, the `queries.Session` class provides +access to the psycopg2 connection and cursor objects. It also provides methods +for managing transactions and to the LISTEN/NOTIFY functionality provided by +PostgreSQL. For full documentation around the Session class, see the +documentation_. The following example shows how a `queries.Session` object can +be used as a context manager. .. code:: python @@ -78,7 +106,13 @@ Using the Session object as a context manager: {'id': 2, 'name': u'Mason'} {'id': 3, 'name': u'Ethan'} -Using in a Tornado RequestHandler: +In addition to providing a Pythonic, synchronous client API for PostgreSQL, +Queries provides a very similar asynchronous API for use with Tornado_ [*]_. 
+The only major difference API difference between `queries.TornadoSession` and +`queries.Session` is the `TornadoSession.query` and `TornadoSession.callproc` +methods return the entire result set instead of acting as an iterator over +the results. The following is an example of using Queries in a Tornado_ web +application. .. code:: python @@ -104,6 +138,9 @@ Using in a Tornado RequestHandler: application.listen(8888) ioloop.IOLoop.instance().start() +.. [*] The Queries simple API methods are synchronous only and should not be used +in an asynchronous Tornado application. + Inspiration ----------- Queries is inspired by `Kenneth Reitz's <https://github.com/kennethreitz/>`_ awesome @@ -114,9 +151,11 @@ History Queries is a fork and enhancement of pgsql_wrapper_, which can be found in the main GitHub repository of Queries as tags prior to version 1.2.0. +.. _psycopg2: https://pypi.python.org/pypi/psycopg2 +.. _documentation: https://queries.readthedocs.org +.. _URIs: http://www.postgresql.org/docs/9.3/static/libpq-connect.html#LIBPQ-CONNSTRING .. _pgsql_wrapper: https://pypi.python.org/pypi/pgsql_wrapper - -.. _tornado: http://tornadoweb.org +.. _Tornado: http://tornadoweb.org .. |Version| image:: https://badge.fury.io/py/queries.svg? :target: http://badge.fury.io/py/queries
63
Update the README to include more detailed documentation and information
24
.rst
rst
bsd-3-clause
gmr/queries
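The README record above shows `queries.uri("server-name", 5432, "dbname", "user", "pass")` returning `'postgresql://user:pass@server-name:5432/dbname'`. The following is a minimal, self-contained sketch of what such a URI helper might do — an illustrative reimplementation, not the library's actual code; the real `queries.uri()` may handle quoting and defaults differently.

```python
# Hypothetical sketch of a PostgreSQL connection-URI builder modeled on the
# queries.uri() call documented in the README record above. Names and
# defaults here are assumptions for illustration only.

def build_uri(host='localhost', port=5432, dbname='postgres',
              user='postgres', password=None):
    """Return a postgresql:// connection URI assembled from its parts."""
    if password:
        auth = '%s:%s@' % (user, password)
    else:
        auth = '%s@' % user
    return 'postgresql://%s%s:%s/%s' % (auth, host, port, dbname)


print(build_uri('server-name', 5432, 'dbname', 'user', 'pass'))
# -> postgresql://user:pass@server-name:5432/dbname
```

The sketch mirrors the documented example: credentials are folded into the authority section only when a password is supplied.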
1265
<NME> results.py <BEF> """ query or callproc Results """ import logging import psycopg2 LOGGER = logging.getLogger(__name__) class Results(object): """The :py:class:`Results` class contains the results returned from :py:meth:`Session.query <queries.Session.query>` and :py:meth:`Session.callproc <queries.Session.callproc>`. It is able to act as an iterator and provides many different methods for accessing the information about and results from a query. :param psycopg2.extensions.cursor cursor: The cursor for the results """ def __init__(self, cursor): self.cursor = cursor def __getitem__(self, item): """Fetch an individual row from the result set :rtype: mixed :raises: IndexError """ try: self.cursor.scroll(item, 'absolute') except psycopg2.ProgrammingError: raise IndexError('No such row') else: return self.cursor.fetchone() def __iter__(self): """Iterate through the result set :rtype: mixed """ if self.cursor.rowcount: self._rewind() for row in self.cursor: yield row def __len__(self): """Return the number of rows that were returned from the query :rtype: int """ return self.cursor.rowcount def __nonzero__(self): return bool(self.cursor.rowcount) return bool(self.cursor.rowcount) def __bool__(self): return self.__nonzero__() def __repr__(self): return '<queries.%s rows=%s>' % (self.__class__.__name__, len(self)) def as_dict(self): """Return a single row result as a dictionary. If the results contain multiple rows, a :py:class:`ValueError` will be raised. :return: dict :raises: ValueError """ if not self.cursor.rowcount: return {} self._rewind() if self.cursor.rowcount == 1: return dict(self.cursor.fetchone()) else: raise ValueError('More than one row') def count(self): """Return the number of rows that were returned from the query :rtype: int """ return self.cursor.rowcount def free(self): """Used in asynchronous sessions for freeing results and their locked connections. 
""" LOGGER.debug('Invoking synchronous free has no effect') def items(self): """Return all of the rows that are in the result set. :rtype: list """ if not self.cursor.rowcount: return [] self.cursor.scroll(0, 'absolute') return self.cursor.fetchall() @property def rownumber(self): """Return the current offset of the result set :rtype: int """ return self.cursor.rownumber @property def query(self): """Return a read-only value of the query that was submitted to PostgreSQL. :rtype: str """ return self.cursor.query @property def status(self): """Return the status message returned by PostgreSQL after the query was executed. :rtype: str """ return self.cursor.statusmessage def _rewind(self): """Rewind the cursor to the first row""" self.cursor.scroll(0, 'absolute') <MSG> Dont error on negative rowcount in __repr__ <DFF> @@ -51,7 +51,7 @@ class Results(object): :rtype: int """ - return self.cursor.rowcount + return self.cursor.rowcount if self.cursor.rowcount >= 0 else 0 def __nonzero__(self): return bool(self.cursor.rowcount)
1
Dont error on negative rowcount in __repr__
1
.py
py
bsd-3-clause
gmr/queries
1267
<NME> tornado_basic.rst <BEF> Basic TornadoSession Usage ========================== The following example implements a very basic RESTful API. The following DDL will create the table used by the API: .. code:: sql CREATE TABLE widgets (sku varchar(10) NOT NULL PRIMARY KEY, name text NOT NULL, qty integer NOT NULL); The Tornado application provides two endpoints: /widget(/sku-value) and /widgets. SKUs are set to be a 10 character value with the regex of ``[a-z0-9]{10}``. To add a widget, call PUT on /widget, to update a widget call POST on /widget/[SKU]. .. code:: python from tornado import gen, ioloop, web import queries class WidgetRequestHandler(web.RequestHandler): """Handle the CRUD methods for a widget""" def initialize(self): """Setup a queries.TornadoSession object to use when the RequestHandler is first initialized. """ self.session = queries.TornadoSession() def options(self, *args, **kwargs): """Let the caller know what methods are supported :param list args: URI path arguments passed in by Tornado :param list args: URI path keyword arguments passed in by Tornado """ self.set_header('Allow', ', '.join(['DELETE', 'GET', 'POST', 'PUT'])) self.set_status(204) # Successful request, but no data returned self.finish() @gen.coroutine def delete(self, *args, **kwargs): """Delete a widget from the database :param list args: URI path arguments passed in by Tornado :param list args: URI path keyword arguments passed in by Tornado """ # We need a SKU, if it wasn't passed in the URL, return an error if 'sku' not in kwargs: self.set_status(403) self.finish({'error': 'missing required value: sku'}) # Delete the widget from the database by SKU else: results = yield self.session.query("DELETE FROM widgets WHERE sku=%(sku)s", {'sku': kwargs['sku']}) if not results: self.set_status(404) self.finish({'error': 'SKU not found in system'}) else: self.set_status(204) # Success, but no data returned self.finish() # Free the results and release the connection lock from 
session.query results.free() """Fetch a widget from the database :param list args: URI path arguments passed in by Tornado :param list args: URI path keyword arguments passed in by Tornado """ # We need a SKU, if it wasn't passed in the URL, return an error """ # We need a SKU, if it wasn't passed in the URL, return an error if 'sku' not in kwargs: self.set_status(403) self.finish({'error': 'missing required value: sku'}) # Fetch a row from the database for the SKU else: results = yield self.session.query("SELECT * FROM widgets WHERE sku=%(sku)s", {'sku': kwargs['sku']}) # No rows returned, send a 404 with a JSON error payload if not results: self.set_status(404) self.finish({'error': 'SKU not found in system'}) # Send back the row as a JSON object else: self.finish(results.as_dict()) """Update a widget in the database :param list args: URI path arguments passed in by Tornado :param list args: URI path keyword arguments passed in by Tornado """ # We need a SKU, if it wasn't passed in the URL, return an error :param list args: URI path arguments passed in by Tornado :param list args: URI path keyword arguments passed in by Tornado """ # We need a SKU, if it wasn't passed in the URL, return an error if 'sku' not in kwargs: self.set_status(403) self.finish({'error': 'missing required value: sku'}) # Update the widget in the database by SKU else: sql = "UPDATE widgets SET name=%(name)s, qty=%(qty)s WHERE sku=%(sku)s" try: results = yield self.session.query(sql, {'sku': kwargs['sku'], 'name': self.get_argument('name'), 'qty': self.get_argument('qty')}) # Free the results and release the connection lock from session.query results.free() # DataError is raised when there's a problem with the data passed in except queries.DataError as error: self.set_status(409) self.finish({'error': {'error': error.pgerror.split('\n')[0][8:]}}) else: # No rows means there was no record updated if not results: self.set_status(404) """Add a widget to the database :param list args: URI path 
arguments passed in by Tornado :param list args: URI path keyword arguments passed in by Tornado """ try: @gen.coroutine def put(self, *args, **kwargs): """Add a widget to the database :param list args: URI path arguments passed in by Tornado :param list args: URI path keyword arguments passed in by Tornado """ try: results = yield self.session.query("INSERT INTO widgets VALUES (%s, %s, %s)", [self.get_argument('sku'), self.get_argument('name'), self.get_argument('qty')]) # Free the results and release the connection lock from session.query results.free() except (queries.DataError, queries.IntegrityError) as error: self.set_status(409) self.finish({'error': {'error': error.pgerror.split('\n')[0][8:]}}) else: self.set_status(201) self.finish() """Let the caller know what methods are supported :param list args: URI path arguments passed in by Tornado :param list args: URI path keyword arguments passed in by Tornado """ self.set_header('Allow', ', '.join(['GET'])) """ self.session = queries.TornadoSession() def options(self, *args, **kwargs): """Get a list of all the widgets from the database :param list args: URI path arguments passed in by Tornado :param list args: URI path keyword arguments passed in by Tornado """ rows, data = yield self.session.query("SELECT * FROM widgets ORDER BY sku") self.set_status(204) self.finish() @gen.coroutine def get(self, *args, **kwargs): """Get a list of all the widgets from the database :param list args: URI path arguments passed in by Tornado :param list args: URI path keyword arguments passed in by Tornado """ results = yield self.session.query('SELECT * FROM widgets ORDER BY sku') # Tornado doesn't allow you to return a list as a JSON result by default self.finish({'widgets': results.items()}) # Free the results and release the connection lock from session.query results.free() if __name__ == "__main__": application = web.Application([ (r"/widget", WidgetRequestHandler), (r"/widget/(?P<sku>[a-zA-Z0-9]{10})", 
WidgetRequestHandler), (r"/widgets", WidgetsRequestHandler) ]).listen(8888) ioloop.IOLoop.instance().start() <MSG> Update docstrings [ci skip] <DFF> @@ -33,7 +33,7 @@ add a widget, call PUT on /widget, to update a widget call POST on /widget/[SKU] """Let the caller know what methods are supported :param list args: URI path arguments passed in by Tornado - :param list args: URI path keyword arguments passed in by Tornado + :param dict kwargs: URI path keyword arguments passed in by Tornado """ self.set_header('Allow', ', '.join(['DELETE', 'GET', 'POST', 'PUT'])) @@ -45,7 +45,7 @@ add a widget, call PUT on /widget, to update a widget call POST on /widget/[SKU] """Delete a widget from the database :param list args: URI path arguments passed in by Tornado - :param list args: URI path keyword arguments passed in by Tornado + :param dict kwargs: URI path keyword arguments passed in by Tornado """ # We need a SKU, if it wasn't passed in the URL, return an error @@ -69,7 +69,7 @@ add a widget, call PUT on /widget, to update a widget call POST on /widget/[SKU] """Fetch a widget from the database :param list args: URI path arguments passed in by Tornado - :param list args: URI path keyword arguments passed in by Tornado + :param dict kwargs: URI path keyword arguments passed in by Tornado """ # We need a SKU, if it wasn't passed in the URL, return an error @@ -96,7 +96,7 @@ add a widget, call PUT on /widget, to update a widget call POST on /widget/[SKU] """Update a widget in the database :param list args: URI path arguments passed in by Tornado - :param list args: URI path keyword arguments passed in by Tornado + :param dict kwargs: URI path keyword arguments passed in by Tornado """ # We need a SKU, if it wasn't passed in the URL, return an error @@ -135,7 +135,7 @@ add a widget, call PUT on /widget, to update a widget call POST on /widget/[SKU] """Add a widget to the database :param list args: URI path arguments passed in by Tornado - :param list args: URI path keyword 
arguments passed in by Tornado + :param dict kwargs: URI path keyword arguments passed in by Tornado """ try: @@ -166,7 +166,7 @@ add a widget, call PUT on /widget, to update a widget call POST on /widget/[SKU] """Let the caller know what methods are supported :param list args: URI path arguments passed in by Tornado - :param list args: URI path keyword arguments passed in by Tornado + :param dict kwargs: URI path keyword arguments passed in by Tornado """ self.set_header('Allow', ', '.join(['GET'])) @@ -178,7 +178,7 @@ add a widget, call PUT on /widget, to update a widget call POST on /widget/[SKU] """Get a list of all the widgets from the database :param list args: URI path arguments passed in by Tornado - :param list args: URI path keyword arguments passed in by Tornado + :param dict kwargs: URI path keyword arguments passed in by Tornado """ rows, data = yield self.session.query("SELECT * FROM widgets ORDER BY sku")
7
Update docstrings
7
.rst
rst
bsd-3-clause
gmr/queries
1268
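Each record in this dump uses the same four inline field markers — `<NME>` for the filename, `<BEF>` for the pre-commit file content, `<MSG>` for the commit message, and `<DFF>` for the diff — followed by the numeric metadata columns. A minimal sketch of splitting one row back into those fields; the `parse_row` helper and its field names are my own, and it assumes each marker occurs exactly once, in this order:

```python
import re

# Hypothetical helper: split one dump row on its four field markers.
# Assumes the markers appear exactly once, in NME -> BEF -> MSG -> DFF order.
def parse_row(text):
    match = re.match(
        r"<NME>\s*(.*?)\s*<BEF>\s*(.*?)\s*<MSG>\s*(.*?)\s*<DFF>\s*(.*)",
        text, re.S)
    if match is None:
        return None
    return dict(zip(("filename", "before", "message", "diff"), match.groups()))

row = parse_row("<NME> tornado_basic.rst <BEF> Basic TornadoSession Usage "
                "<MSG> Update docstrings <DFF> @@ -33,7 +33,7 @@")
# row["filename"] == "tornado_basic.rst"; row["message"] == "Update docstrings"
```

A row whose `<BEF>` content itself contains a literal marker string would need a stricter parser; this sketch only covers the common case.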
<NME> tornado_basic.rst <BEF> Basic TornadoSession Usage ========================== The following example implements a very basic RESTful API. The following DDL will create the table used by the API: .. code:: sql CREATE TABLE widgets (sku varchar(10) NOT NULL PRIMARY KEY, name text NOT NULL, qty integer NOT NULL); The Tornado application provides two endpoints: /widget(/sku-value) and /widgets. SKUs are set to be a 10 character value with the regex of ``[a-z0-9]{10}``. To add a widget, call PUT on /widget, to update a widget call POST on /widget/[SKU]. .. code:: python from tornado import gen, ioloop, web import queries class WidgetRequestHandler(web.RequestHandler): """Handle the CRUD methods for a widget""" def initialize(self): """Setup a queries.TornadoSession object to use when the RequestHandler is first initialized. """ self.session = queries.TornadoSession() def options(self, *args, **kwargs): """Let the caller know what methods are supported :param list args: URI path arguments passed in by Tornado :param list args: URI path keyword arguments passed in by Tornado """ self.set_header('Allow', ', '.join(['DELETE', 'GET', 'POST', 'PUT'])) self.set_status(204) # Successful request, but no data returned self.finish() @gen.coroutine def delete(self, *args, **kwargs): """Delete a widget from the database :param list args: URI path arguments passed in by Tornado :param list args: URI path keyword arguments passed in by Tornado """ # We need a SKU, if it wasn't passed in the URL, return an error if 'sku' not in kwargs: self.set_status(403) self.finish({'error': 'missing required value: sku'}) # Delete the widget from the database by SKU else: results = yield self.session.query("DELETE FROM widgets WHERE sku=%(sku)s", {'sku': kwargs['sku']}) if not results: self.set_status(404) self.finish({'error': 'SKU not found in system'}) else: self.set_status(204) # Success, but no data returned self.finish() # Free the results and release the connection lock from 
session.query
                results.free()

    @gen.coroutine
    def get(self, *args, **kwargs):
        """Fetch a widget from the database

        :param list args: URI path arguments passed in by Tornado
        :param list args: URI path keyword arguments passed in by Tornado

        """
        # We need a SKU, if it wasn't passed in the URL, return an error
        if 'sku' not in kwargs:
            self.set_status(403)
            self.finish({'error': 'missing required value: sku'})

        # Fetch a row from the database for the SKU
        else:
            results = yield self.session.query("SELECT * FROM widgets WHERE sku=%(sku)s",
                                               {'sku': kwargs['sku']})

            # No rows returned, send a 404 with a JSON error payload
            if not results:
                self.set_status(404)
                self.finish({'error': 'SKU not found in system'})

            # Send back the row as a JSON object
            else:
                self.finish(results.as_dict())

            # Free the results and release the connection lock from session.query
            results.free()

    @gen.coroutine
    def post(self, *args, **kwargs):
        """Update a widget in the database

        :param list args: URI path arguments passed in by Tornado
        :param list args: URI path keyword arguments passed in by Tornado

        """
        # We need a SKU, if it wasn't passed in the URL, return an error
        if 'sku' not in kwargs:
            self.set_status(403)
            self.finish({'error': 'missing required value: sku'})

        # Update the widget in the database by SKU
        else:
            sql = "UPDATE widgets SET name=%(name)s, qty=%(qty)s WHERE sku=%(sku)s"
            try:
                results = yield self.session.query(sql,
                                                   {'sku': kwargs['sku'],
                                                    'name': self.get_argument('name'),
                                                    'qty': self.get_argument('qty')})

                # Free the results and release the connection lock from session.query
                results.free()

            # DataError is raised when there's a problem with the data passed in
            except queries.DataError as error:
                self.set_status(409)
                self.finish({'error': {'error': error.pgerror.split('\n')[0][8:]}})
            else:
                # No rows means there was no record updated
                if not results:
                    self.set_status(404)
                    self.finish({'error': 'SKU not found in system'})
                else:
                    self.set_status(204)
                    self.finish()

    @gen.coroutine
    def put(self, *args, **kwargs):
        """Add a widget to the database

        :param list args: URI path arguments passed in by Tornado
        :param list args: URI path keyword arguments passed in by Tornado

        """
        try:
            results = yield self.session.query("INSERT INTO widgets VALUES (%s, %s, %s)",
                                               [self.get_argument('sku'),
                                                self.get_argument('name'),
                                                self.get_argument('qty')])

            # Free the results and release the connection lock from session.query
            results.free()

        except (queries.DataError, queries.IntegrityError) as error:
            self.set_status(409)
            self.finish({'error': {'error': error.pgerror.split('\n')[0][8:]}})
        else:
            self.set_status(201)
            self.finish()


class WidgetsRequestHandler(web.RequestHandler):

    def initialize(self):
        """Setup a queries.TornadoSession object to use when the
        RequestHandler is first initialized.

        """
        self.session = queries.TornadoSession()

    def options(self, *args, **kwargs):
        """Let the caller know what methods are supported

        :param list args: URI path arguments passed in by Tornado
        :param list args: URI path keyword arguments passed in by Tornado

        """
        self.set_header('Allow', ', '.join(['GET']))
        self.set_status(204)
        self.finish()

    @gen.coroutine
    def get(self, *args, **kwargs):
        """Get a list of all the widgets from the database

        :param list args: URI path arguments passed in by Tornado
        :param list args: URI path keyword arguments passed in by Tornado

        """
        results = yield self.session.query('SELECT * FROM widgets ORDER BY sku')

        # Tornado doesn't allow you to return a list as a JSON result by default
        self.finish({'widgets': results.items()})

        # Free the results and release the connection lock from session.query
        results.free()


if __name__ == "__main__":
    application = web.Application([
        (r"/widget", WidgetRequestHandler),
        (r"/widget/(?P<sku>[a-zA-Z0-9]{10})",
WidgetRequestHandler), (r"/widgets", WidgetsRequestHandler) ]).listen(8888) ioloop.IOLoop.instance().start() <MSG> Update docstrings [ci skip] <DFF> @@ -33,7 +33,7 @@ add a widget, call PUT on /widget, to update a widget call POST on /widget/[SKU] """Let the caller know what methods are supported :param list args: URI path arguments passed in by Tornado - :param list args: URI path keyword arguments passed in by Tornado + :param dict kwargs: URI path keyword arguments passed in by Tornado """ self.set_header('Allow', ', '.join(['DELETE', 'GET', 'POST', 'PUT'])) @@ -45,7 +45,7 @@ add a widget, call PUT on /widget, to update a widget call POST on /widget/[SKU] """Delete a widget from the database :param list args: URI path arguments passed in by Tornado - :param list args: URI path keyword arguments passed in by Tornado + :param dict kwargs: URI path keyword arguments passed in by Tornado """ # We need a SKU, if it wasn't passed in the URL, return an error @@ -69,7 +69,7 @@ add a widget, call PUT on /widget, to update a widget call POST on /widget/[SKU] """Fetch a widget from the database :param list args: URI path arguments passed in by Tornado - :param list args: URI path keyword arguments passed in by Tornado + :param dict kwargs: URI path keyword arguments passed in by Tornado """ # We need a SKU, if it wasn't passed in the URL, return an error @@ -96,7 +96,7 @@ add a widget, call PUT on /widget, to update a widget call POST on /widget/[SKU] """Update a widget in the database :param list args: URI path arguments passed in by Tornado - :param list args: URI path keyword arguments passed in by Tornado + :param dict kwargs: URI path keyword arguments passed in by Tornado """ # We need a SKU, if it wasn't passed in the URL, return an error @@ -135,7 +135,7 @@ add a widget, call PUT on /widget, to update a widget call POST on /widget/[SKU] """Add a widget to the database :param list args: URI path arguments passed in by Tornado - :param list args: URI path keyword 
arguments passed in by Tornado + :param dict kwargs: URI path keyword arguments passed in by Tornado """ try: @@ -166,7 +166,7 @@ add a widget, call PUT on /widget, to update a widget call POST on /widget/[SKU] """Let the caller know what methods are supported :param list args: URI path arguments passed in by Tornado - :param list args: URI path keyword arguments passed in by Tornado + :param dict kwargs: URI path keyword arguments passed in by Tornado """ self.set_header('Allow', ', '.join(['GET'])) @@ -178,7 +178,7 @@ add a widget, call PUT on /widget, to update a widget call POST on /widget/[SKU] """Get a list of all the widgets from the database :param list args: URI path arguments passed in by Tornado - :param list args: URI path keyword arguments passed in by Tornado + :param dict kwargs: URI path keyword arguments passed in by Tornado """ rows, data = yield self.session.query("SELECT * FROM widgets ORDER BY sku")
7
Update docstrings
7
.rst
rst
bsd-3-clause
gmr/queries
1269
<NME> tornado_session.py <BEF> """ Tornado Session Adapter Use Queries asynchronously within the Tornado framework. Example Use: .. code:: python class NameListHandler(web.RequestHandler): def initialize(self): self.session = queries.TornadoSession(pool_max_size=60) @gen.coroutine def get(self): data = yield self.session.query('SELECT * FROM names') if data: self.finish({'names': data.items()}) data.free() else: self.set_status(500, 'Error querying the data') """ import logging import socket import warnings from tornado import concurrent, ioloop from psycopg2 import extras, extensions import psycopg2 from queries import pool, results, session, utils LOGGER = logging.getLogger(__name__) DEFAULT_MAX_POOL_SIZE = 25 class Results(results.Results): """A TornadoSession specific :py:class:`queries.Results` class that adds the :py:meth:`Results.free <queries.tornado_session.Results.free>` method. The :py:meth:`Results.free <queries.tornado_session.Results.free>` method **must** be called to free the connection that the results were generated on. `Results` objects that are not freed will cause the connections to remain locked and your application will eventually run out of connections in the pool. The following examples illustrate the various behaviors that the ::py:class:`queries.Results <queries.tornado_session.Requests>` class implements: **Using Results as an Iterator** .. code:: python results = yield session.query('SELECT * FROM foo') for row in results print row results.free() **Accessing an individual row by index** .. code:: python results = yield session.query('SELECT * FROM foo') print results[1] # Access the second row of the results results.free() **Casting single row results as a dict** .. code:: python results = yield session.query('SELECT * FROM foo LIMIT 1') print results.as_dict() results.free() **Checking to see if a query was successful** .. 
code:: python

        sql = "UPDATE foo SET bar='baz' WHERE qux='corgie'"
        results = yield session.query(sql)
        if results:
            print 'Success'
        results.free()

    **Checking the number of rows by using len(Results)**

    .. code:: python

        results = yield session.query('SELECT * FROM foo')
        print '%i rows' % len(results)
        results.free()

    """
    def __init__(self, cursor, cleanup, fd):
        self.cursor = cursor
        self._cleanup = cleanup
        self._fd = fd
        self._freed = False

    def free(self):
        """Release the results and connection lock from the TornadoSession
        object. This **must** be called after you finish processing the results
        from :py:meth:`TornadoSession.query <queries.TornadoSession.query>` or
        :py:meth:`TornadoSession.callproc <queries.TornadoSession.callproc>`
        or the connection will not be able to be reused by other asynchronous
        requests.

        """
        self._freed = True
        self._cleanup(self.cursor, self._fd)

    def __del__(self):
        if not self._freed:
            LOGGER.warning('Auto-freeing result on deletion')
            self.free()


class TornadoSession(session.Session):
    """Session class for Tornado asynchronous applications. Uses
    :py:func:`tornado.gen.coroutine` to wrap API methods for use in Tornado.
:param str uri: PostgreSQL connection URI :param psycopg2.extensions.cursor: The cursor type to use :param int pool_idle_ttl: How long idle pools keep connections open :param int pool_max_size: The maximum size of the pool to use :param tornado.ioloop.IOLoop io_loop: IOLoop instance to use """ self._connections = dict() self._cleanup_callback = None self._cursor_factory = cursor_factory self._futures = dict() self._ioloop = io_loop or ioloop.IOLoop.current() self._pool_manager = pool.PoolManager.instance() self._pool_max_size = pool_max_size self._pool_idle_ttl = pool_idle_ttl self._uri = uri self._ensure_pool_exists() def _ensure_pool_exists(self): """Create the pool in the pool manager if it does not exist.""" if self.pid not in self._pool_manager: self._pool_manager.create(self.pid, self._pool_idle_ttl, self._pool_max_size, self._ioloop.time) @property def connection(self): """Do not use this directly with Tornado applications :return: """ return None @property def cursor(self): return None def callproc(self, name, args=None): """Call a stored procedure asynchronously on the server, passing in the arguments to be passed to the stored procedure, yielding the results as a :py:class:`Results <queries.tornado_session.Results>` object. You **must** free the results that are returned by this method to unlock the connection used to perform the query. Failure to do so will cause your Tornado application to run out of connections. 
:param str name: The stored procedure name :param list args: An optional list of procedure arguments :rtype: Results :raises: queries.DataError :raises: queries.DatabaseError :raises: queries.IntegrityError :raises: queries.InternalError :raises: queries.InterfaceError :raises: queries.NotSupportedError :raises: queries.OperationalError :raises: queries.ProgrammingError """ return self._execute('callproc', name, args) def query(self, sql, parameters=None): """Issue a query asynchronously on the server, mogrifying the parameters against the sql statement and yielding the results as a :py:class:`Results <queries.tornado_session.Results>` object. You **must** free the results that are returned by this method to unlock the connection used to perform the query. Failure to do so will cause your Tornado application to run out of connections. :param str sql: The SQL statement :param dict parameters: A dictionary of query parameters :rtype: Results :raises: queries.DataError :raises: queries.DatabaseError :raises: queries.IntegrityError :raises: queries.InternalError :raises: queries.InterfaceError :raises: queries.NotSupportedError :raises: queries.OperationalError :raises: queries.ProgrammingError """ return self._execute('execute', sql, parameters) def validate(self): """Validate the session can connect or has open connections to PostgreSQL. As of ``1.10.3`` .. deprecated:: 1.10.3 As of 1.10.3, this method only warns about Deprecation :rtype: bool """ warnings.warn( 'All functionality removed from this method', DeprecationWarning) def _connect(self): """Connect to PostgreSQL, either by reusing a connection from the pool if possible, or by creating the new connection. 
:rtype: psycopg2.extensions.connection :raises: pool.NoIdleConnectionsError """ future = concurrent.Future() # Attempt to get a cached connection from the connection pool try: connection = self._pool_manager.get(self.pid, self) self._connections[connection.fileno()] = connection future.set_result(connection) # Add the connection to the IOLoop self._ioloop.add_handler(connection.fileno(), self._on_io_events, ioloop.IOLoop.WRITE) except pool.NoIdleConnectionsError: self._create_connection(future) return future def _create_connection(self, future): """Create a new PostgreSQL connection :param tornado.concurrent.Future future: future for new conn result """ LOGGER.debug('Creating a new connection for %s', self.pid) # Create a new PostgreSQL connection kwargs = utils.uri_to_kwargs(self._uri) try: connection = self._psycopg2_connect(kwargs) except (psycopg2.Error, OSError, socket.error) as error: future.set_exception(error) return # Add the connection for use in _poll_connection fd = connection.fileno() self._connections[fd] = connection def on_connected(cf): """Invoked by the IOLoop when the future is complete for the connection :param Future cf: The future for the initial connection """ if cf.exception(): self._cleanup_fd(fd, True) future.set_exception(cf.exception()) else: try: # Add the connection to the pool LOGGER.debug('Connection established for %s', self.pid) self._pool_manager.add(self.pid, connection) except (ValueError, pool.PoolException) as err: LOGGER.exception('Failed to add %r to the pool', self.pid) self._cleanup_fd(fd) future.set_exception(err) return self._pool_manager.lock(self.pid, connection, self) # Added in because psycopg2cffi connects and leaves the # connection in a weird state: consts.STATUS_DATESTYLE, # returning from Connection._setup without setting the state # as const.STATUS_OK if utils.PYPY: connection.status = extensions.STATUS_READY # Register the custom data types self._register_unicode(connection) self._register_uuid(connection) # 
Set the future result future.set_result(connection) # Add a future that fires once connected self._futures[fd] = concurrent.Future() self._ioloop.add_future(self._futures[fd], on_connected) # Add the connection to the IOLoop self._ioloop.add_handler(connection.fileno(), self._on_io_events, ioloop.IOLoop.WRITE) def _execute(self, method, query, parameters=None): """Issue a query asynchronously on the server, mogrifying the parameters against the sql statement and yielding the results as a :py:class:`Results <queries.tornado_session.Results>` object. This function reduces duplicate code for callproc and query by getting the class attribute for the method passed in as the function to call. :param str method: The method attribute to use :param str query: The SQL statement or Stored Procedure name :param list|dict parameters: A dictionary of query parameters :rtype: Results :raises: queries.DataError :raises: queries.DatabaseError :raises: queries.IntegrityError :raises: queries.InternalError :raises: queries.InterfaceError :raises: queries.NotSupportedError :raises: queries.OperationalError :raises: queries.ProgrammingError """ future = concurrent.Future() def on_connected(cf): """Invoked by the future returned by self._connect""" if cf.exception(): future.set_exception(cf.exception()) return # Get the psycopg2 connection object and cursor conn = cf.result() cursor = self._get_cursor(conn) def completed(qf): """Invoked by the IOLoop when the future has completed""" if qf.exception(): self._incr_exceptions(conn) err = qf.exception() LOGGER.debug('Cleaning cursor due to exception: %r', err) self._exec_cleanup(cursor, conn.fileno()) future.set_exception(err) else: self._incr_executions(conn) value = Results(cursor, self._exec_cleanup, conn.fileno()) future.set_result(value) # Setup a callback to wait on the query result self._futures[conn.fileno()] = concurrent.Future() # Add the future to the IOLoop self._ioloop.add_future(self._futures[conn.fileno()], completed) # Get 
the cursor, execute the query func = getattr(cursor, method) try: func(query, parameters) except Exception as error: future.set_exception(error) # Ensure the pool exists for the connection self._ensure_pool_exists() # Grab a connection to PostgreSQL self._ioloop.add_future(self._connect(), on_connected) # Return the future for the query result return future def _exec_cleanup(self, cursor, fd): """Close the cursor, remove any references to the fd in internal state and remove the fd from the ioloop. :param psycopg2.extensions.cursor cursor: The cursor to close :param int fd: The connection file descriptor """ LOGGER.debug('Closing cursor and cleaning %s', fd) try: cursor.close() except (psycopg2.Error, psycopg2.Warning) as error: LOGGER.debug('Error closing the cursor: %s', error) self._cleanup_fd(fd) # If the cleanup callback exists, remove it if self._cleanup_callback: self._ioloop.remove_timeout(self._cleanup_callback) # Create a new cleanup callback to clean the pool of idle connections self._cleanup_callback = self._ioloop.add_timeout( self._ioloop.time() + self._pool_idle_ttl + 1, self._pool_manager.clean, self.pid) def _cleanup_fd(self, fd, close=False): """Ensure the socket socket is removed from the IOLoop, the connection stack, and futures stack. :param int fd: The fd # to cleanup """ self._ioloop.remove_handler(fd) if fd in self._connections: try: self._pool_manager.free(self.pid, self._connections[fd]) except pool.ConnectionNotFoundError: pass if close: self._connections[fd].close() del self._connections[fd] if fd in self._futures: del self._futures[fd] def _incr_exceptions(self, conn): """Increment the number of exceptions for the current connection. :param psycopg2.extensions.connection conn: the psycopg2 connection """ self._pool_manager.get_connection(self.pid, conn).exceptions += 1 def _incr_executions(self, conn): """Increment the number of executions for the current connection. 
:param psycopg2.extensions.connection conn: the psycopg2 connection """ self._pool_manager.get_connection(self.pid, conn).executions += 1 def _on_io_events(self, fd=None, _events=None): """Invoked by Tornado's IOLoop when there are events for the fd :param int fd: The file descriptor for the event :param int _events: The events raised """ if fd not in self._connections: LOGGER.warning('Received IO event for non-existing connection') return self._poll_connection(fd) def _poll_connection(self, fd): """Check with psycopg2 to see what action to take. If the state is POLL_OK, we should have a pending callback for that fd. :param int fd: The socket fd for the postgresql connection """ try: state = self._connections[fd].poll() except (OSError, socket.error) as error: self._ioloop.remove_handler(fd) if fd in self._futures and not self._futures[fd].done(): self._futures[fd].set_exception( psycopg2.OperationalError('Connection error (%s)' % error) ) except (psycopg2.Error, psycopg2.Warning) as error: if fd in self._futures and not self._futures[fd].done(): self._futures[fd].set_exception(error) else: if state == extensions.POLL_OK: if fd in self._futures and not self._futures[fd].done(): self._futures[fd].set_result(True) elif state == extensions.POLL_WRITE: self._ioloop.update_handler(fd, ioloop.IOLoop.WRITE) elif state == extensions.POLL_READ: self._ioloop.update_handler(fd, ioloop.IOLoop.READ) elif state == extensions.POLL_ERROR: self._ioloop.remove_handler(fd) if fd in self._futures and not self._futures[fd].done(): self._futures[fd].set_exception( psycopg2.Error('Poll Error')) def _psycopg2_connect(self, kwargs): """Return a psycopg2 connection for the specified kwargs. Extend for use in async session adapters. 
:param dict kwargs: Keyword connection args :rtype: psycopg2.extensions.connection """ kwargs['async'] = True return psycopg2.connect(**kwargs) <MSG> Merge pull request #8 from orenitamar/patch-1 Update tornado_session.py <DFF> @@ -118,7 +118,7 @@ class Results(results.Results): requests. """ - yield self._cleanup(self.cursor, self._fd) + self._cleanup(self.cursor, self._fd) class TornadoSession(session.Session):
1
Merge pull request #8 from orenitamar/patch-1
1
.py
py
bsd-3-clause
gmr/queries
1270
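The `Results.free` contract in the record above — the result object holds the connection lock until `free()` is called, and `__del__` auto-frees with a warning — can be mimicked with plain-Python stand-ins. This is a simplified sketch of the lifecycle only; `Pool` and `Results` here are toy classes of my own, not the real `queries` implementations:

```python
import logging

LOGGER = logging.getLogger(__name__)


class Pool(object):
    """Toy stand-in for the connection pool: tracks which fds are locked."""

    def __init__(self):
        self.locked = set()

    def lock(self, fd):
        self.locked.add(fd)

    def free(self, fd):
        self.locked.discard(fd)


class Results(object):
    """Holds the connection lock until free() is called, mirroring the
    behavior documented for queries.tornado_session.Results."""

    def __init__(self, fd, pool):
        self._fd = fd
        self._pool = pool
        self._freed = False
        pool.lock(fd)

    def free(self):
        self._freed = True
        self._pool.free(self._fd)

    def __del__(self):
        if not self._freed:
            LOGGER.warning('Auto-freeing result on deletion')
            self.free()


pool = Pool()
results = Results(42, pool)
assert 42 in pool.locked      # lock held while the results are in use
results.free()
assert 42 not in pool.locked  # lock released for other queries
```

Forgetting to call `free()` here only triggers the `__del__` fallback; in the real class it leaves a pooled connection locked until garbage collection, which is why the docstrings insist on freeing every result.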
<NME> tornado_session.py <BEF> """ Tornado Session Adapter Use Queries asynchronously within the Tornado framework. Example Use: .. code:: python class NameListHandler(web.RequestHandler): def initialize(self): self.session = queries.TornadoSession(pool_max_size=60) @gen.coroutine def get(self): data = yield self.session.query('SELECT * FROM names') if data: self.finish({'names': data.items()}) data.free() else: self.set_status(500, 'Error querying the data') """ import logging import socket import warnings from tornado import concurrent, ioloop from psycopg2 import extras, extensions import psycopg2 from queries import pool, results, session, utils LOGGER = logging.getLogger(__name__) DEFAULT_MAX_POOL_SIZE = 25 class Results(results.Results): """A TornadoSession specific :py:class:`queries.Results` class that adds the :py:meth:`Results.free <queries.tornado_session.Results.free>` method. The :py:meth:`Results.free <queries.tornado_session.Results.free>` method **must** be called to free the connection that the results were generated on. `Results` objects that are not freed will cause the connections to remain locked and your application will eventually run out of connections in the pool. The following examples illustrate the various behaviors that the ::py:class:`queries.Results <queries.tornado_session.Requests>` class implements: **Using Results as an Iterator** .. code:: python results = yield session.query('SELECT * FROM foo') for row in results print row results.free() **Accessing an individual row by index** .. code:: python results = yield session.query('SELECT * FROM foo') print results[1] # Access the second row of the results results.free() **Casting single row results as a dict** .. code:: python results = yield session.query('SELECT * FROM foo LIMIT 1') print results.as_dict() results.free() **Checking to see if a query was successful** .. 
code:: python

        sql = "UPDATE foo SET bar='baz' WHERE qux='corgie'"
        results = yield session.query(sql)
        if results:
            print 'Success'
        results.free()

    **Checking the number of rows by using len(Results)**

    .. code:: python

        results = yield session.query('SELECT * FROM foo')
        print '%i rows' % len(results)
        results.free()

    """
    def __init__(self, cursor, cleanup, fd):
        self.cursor = cursor
        self._cleanup = cleanup
        self._fd = fd
        self._freed = False

    def free(self):
        """Release the results and connection lock from the TornadoSession
        object. This **must** be called after you finish processing the results
        from :py:meth:`TornadoSession.query <queries.TornadoSession.query>` or
        :py:meth:`TornadoSession.callproc <queries.TornadoSession.callproc>`
        or the connection will not be able to be reused by other asynchronous
        requests.

        """
        self._freed = True
        self._cleanup(self.cursor, self._fd)

    def __del__(self):
        if not self._freed:
            LOGGER.warning('Auto-freeing result on deletion')
            self.free()


class TornadoSession(session.Session):
    """Session class for Tornado asynchronous applications. Uses
    :py:func:`tornado.gen.coroutine` to wrap API methods for use in Tornado.

    Utilizes connection pooling to ensure that multiple concurrent asynchronous
    queries do not block each other. Heavily trafficked services will require
    a higher ``max_pool_size`` to allow for greater connection concurrency.

    :py:meth:`TornadoSession.query <queries.TornadoSession.query>` and
    :py:meth:`TornadoSession.callproc <queries.TornadoSession.callproc>` must
    call :py:meth:`Results.free <queries.tornado_session.Results.free>`

    :param str uri: PostgreSQL connection URI
    :param psycopg2.extensions.cursor: The cursor type to use
    :param int pool_idle_ttl: How long idle pools keep connections open
    :param int pool_max_size: The maximum size of the pool to use

    """
    def __init__(self, uri=session.DEFAULT_URI,
                 cursor_factory=extras.RealDictCursor,
                 pool_idle_ttl=pool.DEFAULT_IDLE_TTL,
                 pool_max_size=DEFAULT_MAX_POOL_SIZE,
                 io_loop=None):
        """Connect to a PostgreSQL server using the module wide connection and
        set the isolation level.
:param str uri: PostgreSQL connection URI :param psycopg2.extensions.cursor: The cursor type to use :param int pool_idle_ttl: How long idle pools keep connections open :param int pool_max_size: The maximum size of the pool to use :param tornado.ioloop.IOLoop io_loop: IOLoop instance to use """ self._connections = dict() self._cleanup_callback = None self._cursor_factory = cursor_factory self._futures = dict() self._ioloop = io_loop or ioloop.IOLoop.current() self._pool_manager = pool.PoolManager.instance() self._pool_max_size = pool_max_size self._pool_idle_ttl = pool_idle_ttl self._uri = uri self._ensure_pool_exists() def _ensure_pool_exists(self): """Create the pool in the pool manager if it does not exist.""" if self.pid not in self._pool_manager: self._pool_manager.create(self.pid, self._pool_idle_ttl, self._pool_max_size, self._ioloop.time) @property def connection(self): """Do not use this directly with Tornado applications :return: """ return None @property def cursor(self): return None def callproc(self, name, args=None): """Call a stored procedure asynchronously on the server, passing in the arguments to be passed to the stored procedure, yielding the results as a :py:class:`Results <queries.tornado_session.Results>` object. You **must** free the results that are returned by this method to unlock the connection used to perform the query. Failure to do so will cause your Tornado application to run out of connections. 
:param str name: The stored procedure name :param list args: An optional list of procedure arguments :rtype: Results :raises: queries.DataError :raises: queries.DatabaseError :raises: queries.IntegrityError :raises: queries.InternalError :raises: queries.InterfaceError :raises: queries.NotSupportedError :raises: queries.OperationalError :raises: queries.ProgrammingError """ return self._execute('callproc', name, args) def query(self, sql, parameters=None): """Issue a query asynchronously on the server, mogrifying the parameters against the sql statement and yielding the results as a :py:class:`Results <queries.tornado_session.Results>` object. You **must** free the results that are returned by this method to unlock the connection used to perform the query. Failure to do so will cause your Tornado application to run out of connections. :param str sql: The SQL statement :param dict parameters: A dictionary of query parameters :rtype: Results :raises: queries.DataError :raises: queries.DatabaseError :raises: queries.IntegrityError :raises: queries.InternalError :raises: queries.InterfaceError :raises: queries.NotSupportedError :raises: queries.OperationalError :raises: queries.ProgrammingError """ return self._execute('execute', sql, parameters) def validate(self): """Validate the session can connect or has open connections to PostgreSQL. As of ``1.10.3`` .. deprecated:: 1.10.3 As of 1.10.3, this method only warns about Deprecation :rtype: bool """ warnings.warn( 'All functionality removed from this method', DeprecationWarning) def _connect(self): """Connect to PostgreSQL, either by reusing a connection from the pool if possible, or by creating the new connection. 
        :rtype: psycopg2.extensions.connection
        :raises: pool.NoIdleConnectionsError

        """
        future = concurrent.Future()

        # Attempt to get a cached connection from the connection pool
        try:
            connection = self._pool_manager.get(self.pid, self)
            self._connections[connection.fileno()] = connection
            future.set_result(connection)

            # Add the connection to the IOLoop
            self._ioloop.add_handler(connection.fileno(),
                                     self._on_io_events,
                                     ioloop.IOLoop.WRITE)
        except pool.NoIdleConnectionsError:
            self._create_connection(future)
        return future

    def _create_connection(self, future):
        """Create a new PostgreSQL connection

        :param tornado.concurrent.Future future: future for new conn result

        """
        LOGGER.debug('Creating a new connection for %s', self.pid)

        # Create a new PostgreSQL connection
        kwargs = utils.uri_to_kwargs(self._uri)
        try:
            connection = self._psycopg2_connect(kwargs)
        except (psycopg2.Error, OSError, socket.error) as error:
            future.set_exception(error)
            return

        # Add the connection for use in _poll_connection
        fd = connection.fileno()
        self._connections[fd] = connection

        def on_connected(cf):
            """Invoked by the IOLoop when the future is complete for the
            connection

            :param Future cf: The future for the initial connection

            """
            if cf.exception():
                self._cleanup_fd(fd, True)
                future.set_exception(cf.exception())
            else:
                try:
                    # Add the connection to the pool
                    LOGGER.debug('Connection established for %s', self.pid)
                    self._pool_manager.add(self.pid, connection)
                except (ValueError, pool.PoolException) as err:
                    LOGGER.exception('Failed to add %r to the pool', self.pid)
                    self._cleanup_fd(fd)
                    future.set_exception(err)
                    return
                self._pool_manager.lock(self.pid, connection, self)

                # Added in because psycopg2cffi connects and leaves the
                # connection in a weird state: consts.STATUS_DATESTYLE,
                # returning from Connection._setup without setting the state
                # as const.STATUS_OK
                if utils.PYPY:
                    connection.status = extensions.STATUS_READY

                # Register the custom data types
                self._register_unicode(connection)
                self._register_uuid(connection)

                # Set the future result
                future.set_result(connection)

        # Add a future that fires once connected
        self._futures[fd] = concurrent.Future()
        self._ioloop.add_future(self._futures[fd], on_connected)

        # Add the connection to the IOLoop
        self._ioloop.add_handler(connection.fileno(),
                                 self._on_io_events,
                                 ioloop.IOLoop.WRITE)

    def _execute(self, method, query, parameters=None):
        """Issue a query asynchronously on the server, mogrifying the
        parameters against the sql statement and yielding the results
        as a :py:class:`Results <queries.tornado_session.Results>` object.

        This function reduces duplicate code for callproc and query by getting
        the class attribute for the method passed in as the function to call.

        :param str method: The method attribute to use
        :param str query: The SQL statement or Stored Procedure name
        :param list|dict parameters: A dictionary of query parameters
        :rtype: Results
        :raises: queries.DataError
        :raises: queries.DatabaseError
        :raises: queries.IntegrityError
        :raises: queries.InternalError
        :raises: queries.InterfaceError
        :raises: queries.NotSupportedError
        :raises: queries.OperationalError
        :raises: queries.ProgrammingError

        """
        future = concurrent.Future()

        def on_connected(cf):
            """Invoked by the future returned by self._connect"""
            if cf.exception():
                future.set_exception(cf.exception())
                return

            # Get the psycopg2 connection object and cursor
            conn = cf.result()
            cursor = self._get_cursor(conn)

            def completed(qf):
                """Invoked by the IOLoop when the future has completed"""
                if qf.exception():
                    self._incr_exceptions(conn)
                    err = qf.exception()
                    LOGGER.debug('Cleaning cursor due to exception: %r', err)
                    self._exec_cleanup(cursor, conn.fileno())
                    future.set_exception(err)
                else:
                    self._incr_executions(conn)
                    value = Results(cursor, self._exec_cleanup, conn.fileno())
                    future.set_result(value)

            # Setup a callback to wait on the query result
            self._futures[conn.fileno()] = concurrent.Future()

            # Add the future to the IOLoop
            self._ioloop.add_future(self._futures[conn.fileno()], completed)

            # Get the cursor, execute the query
            func = getattr(cursor, method)
            try:
                func(query, parameters)
            except Exception as error:
                future.set_exception(error)

        # Ensure the pool exists for the connection
        self._ensure_pool_exists()

        # Grab a connection to PostgreSQL
        self._ioloop.add_future(self._connect(), on_connected)

        # Return the future for the query result
        return future

    def _exec_cleanup(self, cursor, fd):
        """Close the cursor, remove any references to the fd in internal
        state and remove the fd from the ioloop.

        :param psycopg2.extensions.cursor cursor: The cursor to close
        :param int fd: The connection file descriptor

        """
        LOGGER.debug('Closing cursor and cleaning %s', fd)
        try:
            cursor.close()
        except (psycopg2.Error, psycopg2.Warning) as error:
            LOGGER.debug('Error closing the cursor: %s', error)

        self._cleanup_fd(fd)

        # If the cleanup callback exists, remove it
        if self._cleanup_callback:
            self._ioloop.remove_timeout(self._cleanup_callback)

        # Create a new cleanup callback to clean the pool of idle connections
        self._cleanup_callback = self._ioloop.add_timeout(
            self._ioloop.time() + self._pool_idle_ttl + 1,
            self._pool_manager.clean, self.pid)

    def _cleanup_fd(self, fd, close=False):
        """Ensure the socket is removed from the IOLoop, the
        connection stack, and futures stack.

        :param int fd: The fd # to cleanup

        """
        self._ioloop.remove_handler(fd)
        if fd in self._connections:
            try:
                self._pool_manager.free(self.pid, self._connections[fd])
            except pool.ConnectionNotFoundError:
                pass
            if close:
                self._connections[fd].close()
            del self._connections[fd]
        if fd in self._futures:
            del self._futures[fd]

    def _incr_exceptions(self, conn):
        """Increment the number of exceptions for the current connection.

        :param psycopg2.extensions.connection conn: the psycopg2 connection

        """
        self._pool_manager.get_connection(self.pid, conn).exceptions += 1

    def _incr_executions(self, conn):
        """Increment the number of executions for the current connection.

:param psycopg2.extensions.connection conn: the psycopg2 connection """ self._pool_manager.get_connection(self.pid, conn).executions += 1 def _on_io_events(self, fd=None, _events=None): """Invoked by Tornado's IOLoop when there are events for the fd :param int fd: The file descriptor for the event :param int _events: The events raised """ if fd not in self._connections: LOGGER.warning('Received IO event for non-existing connection') return self._poll_connection(fd) def _poll_connection(self, fd): """Check with psycopg2 to see what action to take. If the state is POLL_OK, we should have a pending callback for that fd. :param int fd: The socket fd for the postgresql connection """ try: state = self._connections[fd].poll() except (OSError, socket.error) as error: self._ioloop.remove_handler(fd) if fd in self._futures and not self._futures[fd].done(): self._futures[fd].set_exception( psycopg2.OperationalError('Connection error (%s)' % error) ) except (psycopg2.Error, psycopg2.Warning) as error: if fd in self._futures and not self._futures[fd].done(): self._futures[fd].set_exception(error) else: if state == extensions.POLL_OK: if fd in self._futures and not self._futures[fd].done(): self._futures[fd].set_result(True) elif state == extensions.POLL_WRITE: self._ioloop.update_handler(fd, ioloop.IOLoop.WRITE) elif state == extensions.POLL_READ: self._ioloop.update_handler(fd, ioloop.IOLoop.READ) elif state == extensions.POLL_ERROR: self._ioloop.remove_handler(fd) if fd in self._futures and not self._futures[fd].done(): self._futures[fd].set_exception( psycopg2.Error('Poll Error')) def _psycopg2_connect(self, kwargs): """Return a psycopg2 connection for the specified kwargs. Extend for use in async session adapters. 
:param dict kwargs: Keyword connection args :rtype: psycopg2.extensions.connection """ kwargs['async'] = True return psycopg2.connect(**kwargs) <MSG> Merge pull request #8 from orenitamar/patch-1 Update tornado_session.py <DFF> @@ -118,7 +118,7 @@ class Results(results.Results): requests. """ - yield self._cleanup(self.cursor, self._fd) + self._cleanup(self.cursor, self._fd) class TornadoSession(session.Session):
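The one-line patch above removes a stray ``yield`` in front of the cleanup call. As a generic, hypothetical sketch (the names below are illustrative, not the actual queries internals): in plain Python, a ``yield`` turns a method into a generator function, so its body is deferred until iteration and a bare call silently does nothing:

```python
# Hypothetical sketch of the bug class fixed above: a stray ``yield``
# turns a plain method into a generator, deferring its body.
class FakeResults(object):
    def __init__(self):
        self.cleaned = False

    def _cleanup(self):
        self.cleaned = True

    def free_buggy(self):
        # Generator function: nothing runs until the result is iterated,
        # so a bare call never performs the cleanup.
        yield self._cleanup()

    def free_fixed(self):
        # The patched form runs the cleanup immediately.
        self._cleanup()


results = FakeResults()
results.free_buggy()        # only builds a generator object
print(results.cleaned)      # False
results.free_fixed()
print(results.cleaned)      # True
```

Under Tornado's generator-based coroutines the details differ, but the intent of the patch is the same: perform the cleanup immediately rather than handing back a generator the caller may never drive.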
1
Merge pull request #8 from orenitamar/patch-1
1
.py
py
bsd-3-clause
gmr/queries
1271
<NME> README.rst
<BEF> Queries: PostgreSQL Simplified
==============================
*Queries* is a BSD licensed opinionated wrapper of the psycopg2_ library for
interacting with PostgreSQL.

The popular psycopg2_ package is a full-featured python client. Unfortunately
as a developer, you're often repeating the same steps to get started with your
applications that use it. Queries aims to reduce the complexity of psycopg2
while adding additional features to make writing PostgreSQL client applications
both fast and easy. Check out the `Usage`_ section below to see how easy it can
be.

- Internal connection pooling
- Asynchronous support for Tornado_
- Automatic registration of UUIDs, Unicode and Unicode Arrays
- Ability to directly access psycopg2 `connection` and `cursor` objects
- Connection information provided by URI
- Query results delivered as a generator based iterators
- Connection information provided by URI
- Query results delivered as a generator based iterators
- Automatically registered data-type support for UUIDs, Unicode and Unicode
  Arrays
- Ability to directly access psycopg2 ``connection`` and ``cursor`` objects
- Internal connection pooling

|Version| |Status| |Coverage| |License|

Documentation
-------------
Documentation is available at https://queries.readthedocs.org

Installation
------------
Usage
-----
Queries provides both a session based API and a stripped-down simple API for
interacting with PostgreSQL. If you're writing applications that may only have
one or two queries, the simple API may be useful. Instead of creating a session
object when using the simple API methods (`queries.query()` and
`queries.callproc()`), this is done for you. Simply pass in your query and
the URIs_ of the PostgreSQL server to connect to:

.. code:: python

a session:

.. code:: python

    session = queries.Session("postgresql://postgres@localhost:5432/postgres")

Queries built-in connection pooling will re-use connections when possible,
lowering the overhead of connecting and reconnecting.

with, Queries will use the current OS username for both. You can also omit the
URI when connecting to connect to localhost on port 5432 as the current OS
user, connecting to a database named for the current user. For example, if your
username is "fred" and you omit the URI when issuing `queries.query` the URI
that is constructed would be `pgsql://fred@localhost:5432/fred`.

Here are a few examples of using the Queries simple API:

method provides a quick and easy way to create a URI to pass into the various
methods.

.. code:: python

    >>> queries.uri("server-name", 5432, "dbname", "user", "pass")
    'postgresql://user:pass@server-name:5432/dbname'

Environment Variables
^^^^^^^^^^^^^^^^^^^^^

Currently Queries uses the following environment variables for tweaking various
configuration values. The supported ones are:

* ``QUERIES_MAX_POOL_SIZE`` - Modify the maximum size of the connection pool
  (default: 1)

Using the queries.Session class
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
To execute queries or call stored procedures, you start by creating an instance
of the ``queries.Session`` class. It can act as a context manager, meaning you
can use it with the ``with`` keyword and it will take care of cleaning up after
itself. For more information on the ``with`` keyword and context managers, see
PEP343_.

        tzinfo=psycopg2.tz.FixedOffsetTimezone(offset=-240, name=None))}

If your application is going to be performing multiple operations, you should use
the `queries.Session` class. It can act as a context manager, meaning you can
use it with the `with` keyword and it will take care of cleaning up after itself.
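The ``queries.uri`` call above returns ``'postgresql://user:pass@server-name:5432/dbname'``. As a dependency-free sketch of that URI layout (``pg_uri`` is a hypothetical stand-in, not part of the queries API), the string can be assembled like this:

```python
# Hypothetical helper mirroring the URI shape shown above; it is NOT the
# queries.uri implementation, just an illustration that runs stand-alone.
def pg_uri(host, port=5432, dbname='postgres', user='postgres', password=None):
    auth = user if password is None else '%s:%s' % (user, password)
    return 'postgresql://%s@%s:%s/%s' % (auth, host, port, dbname)


print(pg_uri('server-name', 5432, 'dbname', 'user', 'pass'))
# postgresql://user:pass@server-name:5432/dbname
print(pg_uri('localhost'))
# postgresql://postgres@localhost:5432/postgres
```

The second call shows the defaulting behavior the README describes: omitting everything but the host yields a URI for the default user and database on port 5432.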
In addition to both the `Session.query()` and `Session.callproc()` methods that
are similar to the simple API methods, the `queries.Session` class provides
access to the psycopg2 connection and cursor objects. It also provides methods
for managing transactions and to the LISTEN/NOTIFY functionality provided by
PostgreSQL. For full documentation around the Session class, see the
documentation_. The following example shows how a `queries.Session` object can
be used as a context manager.

.. code:: python

    >>> with queries.Session() as session:
    ...     for row in session.query('SELECT * FROM names'):
    ...         pprint.pprint(row)
    ...
    {'id': 1, 'name': u'Jacob'}
    {'id': 2, 'name': u'Mason'}
    {'id': 3, 'name': u'Ethan'}

**Using queries.Session.callproc**

This example uses ``queries.Session.callproc`` to execute a stored
In addition to providing a Pythonic, synchronous client API for PostgreSQL,
Queries provides a very similar asynchronous API for use with Tornado_ [*]_.
The only major difference API difference between `queries.TornadoSession` and
`queries.Session` is the `TornadoSession.query` and `TornadoSession.callproc`
methods return the entire result set instead of acting as an iterator over the
results.

The following is an example of using Queries in a Tornado_ web application.

    ...     pprint.pprint(results.as_dict())
    ...
    {'chr': u'A'}

**Asynchronous Queries with Tornado**

In addition to providing a Pythonic, synchronous client API for PostgreSQL,
Queries provides a very similar asynchronous API for use with Tornado. The
only major difference API difference between ``queries.TornadoSession`` and
``queries.Session`` is the ``TornadoSession.query`` and
``TornadoSession.callproc`` methods return the entire result set instead of
acting as an iterator over the results.

The following example uses ``TornadoSession.query`` in an asynchronous Tornado_
web application to send a JSON payload with the query result set.

.. code:: python

    from tornado import gen, ioloop, web
    import queries

    class MainHandler(web.RequestHandler):

        def initialize(self):
            self.session = queries.TornadoSession()

        @gen.coroutine
        def get(self):
            results = yield self.session.query('SELECT * FROM names')
            self.finish({'data': results.items()})
            results.free()

    application = web.Application([
        (r"/", MainHandler),
    ])

    if __name__ == "__main__":
        application.listen(8888)
        ioloop.IOLoop.instance().start()

Inspiration
-----------
Queries is inspired by `Kenneth Reitz's <https://github.com/kennethreitz/>`_ awesome
work on `requests <http://docs.python-requests.org/en/latest/>`_.

History
-------
Queries is a fork and enhancement of pgsql_wrapper_, which can be found in the
main GitHub repository of Queries as tags prior to version 1.2.0.

.. _pypi: https://pypi.python.org/pypi/queries
.. _psycopg2: https://pypi.python.org/pypi/psycopg2
.. _documentation: https://queries.readthedocs.org
.. _URI: http://www.postgresql.org/docs/9.3/static/libpq-connect.html#LIBPQ-CONNSTRING
.. _pgsql_wrapper: https://pypi.python.org/pypi/pgsql_wrapper
.. _Tornado: http://tornadoweb.org
.. _PEP343: http://legacy.python.org/dev/peps/pep-0343/
.. _psycopg2cffi: https://pypi.python.org/pypi/psycopg2cffi

.. |Version| image:: https://img.shields.io/pypi/v/queries.svg?
   :target: https://pypi.python.org/pypi/queries

.. |Status| image:: https://img.shields.io/travis/gmr/queries.svg?
   :target: https://travis-ci.org/gmr/queries

.. |Coverage| image:: https://img.shields.io/codecov/c/github/gmr/queries.svg?
   :target: https://codecov.io/github/gmr/queries?branch=master

.. |License| image:: https://img.shields.io/github/license/gmr/queries.svg?
   :target: https://github.com/gmr/queries

<MSG> Fix the backtick quoting
<DFF> @@ -11,7 +11,7 @@ PostgreSQL.
Key features include: - Internal connection pooling - Asynchronous support for Tornado_ - Automatic registration of UUIDs, Unicode and Unicode Arrays -- Ability to directly access psycopg2 `connection` and `cursor` objects +- Ability to directly access psycopg2 ``connection`` and ``cursor`` objects - Connection information provided by URI - Query results delivered as a generator based iterators @@ -32,10 +32,10 @@ queries is available via pypi and can be installed with easy_install or pip: Usage ----- Queries provides both a session based API and a stripped-down simple API for -interacting with PostgreSQL. If you're writing applications that may only have +interacting with PostgreSQL. If you're writing applications that will only have one or two queries, the simple API may be useful. Instead of creating a session -object when using the simple API methods (`queries.query()` and -`queries.callproc()`), this is done for you. Simply pass in your query and +object when using the simple API methods (``queries.query()`` and +``queries.callproc()``), this is done for you. Simply pass in your query and the URIs_ of the PostgreSQL server to connect to: .. code:: python @@ -51,8 +51,8 @@ When specifying a URI, if you omit the username and database name to connect with, Queries will use the current OS username for both. You can also omit the URI when connecting to connect to localhost on port 5432 as the current OS user, connecting to a database named for the current user. For example, if your -username is "fred" and you omit the URI when issuing `queries.query` the URI -that is constructed would be `pgsql://fred@localhost:5432/fred`. +username is "fred" and you omit the URI when issuing ``queries.query`` the URI +that is constructed would be ``pgsql://fred@localhost:5432/fred``. 
Here are a few examples of using the Queries simple API: @@ -82,15 +82,15 @@ Here are a few examples of using the Queries simple API: tzinfo=psycopg2.tz.FixedOffsetTimezone(offset=-240, name=None))} If your application is going to be performing multiple operations, you should use -the `queries.Session` class. It can act as a context manager, meaning you can -use it with the `with` keyword and it will take care of cleaning up after itself. +the ``queries.Session`` class. It can act as a context manager, meaning you can +use it with the ``with`` keyword and it will take care of cleaning up after itself. -In addition to both the `Session.query()` and `Session.callproc()` methods that -are similar to the simple API methods, the `queries.Session` class provides +In addition to both the ``Session.query()`` and ``Session.callproc()`` methods that +are similar to the simple API methods, the ``queries.Session`` class provides access to the psycopg2 connection and cursor objects. It also provides methods for managing transactions and to the LISTEN/NOTIFY functionality provided by PostgreSQL. For full documentation around the Session class, see the -documentation_. The following example shows how a `queries.Session` object can +documentation_. The following example shows how a ``queries.Session`` object can be used as a context manager. .. code:: python @@ -108,8 +108,8 @@ be used as a context manager. In addition to providing a Pythonic, synchronous client API for PostgreSQL, Queries provides a very similar asynchronous API for use with Tornado_ [*]_. -The only major difference API difference between `queries.TornadoSession` and -`queries.Session` is the `TornadoSession.query` and `TornadoSession.callproc` +The only major difference API difference between ``queries.TornadoSession`` and +``queries.Session`` is the ``TornadoSession.query`` and ``TornadoSession.callproc`` methods return the entire result set instead of acting as an iterator over the results. 
The following is an example of using Queries in a Tornado_ web application.
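All of the hunks above make the same mechanical change, because the two quoting styles mean different things in reStructuredText: double backticks produce an inline literal (rendered as fixed-width code), while single backticks mark interpreted text whose meaning depends on the default role.

```rst
`connection`    -- interpreted text; rendering depends on the default role
``connection``  -- inline literal, rendered as fixed-width code
```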
13
Fix the backtick quoting
13
.rst
rst
bsd-3-clause
gmr/queries
1272
<NME> pool_manager_tests.py
<BEF> """
Tests for Manager class in the pool module

"""
import unittest
import uuid

import mock

from queries import pool


def mock_connection():
    conn = mock.MagicMock('psycopg2.extensions.connection')
    conn.close = mock.Mock()
    conn.closed = True
    conn.isexecuting = mock.Mock(return_value=False)
    return conn


class ManagerTests(unittest.TestCase):

    def setUp(self):
        self.manager = pool.PoolManager.instance()

    def tearDown(self):
        self.manager.shutdown()

    def test_singleton_behavior(self):
        self.assertEqual(pool.PoolManager.instance(), self.manager)

    def test_has_pool_false(self):
        self.assertNotIn(mock.Mock(), self.manager)

    def test_has_pool_true(self):
        pid = str(uuid.uuid4())
        self.manager.create(pid)
        self.assertIn(pid, self.manager)

    def test_adding_to_pool(self):
        pid = str(uuid.uuid4())
        self.manager.create(pid)
        psycopg2_conn = mock_connection()
        self.manager.add(pid, psycopg2_conn)
        self.assertIn(psycopg2_conn, self.manager._pools[pid])

    def test_adding_to_pool_ensures_pool_exists(self):
        pid = str(uuid.uuid4())
        psycopg2_conn = mock.Mock()
        self.assertRaises(KeyError, self.manager.add, pid, psycopg2_conn)

    def test_clean_ensures_pool_exists(self):
        pid = str(uuid.uuid4())
        self.assertRaises(KeyError, self.manager.clean, pid)

    def test_clean_invokes_pool_clean(self):
        pid = str(uuid.uuid4())
        self.manager._pools[pid].clean = clean = mock.Mock()
        self.manager.clean(pid)
        clean.assert_called_once_with()

    def test_clean_removes_pool(self):
        pid = str(uuid.uuid4())
        with mock.patch('queries.pool.Pool') as Pool:
            self.manager._pools[pid] = Pool()
            self.manager.clean(pid)
        self.assertNotIn(pid, self.manager._pools)

    def test_create_prevents_duplicate_pool_id(self):
        pid = str(uuid.uuid4())
        with mock.patch('queries.pool.Pool'):
            self.manager.create(pid, 10, 10)
            self.assertRaises(KeyError, self.manager.create, pid, 10, 10)

    def test_create_passes_in_idle_ttl(self):
        pid = str(uuid.uuid4())
self.manager.create(pid, 12) self.assertEqual(self.manager._pools[pid].idle_ttl, 12) def test_create_passes_in_max_size(self): pid = str(uuid.uuid4()) self.manager.create(pid, 10, 16) self.assertEqual(self.manager._pools[pid].max_size, 16) def test_get_ensures_pool_exists(self): pid = str(uuid.uuid4()) session = mock.Mock() self.assertRaises(KeyError, self.manager.get, pid, session) def test_get_invokes_pool_get(self): pid = str(uuid.uuid4()) session = mock.Mock() self.manager.create(pid) self.manager._pools[pid].get = get = mock.Mock() self.manager.get(pid, session) get.assert_called_once_with(session) def test_free_ensures_pool_exists(self): pid = str(uuid.uuid4()) psycopg2_conn = mock_connection() self.assertRaises(KeyError, self.manager.free, pid, psycopg2_conn) def test_free_invokes_pool_free(self): pid = str(uuid.uuid4()) psycopg2_conn = mock_connection() self.manager.create(pid) self.manager._pools[pid].free = free = mock.Mock() self.manager.free(pid, psycopg2_conn) free.assert_called_once_with(psycopg2_conn) def test_has_connection_ensures_pool_exists(self): pid = str(uuid.uuid4()) self.assertRaises(KeyError, self.manager.has_connection, pid, None) def test_has_idle_connection_ensures_pool_exists(self): pid = str(uuid.uuid4()) self.assertRaises(KeyError, self.manager.has_idle_connection, pid) def test_has_connection_returns_false(self): pid = str(uuid.uuid4()) self.manager.create(pid) self.assertFalse(self.manager.has_connection(pid, mock.Mock())) def test_has_connection_returns_true(self): pid = str(uuid.uuid4()) self.manager.create(pid) psycopg2_conn = mock_connection() self.manager.add(pid, psycopg2_conn) self.assertTrue(self.manager.has_connection(pid, psycopg2_conn)) self.manager.remove(pid) def test_has_idle_connection_returns_false(self): pid = str(uuid.uuid4()) self.manager.create(pid) with mock.patch('queries.pool.Pool.idle_connections', new_callable=mock.PropertyMock) as idle_connections: idle_connections.return_value = 0 
self.assertFalse(self.manager.has_idle_connection(pid)) def test_has_idle_connection_returns_true(self): pid = str(uuid.uuid4()) self.manager.create(pid) with mock.patch('queries.pool.Pool.idle_connections', new_callable=mock.PropertyMock) as idle_connections: idle_connections.return_value = 5 self.assertTrue(self.manager.has_idle_connection(pid)) def test_is_full_ensures_pool_exists(self): pid = str(uuid.uuid4()) self.assertRaises(KeyError, self.manager.is_full, pid) def test_is_full_invokes_pool_is_full(self): pid = str(uuid.uuid4()) self.manager.create(pid) with mock.patch('queries.pool.Pool.is_full', new_callable=mock.PropertyMock) as is_full: self.manager.is_full(pid) is_full.assert_called_once_with() def test_lock_ensures_pool_exists(self): pid = str(uuid.uuid4()) self.assertRaises(KeyError, self.manager.lock, pid, None, None) def test_lock_invokes_pool_lock(self): pid = str(uuid.uuid4()) self.manager.create(pid) self.manager._pools[pid].lock = lock = mock.Mock() psycopg2_conn = mock.Mock() session = mock.Mock() self.manager.lock(pid, psycopg2_conn, session) lock.assert_called_once_with(psycopg2_conn, session) def test_remove_ensures_pool_exists(self): pid = str(uuid.uuid4()) self.assertRaises(KeyError, self.manager.remove, pid) def test_remove_invokes_pool_close(self): pid = str(uuid.uuid4()) self.manager.create(pid) self.manager._pools[pid].close = method = mock.Mock() self.manager.remove(pid) method.assert_called_once_with() def test_remove_deletes_pool(self): pid = str(uuid.uuid4()) self.manager.create(pid) self.manager._pools[pid].close = mock.Mock() self.manager.remove(pid) self.assertNotIn(pid, self.manager._pools) def test_remove_connection_ensures_pool_exists(self): pid = str(uuid.uuid4()) self.assertRaises(KeyError, self.manager.remove_connection, pid, None) def test_remove_connection_invokes_pool_remove(self): pid = str(uuid.uuid4()) self.manager.create(pid) self.manager._pools[pid].remove = remove = mock.Mock() psycopg2_conn = mock.Mock() 
self.manager.remove_connection(pid, psycopg2_conn) remove.assert_called_once_with(psycopg2_conn) def test_size_ensures_pool_exists(self): pid = str(uuid.uuid4()) self.assertRaises(KeyError, self.manager.size, pid) def test_size_returns_pool_length(self): pid = str(uuid.uuid4()) self.manager.create(pid) self.assertEqual(self.manager.size(pid), len(self.manager._pools[pid])) def test_set_idle_ttl_ensures_pool_exists(self): pid = str(uuid.uuid4()) self.assertRaises(KeyError, self.manager.set_idle_ttl, pid, None) def test_set_idle_ttl_invokes_pool_set_idle_ttl(self): pid = str(uuid.uuid4()) self.manager.create(pid) self.manager._pools[pid].set_idle_ttl = set_idle_ttl = mock.Mock() self.manager.set_idle_ttl(pid, 256) set_idle_ttl.assert_called_once_with(256) def test_set_max_size_ensures_pool_exists(self): pid = str(uuid.uuid4()) self.assertRaises(KeyError, self.manager.set_idle_ttl, pid, None) def test_set_max_size_invokes_pool_set_max_size(self): pid = str(uuid.uuid4()) self.manager.create(pid) self.manager._pools[pid].set_max_size = set_max_size = mock.Mock() self.manager.set_max_size(pid, 128) set_max_size.assert_called_once_with(128) def test_shutdown_closes_all(self): pid1, pid2 = str(uuid.uuid4()), str(uuid.uuid4()) self.manager.create(pid1) self.manager._pools[pid1].shutdown = method1 = mock.Mock() self.manager.create(pid2) self.manager._pools[pid2].shutdown = method2 = mock.Mock() self.manager.shutdown() method1.assert_called_once_with() method2.assert_called_once_with() <MSG> Update tests <DFF> @@ -51,9 +51,13 @@ class ManagerTests(unittest.TestCase): psycopg2_conn = mock.Mock() self.assertRaises(KeyError, self.manager.add, pid, psycopg2_conn) - def test_clean_ensures_pool_exists(self): + def test_ensures_pool_exists_raises_key_error(self): pid = str(uuid.uuid4()) - self.assertRaises(KeyError, self.manager.clean, pid) + self.assertRaises(KeyError, self.manager._ensure_pool_exists, pid) + + def test_clean_ensures_pool_exists_catches_key_error(self): + pid = 
str(uuid.uuid4()) + self.assertIsNone(self.manager.clean(pid)) def test_clean_invokes_pool_clean(self): pid = str(uuid.uuid4())
additions: 6
commit subject: Update tests
deletions: 2
file extension: .py
language: py
license: bsd-3-clause
repository: gmr/queries
id: 1275
<NME> tornado_multiple.rst <BEF> Concurrent Queries in Tornado ============================= The following example issues multiple concurrent queries in a single asynchronous request and will wait until all queries are complete before progressing: .. code:: python from tornado import gen, ioloop, web import queries class RequestHandler(web.RequestHandler): def initialize(self): self.session = queries.TornadoSession() @gen.coroutine def get(self, *args, **kwargs): # Issue the three queries and wait for them to finish before progressing q1result, q2result = yield [self.session.query('SELECT * FROM foo'), self.session.query('SELECT * FROM bar'), self.session.query('INSERT INTO requests VALUES (%s, %s, %s)', [self.remote_ip, self.request_uri, self.headers.get('User-Agent', '')])] # Close the connection self.finish({'q1result': q1result, 'q2result': q2result}) 'q2result': q2result.items()}) application = web.Application([ (r"/", RequestHandler) ]).listen(8888) ioloop.IOLoop.instance().start() q3result.free() if __name__ == "__main__": application = web.Application([ (r"/", RequestHandler) ]).listen(8888) ioloop.IOLoop.instance().start() <MSG> Updated Tornado examples <DFF> @@ -18,12 +18,14 @@ request and will wait until all queries are complete before progressing: def get(self, *args, **kwargs): # Issue the three queries and wait for them to finish before progressing - q1result, q2result = yield [self.session.query('SELECT * FROM foo'), - self.session.query('SELECT * FROM bar'), - self.session.query('INSERT INTO requests VALUES (%s, %s, %s)', - [self.remote_ip, - self.request_uri, - self.headers.get('User-Agent', '')])] + ((q1rows, q1result), + q2rows, q2result), + q3rows, q3result)) = yield [self.session.query('SELECT * FROM foo'), + self.session.query('SELECT * FROM bar'), + self.session.query('INSERT INTO requests VALUES (%s, %s, %s)', + [self.remote_ip, + self.request_uri, + self.headers.get('User-Agent', '')])] # Close the connection self.finish({'q1result': 
q1result, 'q2result': q2result}) @@ -31,4 +33,4 @@ request and will wait until all queries are complete before progressing: application = web.Application([ (r"/", RequestHandler) ]).listen(8888) - ioloop.IOLoop.instance().start() \ No newline at end of file + ioloop.IOLoop.instance().start()
additions: 9
commit subject: Updated Tornado examples
deletions: 7
file extension: .rst
language: rst
license: bsd-3-clause
repository: gmr/queries
id: 1277
<NME> tornado_session_tests.py
<BEF> ADDFILE
<MSG> Remove the outdated tests, stub the new ones
<DFF> @@ -0,0 +1,12 @@
+"""
+Tests for functionality in the tornado_session module
+
+"""
+import mock
+try:
+    import unittest2 as unittest
+except ImportError:
+    import unittest
+
+from queries import tornado_session
+
additions: 12
commit subject: Remove the outdated tests, stub the new ones
deletions: 0
file extension: .py
language: py
license: bsd-3-clause
repository: gmr/queries
id: 1279
<NME> .travis.yml <BEF> sudo: false language: python dist: xenial env: global: - PATH=$HOME/.local/bin:$PATH - AWS_DEFAULT_REGION=us-east-1 - secure: "inURdx4ldkJqQXL1TyvKImC3EnL5TixC1DlNMBYi5ttygwAk+mSSSw8Yc7klB6D1m6q79xUlHRk06vbz23CsXTM4AClC5Emrk6XN2GlUKl5WI+z+A2skI59buEhLWe7e2KzhB/AVx2E3TfKa0oY7raM0UUnaOkpV1Cj+mHKPIT0=" install: - if [[ $TRAVIS_PYTHON_VERSION == '2.6' ]]; then pip install -r requirements.txt; pip install psycopg2 unittest2; fi - if [[ $TRAVIS_PYTHON_VERSION != '2.7' ]]; then pip install -r requirements.txt; pip install psycopg2; fi - if [[ $TRAVIS_PYTHON_VERSION == 'pypy' ]]; then pip install -r requirements.txt; pip install psycopg2ct; fi - if [[ $TRAVIS_PYTHON_VERSION == '3.2' ]]; then pip install -r requirements.txt; pip install psycopg2; fi - if [[ $TRAVIS_PYTHON_VERSION == '3.3' ]]; then pip install -r requirements.txt; pip install psycopg2; fi services: - postgresql install: - pip install awscli - pip install -r requires/testing.txt - python setup.py develop script: nosetests after_success: - aws s3 cp .coverage "s3://com-gavinroy-travis/queries/$TRAVIS_BUILD_NUMBER/.coverage.${TRAVIS_PYTHON_VERSION}" jobs: include: - python: 2.7 - python: 3.4 - python: 3.5 - python: 3.6 - python: 3.7 - python: 3.8 - stage: coverage if: repo = gmr/queries services: [] python: 3.7 install: - pip install awscli coverage codecov script: - mkdir coverage - aws s3 cp --recursive s3://com-gavinroy-travis/queries/$TRAVIS_BUILD_NUMBER/ coverage - cd coverage - coverage combine - cd .. - mv coverage/.coverage . 
- coverage report after_success: codecov - stage: deploy if: repo = gmr/queries python: 3.6 services: [] install: true script: true after_success: true deploy: distributions: sdist bdist_wheel provider: pypi user: crad on: tags: true all_branches: true password: secure: UWQWui+QhAL1cz6oW/vqjEEp6/EPn1YOlItNJcWHNOO/WMMOlaTVYVUuXp+y+m52B+8PtYZZCTHwKCUKe97Grh291FLxgd0RJCawA40f4v1gmOFYLNKyZFBGfbC69/amxvGCcDvOPtpChHAlTIeokS5EQneVcAhXg2jXct0HTfI= <MSG> Stupid error in .travis.yml <DFF> @@ -10,7 +10,7 @@ python: install: - if [[ $TRAVIS_PYTHON_VERSION == '2.6' ]]; then pip install -r requirements.txt; pip install psycopg2 unittest2; fi - - if [[ $TRAVIS_PYTHON_VERSION != '2.7' ]]; then pip install -r requirements.txt; pip install psycopg2; fi + - if [[ $TRAVIS_PYTHON_VERSION == '2.7' ]]; then pip install -r requirements.txt; pip install psycopg2; fi - if [[ $TRAVIS_PYTHON_VERSION == 'pypy' ]]; then pip install -r requirements.txt; pip install psycopg2ct; fi - if [[ $TRAVIS_PYTHON_VERSION == '3.2' ]]; then pip install -r requirements.txt; pip install psycopg2; fi - if [[ $TRAVIS_PYTHON_VERSION == '3.3' ]]; then pip install -r requirements.txt; pip install psycopg2; fi
additions: 1
commit subject: Stupid error in .travis.yml
deletions: 1
file extension: .yml
language: travis
license: bsd-3-clause
repository: gmr/queries
id: 1281
<NME> pool.py <BEF> """ Connection Pooling """ import datetime import logging import os import threading import time import weakref import psycopg2 LOGGER = logging.getLogger(__name__) DEFAULT_IDLE_TTL = 60 DEFAULT_MAX_SIZE = int(os.environ.get('QUERIES_MAX_POOL_SIZE', 1)) class Connection(object): """Contains the handle to the connection, the current state of the connection and methods for manipulating the state of the connection. """ _lock = threading.Lock() def __init__(self, handle): self.handle = handle self.used_by = None self.executions = 0 self.exceptions = 0 def close(self): """Close the connection :raises: ConnectionBusyError """ LOGGER.debug('Connection %s closing', self.id) if self.busy and not self.closed: raise ConnectionBusyError(self) with self._lock: if not self.handle.closed: try: self.handle.close() except psycopg2.InterfaceError as error: LOGGER.error('Error closing socket: %s', error) @property def closed(self): """Return if the psycopg2 connection is closed. :rtype: bool """ return self.handle.closed != 0 @property def busy(self): """Return if the connection is currently executing a query or is locked by a session that still exists. :rtype: bool """ if self.handle.isexecuting(): return True elif self.used_by is None: return False return not self.used_by() is None @property def executing(self): """Return if the connection is currently executing a query :rtype: bool """ return self.handle.isexecuting() def free(self): """Remove the lock on the connection if the connection is not active :raises: ConnectionBusyError """ LOGGER.debug('Connection %s freeing', self.id) if self.handle.isexecuting(): raise ConnectionBusyError(self) with self._lock: self.used_by = None LOGGER.debug('Connection %s freed', self.id) @property def id(self): """Return id of the psycopg2 connection object :rtype: int """ return id(self.handle) def lock(self, session): """Lock the connection, ensuring that it is not busy and storing a weakref for the session. 
:param queries.Session session: The session to lock the connection with :raises: ConnectionBusyError """ if self.busy: raise ConnectionBusyError(self) with self._lock: self.used_by = weakref.ref(session) LOGGER.debug('Connection %s locked', self.id) @property def locked(self): """Return if the connection is currently exclusively locked :rtype: bool """ return self.used_by is not None class Pool(object): """A connection pool for gaining access to and managing connections""" _lock = threading.Lock() idle_start = None def __init__(self, pool_id, idle_ttl=DEFAULT_IDLE_TTL, max_size=DEFAULT_MAX_SIZE): self.connections = {} self._id = pool_id self.idle_ttl = idle_ttl self.max_size = max_size def __contains__(self, connection): """Return True if the pool contains the connection""" def __contains__(self, connection): """Return True if the pool contains the connection""" return id(connection) in self.connections def __len__(self): """Return the number of connections in the pool""" return len(self.connections) def add(self, connection): """Add a new connection to the pool :param connection: The connection to add to the pool :type connection: psycopg2.extensions.connection :raises: PoolFullError """ if id(connection) in self.connections: raise ValueError('Connection already exists in pool') if len(self.connections) == self.max_size: LOGGER.warning('Race condition found when adding new connection') try: connection.close() except (psycopg2.Error, psycopg2.Warning) as error: LOGGER.error('Error closing the conn that cant be used: %s', error) raise PoolFullError(self) with self._lock: self.connections[id(connection)] = Connection(connection) LOGGER.debug('Pool %s added connection %s', self.id, id(connection)) @property def busy_connections(self): """Return a list of active/busy connections :rtype: list """ return [c for c in self.connections.values() if c.busy and not c.closed] def clean(self): """Clean the pool by removing any closed connections and if the pool's idle has 
exceeded its idle TTL, remove all connections. """ LOGGER.debug('Cleaning the pool') for connection in [self.connections[k] for k in self.connections if self.connections[k].closed]: LOGGER.debug('Removing %s', connection.id) self.remove(connection.handle) if self.idle_duration > self.idle_ttl: self.close() LOGGER.debug('Pool %s cleaned', self.id) def close(self): """Close the pool by closing and removing all of the connections""" for cid in list(self.connections.keys()): self.remove(self.connections[cid].handle) LOGGER.debug('Pool %s closed', self.id) if self.idle_connections == list(self.connections.values()): with self._lock: self.idle_start = time.time() LOGGER.debug('Pool %s freed connection %s', self.id, id(connection)) def get(self, session): return [c for c in self.connections.values() if c.closed] def connection_handle(self, connection): """Return a connection object for the given psycopg2 connection :param connection: The connection to return a parent for :type connection: psycopg2.extensions.connection :rtype: Connection """ return self.connections[id(connection)] @property def executing_connections(self): """Return a list of connections actively executing queries :rtype: list """ return [c for c in self.connections.values() if c.executing] def free(self, connection): """Free the connection from use by the session that was using it. 
:param connection: The connection to free :type connection: psycopg2.extensions.connection :raises: ConnectionNotFoundError """ LOGGER.debug('Pool %s freeing connection %s', self.id, id(connection)) try: self.connection_handle(connection).free() except KeyError: raise ConnectionNotFoundError(self.id, id(connection)) if self.idle_connections == list(self.connections.values()): with self._lock: self.idle_start = self.time_method() LOGGER.debug('Pool %s freed connection %s', self.id, id(connection)) def get(self, session): """Return an idle connection and assign the session to the connection :param queries.Session session: The session to assign """ if self.idle_start is None: return 0 return time.time() - self.idle_start @property def is_full(self): connection.lock(session) if self.idle_start: with self._lock: self.idle_start = None return connection.handle raise NoIdleConnectionsError(self.id) @property def id(self): """Return the ID for this pool :rtype: str """ return self._id @property def idle_connections(self): """Return a list of idle connections :rtype: list """ return [c for c in self.connections.values() if not c.busy and not c.closed] @property def idle_duration(self): """Return the number of seconds that the pool has had no active connections. :rtype: float """ if self.idle_start is None: return 0 return self.time_method() - self.idle_start @property def is_full(self): """Return True if there are no more open slots for connections. 
:rtype: bool

        """
        return len(self.connections) >= self.max_size

    def lock(self, connection, session):
        """Explicitly lock the specified connection

        :type connection: psycopg2.extensions.connection
        :param connection: The connection to lock
        :param queries.Session session: The session to hold the lock

        """
        cid = id(connection)
        try:
            self.connection_handle(connection).lock(session)
        except KeyError:
            raise ConnectionNotFoundError(self.id, cid)
        else:
            if self.idle_start:
                with self._lock:
                    self.idle_start = None
        LOGGER.debug('Pool %s locked connection %s', self.id, cid)

    @property
    def locked_connections(self):
        """Return a list of all locked connections

        :rtype: list

        """
        return [c for c in self.connections.values() if c.locked]

    def remove(self, connection):
        """Remove the connection from the pool

        :param connection: The connection to remove
        :type connection: psycopg2.extensions.connection
        :raises: ConnectionNotFoundError
        :raises: ConnectionBusyError

        """
        cid = id(connection)
        if cid not in self.connections:
            raise ConnectionNotFoundError(self.id, cid)
        self.connection_handle(connection).close()
        with self._lock:
            del self.connections[cid]
        LOGGER.debug('Pool %s removed connection %s', self.id, cid)

    def report(self):
        """Return a report about the pool state and configuration.

        :rtype: dict

        """
        return {
            'connections': {
                'busy': len(self.busy_connections),
                'closed': len(self.closed_connections),
                'executing': len(self.executing_connections),
                'idle': len(self.idle_connections),
                'locked': len(self.busy_connections)
            },
            'exceptions': sum([c.exceptions
                               for c in self.connections.values()]),
            'executions': sum([c.executions
                               for c in self.connections.values()]),
            'full': self.is_full,
            'idle': {
                'duration': self.idle_duration,
                'ttl': self.idle_ttl
            },
            'max_size': self.max_size
        }

    def shutdown(self):
        """Forcefully shutdown the entire pool, closing all non-executing
        connections.
:raises: ConnectionBusyError

        """
        with self._lock:
            for cid in list(self.connections.keys()):
                if self.connections[cid].executing:
                    raise ConnectionBusyError(cid)
                if self.connections[cid].locked:
                    self.connections[cid].free()
                self.connections[cid].close()
                del self.connections[cid]

    def set_idle_ttl(self, ttl):
        """Set the idle ttl

        :param int ttl: The TTL when idle

        """
        with self._lock:
            self.idle_ttl = ttl

    def set_max_size(self, size):
        """Set the maximum number of connections for the pool

        :param int size: The maximum number of connections

        """
        with self._lock:
            self.max_size = size


class PoolManager(object):
    """The connection pool object implements behavior around connections and
    their use in queries.Session objects. We carry a pool id instead of the
    connection URI so that we will not be carrying the URI in memory,
    creating a possible security issue.

    """
    _lock = threading.Lock()
    _pools = {}

    def __contains__(self, pid):
        """Returns True if the pool exists

        :param str pid: Pool ID
        :rtype: bool

        """
        return pid in self.__class__._pools

    @classmethod
    def instance(cls):
        """Only allow a single PoolManager instance to exist, returning the
        handle for it.

        :rtype: PoolManager

        """
        if not hasattr(cls, '_instance'):
            with cls._lock:
                cls._instance = cls()
        return cls._instance

    @classmethod
    def add(cls, pid, connection):
        """Add a new connection and session to a pool.

        :param str pid: The pool id
        :type connection: psycopg2.extensions.connection
        :param connection: The connection to add to the pool

        """
        with cls._lock:
            cls._ensure_pool_exists(pid)
            cls._pools[pid].add(connection)

    @classmethod
    def clean(cls, pid):
        """Clean the specified pool, removing any closed connections or
        stale locks.
:param str pid: The pool id to clean

        """
        with cls._lock:
            cls._ensure_pool_exists(pid)
            cls._pools[pid].clean()

            # If the pool has no open connections, remove it
            if not len(cls._pools[pid]):
                del cls._pools[pid]

    @classmethod
    def create(cls, pid, idle_ttl=DEFAULT_IDLE_TTL, max_size=DEFAULT_MAX_SIZE):
        """Create a new pool, with the ability to pass in values to override
        the default idle TTL and the default maximum size.

        A pool's idle TTL defines the amount of time that a pool can be open
        without any sessions before it is removed.

        A pool's max size defines the maximum number of connections that can
        be added to the pool to prevent unbounded open connections.

        :param str pid: The pool ID
        :param int idle_ttl: Time in seconds for the idle TTL
        :param int max_size: The maximum pool size
        :raises: KeyError

        """
        if pid in cls._pools:
            raise KeyError('Pool %s already exists' % pid)
        with cls._lock:
            LOGGER.debug("Creating Pool: %s (%i/%i)", pid, idle_ttl, max_size)
            cls._pools[pid] = Pool(pid, idle_ttl, max_size)

    @classmethod
    def free(cls, pid, connection):
        """Free a connection that was locked by a session

        :param str pid: The pool ID
        :param connection: The connection to remove
        :type connection: psycopg2.extensions.connection

        """
        with cls._lock:
            LOGGER.debug('Freeing %s from pool %s', id(connection), pid)
            cls._ensure_pool_exists(pid)
            cls._pools[pid].free(connection)

    @classmethod
    def get(cls, pid, session):
        """Get an idle, unused connection from the pool. Once a connection has
        been retrieved, it will be marked as in-use until it is freed.
:param str pid: The pool ID
        :param queries.Session session: The session to assign to the connection
        :rtype: psycopg2.extensions.connection

        """
        with cls._lock:
            cls._ensure_pool_exists(pid)
            return cls._pools[pid].get(session)

    @classmethod
    def get_connection(cls, pid, connection):
        """Return the specified :class:`~queries.pool.Connection` from the
        pool.

        :param str pid: The pool ID
        :param connection: The connection to return for
        :type connection: psycopg2.extensions.connection
        :rtype: queries.pool.Connection

        """
        with cls._lock:
            return cls._pools[pid].connection_handle(connection)

    @classmethod
    def has_connection(cls, pid, connection):
        """Check to see if a pool has the specified connection

        :param str pid: The pool ID
        :param connection: The connection to check for
        :type connection: psycopg2.extensions.connection
        :rtype: bool

        """
        with cls._lock:
            cls._ensure_pool_exists(pid)
            return connection in cls._pools[pid]

    @classmethod
    def has_idle_connection(cls, pid):
        """Check to see if a pool has an idle connection

        :param str pid: The pool ID
        :rtype: bool

        """
        with cls._lock:
            cls._ensure_pool_exists(pid)
            return bool(cls._pools[pid].idle_connections)

    @classmethod
    def is_full(cls, pid):
        """Return a bool indicating if the specified pool is full

        :param str pid: The pool id
        :rtype: bool

        """
        with cls._lock:
            cls._ensure_pool_exists(pid)
            return cls._pools[pid].is_full

    @classmethod
    def lock(cls, pid, connection, session):
        """Explicitly lock the specified connection in the pool

        :param str pid: The pool id
        :type connection: psycopg2.extensions.connection
        :param connection: The connection to add to the pool
        :param queries.Session session: The session to hold the lock

        """
        with cls._lock:
            cls._ensure_pool_exists(pid)
            cls._pools[pid].lock(connection, session)

    @classmethod
    def remove_connection(cls, pid, connection):
        """Remove a connection from the pool, closing it if is open.
:param str pid: The pool ID :param connection: The connection to remove :type connection: psycopg2.extensions.connection :raises: ConnectionNotFoundError """ cls._ensure_pool_exists(pid) cls._pools[pid].remove(connection) @classmethod def set_idle_ttl(cls, pid, ttl): """Set the idle TTL for a pool, after which it will be destroyed. :param str pid: The pool id :param int ttl: The TTL for an idle pool """ with cls._lock: cls._ensure_pool_exists(pid) cls._pools[pid].set_idle_ttl(ttl) @classmethod def set_max_size(cls, pid, size): """Set the maximum number of connections for the specified pool :param str pid: The pool to set the size for :param int size: The maximum number of connections """ with cls._lock: cls._ensure_pool_exists(pid) cls._pools[pid].set_max_size(size) @classmethod def shutdown(cls): """Close all connections on in all pools""" for pid in list(cls._pools.keys()): cls._pools[pid].shutdown() LOGGER.info('Shutdown complete, all pooled connections closed') @classmethod def size(cls, pid): """Return the number of connections in the pool :param str pid: The pool id :rtype int """ with cls._lock: cls._ensure_pool_exists(pid) return len(cls._pools[pid]) @classmethod def report(cls): """Return the state of the all of the registered pools. :rtype: dict """ return { 'timestamp': datetime.datetime.utcnow().isoformat(), 'process': os.getpid(), 'pools': dict([(i, p.report()) for i, p in cls._pools.items()]) } @classmethod def _ensure_pool_exists(cls, pid): """Raise an exception if the pool has yet to be created or has been removed. 
:param str pid: The pool ID to check for
        :raises: KeyError

        """
        if pid not in cls._pools:
            raise KeyError('Pool %s has not been created' % pid)


class QueriesException(Exception):
    """Base Exception for all other Queries exceptions"""
    pass


class ConnectionException(QueriesException):
    def __init__(self, cid):
        self.cid = cid


class PoolException(QueriesException):
    def __init__(self, pid):
        self.pid = pid


class PoolConnectionException(PoolException):
    def __init__(self, pid, cid):
        self.pid = pid
        self.cid = cid


class ActivePoolError(PoolException):
    """Raised when removing a pool that has active connections"""
    def __str__(self):
        return 'Pool %s has at least one active connection' % self.pid


class ConnectionBusyError(ConnectionException):
    """Raised when trying to lock a connection that is already busy"""
    def __str__(self):
        return 'Connection %s is busy' % self.cid


class ConnectionNotFoundError(PoolConnectionException):
    """Raised if a specific connection is not found in the pool"""
    def __str__(self):
        return 'Connection %s not found in pool %s' % (self.cid, self.pid)


class NoIdleConnectionsError(PoolException):
    """Raised if a pool does not have any idle, open connections"""
    def __str__(self):
        return 'Pool %s has no idle connections' % self.pid


class PoolFullError(PoolException):
    """Raised when adding a connection to a pool that has hit max-size"""
    def __str__(self):
        return 'Pool %s is at its maximum capacity' % self.pid
<MSG> Add in an overridable time method for Tornado IOLoop.time
<DFF> @@ -132,11 +132,13 @@ class Pool(object):

     def __init__(self,
                  pool_id,
                  idle_ttl=DEFAULT_IDLE_TTL,
-                 max_size=DEFAULT_MAX_SIZE):
+                 max_size=DEFAULT_MAX_SIZE,
+                 time_method=None):
         self.connections = {}
         self._id = pool_id
         self.idle_ttl = idle_ttl
         self.max_size = max_size
+        self.time_method = time_method or time.time
def __contains__(self, connection): """Return True if the pool contains the connection""" @@ -207,7 +209,7 @@ class Pool(object): if self.idle_connections == list(self.connections.values()): with self._lock: - self.idle_start = time.time() + self.idle_start = self.time_method() LOGGER.debug('Pool %s freed connection %s', self.id, id(connection)) def get(self, session): @@ -258,7 +260,7 @@ class Pool(object): """ if self.idle_start is None: return 0 - return time.time() - self.idle_start + return self.time_method() - self.idle_start @property def is_full(self): @@ -375,6 +377,8 @@ class PoolManager(object): """Only allow a single PoolManager instance to exist, returning the handle for it. + :param callable time_method: Override the default :py:meth`time.time` + method for time calculations. Only applied on first invocation. :rtype: PoolManager """ @@ -407,13 +411,11 @@ class PoolManager(object): with cls._lock: cls._ensure_pool_exists(pid) cls._pools[pid].clean() - - # If the pool has no open connections, remove it - if not len(cls._pools[pid]): - del cls._pools[pid] + cls._maybe_remove_pool(pid) @classmethod - def create(cls, pid, idle_ttl=DEFAULT_IDLE_TTL, max_size=DEFAULT_MAX_SIZE): + def create(cls, pid, idle_ttl=DEFAULT_IDLE_TTL, max_size=DEFAULT_MAX_SIZE, + time_method=None): """Create a new pool, with the ability to pass in values to override the default idle TTL and the default maximum size. @@ -426,6 +428,8 @@ class PoolManager(object): :param str pid: The pool ID :param int idle_ttl: Time in seconds for the idle TTL :param int max_size: The maximum pool size + :param callable time_method: Override the use of :py:meth:`time.time` + method for time values. 
:raises: KeyError """ @@ -433,7 +437,7 @@ class PoolManager(object): raise KeyError('Pool %s already exists' % pid) with cls._lock: LOGGER.debug("Creating Pool: %s (%i/%i)", pid, idle_ttl, max_size) - cls._pools[pid] = Pool(pid, idle_ttl, max_size) + cls._pools[pid] = Pool(pid, idle_ttl, max_size, time_method) @classmethod def get(cls, pid, session): @@ -595,6 +599,16 @@ class PoolManager(object): if pid not in cls._pools: raise KeyError('Pool %s has not been created' % pid) + @classmethod + def _maybe_remove_pool(cls, pid): + """If the pool has no open connections, remove it + + :param str pid: The pool id to clean + + """ + if not len(cls._pools[pid]): + del cls._pools[pid] + class QueriesException(Exception): """Base Exception for all other Queries exceptions"""
23
Add in an overridable time method for Tornado IOLoop.time
9
.py
py
bsd-3-clause
gmr/queries
1282
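As an illustrative aside (not part of the dataset record above): the commit swaps the pool's hard-coded `time.time()` calls for an injectable `self.time_method`, so a caller can substitute a different clock such as Tornado's `IOLoop.time`. The sketch below uses a hypothetical `MiniPool` stand-in rather than the real `queries.pool.Pool`, and a deterministic fake clock in place of Tornado, to show the pattern the `+` lines in the diff introduce.

```python
import time


class MiniPool:
    """Toy stand-in for queries.pool.Pool, showing the injectable clock;
    time_method falls back to time.time exactly like the diff's + lines."""

    def __init__(self, time_method=None):
        self.time_method = time_method or time.time
        self.idle_start = None

    def mark_idle(self):
        # Mirrors Pool.free() recording when the pool went idle
        self.idle_start = self.time_method()

    @property
    def idle_duration(self):
        # Mirrors Pool.idle_duration, read from the injected clock
        if self.idle_start is None:
            return 0
        return self.time_method() - self.idle_start


# A deterministic fake clock stands in for an IOLoop-style time source
clock = iter([100.0, 161.0])
pool = MiniPool(time_method=lambda: next(clock))
pool.mark_idle()            # reads 100.0 from the fake clock
print(pool.idle_duration)   # 61.0
```

With Tornado, the same hook would presumably be wired up by passing the IOLoop's `time` method as `time_method`, which is what the commit subject refers to.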
<NME> pool.py <BEF> """ Connection Pooling """ import datetime import logging import os import threading import time import weakref import psycopg2 LOGGER = logging.getLogger(__name__) DEFAULT_IDLE_TTL = 60 DEFAULT_MAX_SIZE = int(os.environ.get('QUERIES_MAX_POOL_SIZE', 1)) class Connection(object): """Contains the handle to the connection, the current state of the connection and methods for manipulating the state of the connection. """ _lock = threading.Lock() def __init__(self, handle): self.handle = handle self.used_by = None self.executions = 0 self.exceptions = 0 def close(self): """Close the connection :raises: ConnectionBusyError """ LOGGER.debug('Connection %s closing', self.id) if self.busy and not self.closed: raise ConnectionBusyError(self) with self._lock: if not self.handle.closed: try: self.handle.close() except psycopg2.InterfaceError as error: LOGGER.error('Error closing socket: %s', error) @property def closed(self): """Return if the psycopg2 connection is closed. :rtype: bool """ return self.handle.closed != 0 @property def busy(self): """Return if the connection is currently executing a query or is locked by a session that still exists. :rtype: bool """ if self.handle.isexecuting(): return True elif self.used_by is None: return False return not self.used_by() is None @property def executing(self): """Return if the connection is currently executing a query :rtype: bool """ return self.handle.isexecuting() def free(self): """Remove the lock on the connection if the connection is not active :raises: ConnectionBusyError """ LOGGER.debug('Connection %s freeing', self.id) if self.handle.isexecuting(): raise ConnectionBusyError(self) with self._lock: self.used_by = None LOGGER.debug('Connection %s freed', self.id) @property def id(self): """Return id of the psycopg2 connection object :rtype: int """ return id(self.handle) def lock(self, session): """Lock the connection, ensuring that it is not busy and storing a weakref for the session. 
:param queries.Session session: The session to lock the connection with :raises: ConnectionBusyError """ if self.busy: raise ConnectionBusyError(self) with self._lock: self.used_by = weakref.ref(session) LOGGER.debug('Connection %s locked', self.id) @property def locked(self): """Return if the connection is currently exclusively locked :rtype: bool """ return self.used_by is not None class Pool(object): """A connection pool for gaining access to and managing connections""" _lock = threading.Lock() idle_start = None def __init__(self, pool_id, idle_ttl=DEFAULT_IDLE_TTL, max_size=DEFAULT_MAX_SIZE): self.connections = {} self._id = pool_id self.idle_ttl = idle_ttl self.max_size = max_size def __contains__(self, connection): """Return True if the pool contains the connection""" def __contains__(self, connection): """Return True if the pool contains the connection""" return id(connection) in self.connections def __len__(self): """Return the number of connections in the pool""" return len(self.connections) def add(self, connection): """Add a new connection to the pool :param connection: The connection to add to the pool :type connection: psycopg2.extensions.connection :raises: PoolFullError """ if id(connection) in self.connections: raise ValueError('Connection already exists in pool') if len(self.connections) == self.max_size: LOGGER.warning('Race condition found when adding new connection') try: connection.close() except (psycopg2.Error, psycopg2.Warning) as error: LOGGER.error('Error closing the conn that cant be used: %s', error) raise PoolFullError(self) with self._lock: self.connections[id(connection)] = Connection(connection) LOGGER.debug('Pool %s added connection %s', self.id, id(connection)) @property def busy_connections(self): """Return a list of active/busy connections :rtype: list """ return [c for c in self.connections.values() if c.busy and not c.closed] def clean(self): """Clean the pool by removing any closed connections and if the pool's idle has 
exceeded its idle TTL, remove all connections. """ LOGGER.debug('Cleaning the pool') for connection in [self.connections[k] for k in self.connections if self.connections[k].closed]: LOGGER.debug('Removing %s', connection.id) self.remove(connection.handle) if self.idle_duration > self.idle_ttl: self.close() LOGGER.debug('Pool %s cleaned', self.id) def close(self): """Close the pool by closing and removing all of the connections""" for cid in list(self.connections.keys()): self.remove(self.connections[cid].handle) LOGGER.debug('Pool %s closed', self.id) if self.idle_connections == list(self.connections.values()): with self._lock: self.idle_start = time.time() LOGGER.debug('Pool %s freed connection %s', self.id, id(connection)) def get(self, session): return [c for c in self.connections.values() if c.closed] def connection_handle(self, connection): """Return a connection object for the given psycopg2 connection :param connection: The connection to return a parent for :type connection: psycopg2.extensions.connection :rtype: Connection """ return self.connections[id(connection)] @property def executing_connections(self): """Return a list of connections actively executing queries :rtype: list """ return [c for c in self.connections.values() if c.executing] def free(self, connection): """Free the connection from use by the session that was using it. 
:param connection: The connection to free :type connection: psycopg2.extensions.connection :raises: ConnectionNotFoundError """ LOGGER.debug('Pool %s freeing connection %s', self.id, id(connection)) try: self.connection_handle(connection).free() except KeyError: raise ConnectionNotFoundError(self.id, id(connection)) if self.idle_connections == list(self.connections.values()): with self._lock: self.idle_start = self.time_method() LOGGER.debug('Pool %s freed connection %s', self.id, id(connection)) def get(self, session): """Return an idle connection and assign the session to the connection :param queries.Session session: The session to assign """ if self.idle_start is None: return 0 return time.time() - self.idle_start @property def is_full(self): connection.lock(session) if self.idle_start: with self._lock: self.idle_start = None return connection.handle raise NoIdleConnectionsError(self.id) @property def id(self): """Return the ID for this pool :rtype: str """ return self._id @property def idle_connections(self): """Return a list of idle connections :rtype: list """ return [c for c in self.connections.values() if not c.busy and not c.closed] @property def idle_duration(self): """Return the number of seconds that the pool has had no active connections. :rtype: float """ if self.idle_start is None: return 0 return self.time_method() - self.idle_start @property def is_full(self): """Return True if there are no more open slots for connections. 
:rtype: bool """ return len(self.connections) >= self.max_size def lock(self, connection, session): """Explicitly lock the specified connection :type connection: psycopg2.extensions.connection :param connection: The connection to lock :param queries.Session session: The session to hold the lock """ cid = id(connection) try: self.connection_handle(connection).lock(session) except KeyError: raise ConnectionNotFoundError(self.id, cid) else: if self.idle_start: with self._lock: self.idle_start = None LOGGER.debug('Pool %s locked connection %s', self.id, cid) @property def locked_connections(self): """Return a list of all locked connections :rtype: list """ return [c for c in self.connections.values() if c.locked] def remove(self, connection): """Remove the connection from the pool :param connection: The connection to remove :type connection: psycopg2.extensions.connection :raises: ConnectionNotFoundError :raises: ConnectionBusyError """ cid = id(connection) if cid not in self.connections: raise ConnectionNotFoundError(self.id, cid) self.connection_handle(connection).close() with self._lock: del self.connections[cid] LOGGER.debug('Pool %s removed connection %s', self.id, cid) def report(self): """Return a report about the pool state and configuration. :rtype: dict """ return { 'connections': { 'busy': len(self.busy_connections), 'closed': len(self.closed_connections), 'executing': len(self.executing_connections), 'idle': len(self.idle_connections), 'locked': len(self.busy_connections) }, 'exceptions': sum([c.exceptions for c in self.connections.values()]), 'executions': sum([c.executions for c in self.connections.values()]), """Only allow a single PoolManager instance to exist, returning the handle for it. :rtype: PoolManager """ def shutdown(self): """Forcefully shutdown the entire pool, closing all non-executing connections. 
:raises: ConnectionBusyError """ with self._lock: for cid in list(self.connections.keys()): if self.connections[cid].executing: raise ConnectionBusyError(cid) if self.connections[cid].locked: self.connections[cid].free() self.connections[cid].close() del self.connections[cid] def set_idle_ttl(self, ttl): """Set the idle ttl :param int ttl: The TTL when idle """ with self._lock: self.idle_ttl = ttl def set_max_size(self, size): with cls._lock: cls._ensure_pool_exists(pid) cls._pools[pid].clean() # If the pool has no open connections, remove it if not len(cls._pools[pid]): del cls._pools[pid] @classmethod def create(cls, pid, idle_ttl=DEFAULT_IDLE_TTL, max_size=DEFAULT_MAX_SIZE): """Create a new pool, with the ability to pass in values to override the default idle TTL and the default maximum size. their use in queries.Session objects. We carry a pool id instead of the connection URI so that we will not be carrying the URI in memory, creating a possible security issue. """ :param str pid: The pool ID :param int idle_ttl: Time in seconds for the idle TTL :param int max_size: The maximum pool size :raises: KeyError """ raise KeyError('Pool %s already exists' % pid) with cls._lock: LOGGER.debug("Creating Pool: %s (%i/%i)", pid, idle_ttl, max_size) cls._pools[pid] = Pool(pid, idle_ttl, max_size) @classmethod def get(cls, pid, session): :rtype: PoolManager """ if not hasattr(cls, '_instance'): with cls._lock: cls._instance = cls() return cls._instance @classmethod def add(cls, pid, connection): """Add a new connection and session to a pool. :param str pid: The pool id :type connection: psycopg2.extensions.connection :param connection: The connection to add to the pool """ with cls._lock: cls._ensure_pool_exists(pid) cls._pools[pid].add(connection) @classmethod def clean(cls, pid): """Clean the specified pool, removing any closed connections or stale locks. 
:param str pid: The pool id to clean """ with cls._lock: try: cls._ensure_pool_exists(pid) except KeyError: LOGGER.debug('Pool clean invoked against missing pool %s', pid) return cls._pools[pid].clean() cls._maybe_remove_pool(pid) @classmethod def create(cls, pid, idle_ttl=DEFAULT_IDLE_TTL, max_size=DEFAULT_MAX_SIZE, time_method=None): """Create a new pool, with the ability to pass in values to override the default idle TTL and the default maximum size. A pool's idle TTL defines the amount of time that a pool can be open without any sessions before it is removed. A pool's max size defines the maximum number of connections that can be added to the pool to prevent unbounded open connections. :param str pid: The pool ID :param int idle_ttl: Time in seconds for the idle TTL :param int max_size: The maximum pool size :param callable time_method: Override the use of :py:meth:`time.time` method for time values. :raises: KeyError """ if pid in cls._pools: raise KeyError('Pool %s already exists' % pid) with cls._lock: LOGGER.debug("Creating Pool: %s (%i/%i)", pid, idle_ttl, max_size) cls._pools[pid] = Pool(pid, idle_ttl, max_size, time_method) @classmethod def free(cls, pid, connection): """Free a connection that was locked by a session :param str pid: The pool ID :param connection: The connection to remove :type connection: psycopg2.extensions.connection """ with cls._lock: LOGGER.debug('Freeing %s from pool %s', id(connection), pid) cls._ensure_pool_exists(pid) cls._pools[pid].free(connection) @classmethod def get(cls, pid, session): """Get an idle, unused connection from the pool. Once a connection has been retrieved, it will be marked as in-use until it is freed. 
:param str pid: The pool ID :param queries.Session session: The session to assign to the connection :rtype: psycopg2.extensions.connection """ with cls._lock: cls._ensure_pool_exists(pid) return cls._pools[pid].get(session) @classmethod def get_connection(cls, pid, connection): """Return the specified :class:`~queries.pool.Connection` from the pool. :param str pid: The pool ID :param connection: The connection to return for :type connection: psycopg2.extensions.connection :rtype: queries.pool.Connection """ with cls._lock: return cls._pools[pid].connection_handle(connection) @classmethod def has_connection(cls, pid, connection): """Check to see if a pool has the specified connection :param str pid: The pool ID :param connection: The connection to check for :type connection: psycopg2.extensions.connection :rtype: bool """ with cls._lock: cls._ensure_pool_exists(pid) return connection in cls._pools[pid] @classmethod def has_idle_connection(cls, pid): """Check to see if a pool has an idle connection :param str pid: The pool ID :rtype: bool """ with cls._lock: cls._ensure_pool_exists(pid) return bool(cls._pools[pid].idle_connections) @classmethod def is_full(cls, pid): """Return a bool indicating if the specified pool is full :param str pid: The pool id :rtype: bool """ with cls._lock: cls._ensure_pool_exists(pid) return cls._pools[pid].is_full @classmethod def lock(cls, pid, connection, session): """Explicitly lock the specified connection in the pool :param str pid: The pool id :type connection: psycopg2.extensions.connection :param connection: The connection to add to the pool :param queries.Session session: The session to hold the lock """ if pid not in cls._pools: raise KeyError('Pool %s has not been created' % pid) class QueriesException(Exception): """Base Exception for all other Queries exceptions""" @classmethod def remove_connection(cls, pid, connection): """Remove a connection from the pool, closing it if is open. 
:param str pid: The pool ID :param connection: The connection to remove :type connection: psycopg2.extensions.connection :raises: ConnectionNotFoundError """ cls._ensure_pool_exists(pid) cls._pools[pid].remove(connection) @classmethod def set_idle_ttl(cls, pid, ttl): """Set the idle TTL for a pool, after which it will be destroyed. :param str pid: The pool id :param int ttl: The TTL for an idle pool """ with cls._lock: cls._ensure_pool_exists(pid) cls._pools[pid].set_idle_ttl(ttl) @classmethod def set_max_size(cls, pid, size): """Set the maximum number of connections for the specified pool :param str pid: The pool to set the size for :param int size: The maximum number of connections """ with cls._lock: cls._ensure_pool_exists(pid) cls._pools[pid].set_max_size(size) @classmethod def shutdown(cls): """Close all connections on in all pools""" for pid in list(cls._pools.keys()): cls._pools[pid].shutdown() LOGGER.info('Shutdown complete, all pooled connections closed') @classmethod def size(cls, pid): """Return the number of connections in the pool :param str pid: The pool id :rtype int """ with cls._lock: cls._ensure_pool_exists(pid) return len(cls._pools[pid]) @classmethod def report(cls): """Return the state of the all of the registered pools. :rtype: dict """ return { 'timestamp': datetime.datetime.utcnow().isoformat(), 'process': os.getpid(), 'pools': dict([(i, p.report()) for i, p in cls._pools.items()]) } @classmethod def _ensure_pool_exists(cls, pid): """Raise an exception if the pool has yet to be created or has been removed. 
:param str pid: The pool ID to check for :raises: KeyError """ if pid not in cls._pools: raise KeyError('Pool %s has not been created' % pid) @classmethod def _maybe_remove_pool(cls, pid): """If the pool has no open connections, remove it :param str pid: The pool id to clean """ if not len(cls._pools[pid]): del cls._pools[pid] class QueriesException(Exception): """Base Exception for all other Queries exceptions""" pass class ConnectionException(QueriesException): def __init__(self, cid): self.cid = cid class PoolException(QueriesException): def __init__(self, pid): self.pid = pid class PoolConnectionException(PoolException): def __init__(self, pid, cid): self.pid = pid self.cid = cid class ActivePoolError(PoolException): """Raised when removing a pool that has active connections""" def __str__(self): return 'Pool %s has at least one active connection' % self.pid class ConnectionBusyError(ConnectionException): """Raised when trying to lock a connection that is already busy""" def __str__(self): return 'Connection %s is busy' % self.cid class ConnectionNotFoundError(PoolConnectionException): """Raised if a specific connection is not found in the pool""" def __str__(self): return 'Connection %s not found in pool %s' % (self.cid, self.pid) class NoIdleConnectionsError(PoolException): """Raised if a pool does not have any idle, open connections""" def __str__(self): return 'Pool %s has no idle connections' % self.pid class PoolFullError(PoolException): """Raised when adding a connection to a pool that has hit max-size""" def __str__(self): return 'Pool %s is at its maximum capacity' % self.pid <MSG> Add in an overridable time method for Tornado IOLoop.time <DFF> @@ -132,11 +132,13 @@ class Pool(object): def __init__(self, pool_id, idle_ttl=DEFAULT_IDLE_TTL, - max_size=DEFAULT_MAX_SIZE): + max_size=DEFAULT_MAX_SIZE, + time_method=None): self.connections = {} self._id = pool_id self.idle_ttl = idle_ttl self.max_size = max_size + self.time_method = time_method or time.time 
def __contains__(self, connection): """Return True if the pool contains the connection""" @@ -207,7 +209,7 @@ class Pool(object): if self.idle_connections == list(self.connections.values()): with self._lock: - self.idle_start = time.time() + self.idle_start = self.time_method() LOGGER.debug('Pool %s freed connection %s', self.id, id(connection)) def get(self, session): @@ -258,7 +260,7 @@ class Pool(object): """ if self.idle_start is None: return 0 - return time.time() - self.idle_start + return self.time_method() - self.idle_start @property def is_full(self): @@ -375,6 +377,8 @@ class PoolManager(object): """Only allow a single PoolManager instance to exist, returning the handle for it. + :param callable time_method: Override the default :py:meth`time.time` + method for time calculations. Only applied on first invocation. :rtype: PoolManager """ @@ -407,13 +411,11 @@ class PoolManager(object): with cls._lock: cls._ensure_pool_exists(pid) cls._pools[pid].clean() - - # If the pool has no open connections, remove it - if not len(cls._pools[pid]): - del cls._pools[pid] + cls._maybe_remove_pool(pid) @classmethod - def create(cls, pid, idle_ttl=DEFAULT_IDLE_TTL, max_size=DEFAULT_MAX_SIZE): + def create(cls, pid, idle_ttl=DEFAULT_IDLE_TTL, max_size=DEFAULT_MAX_SIZE, + time_method=None): """Create a new pool, with the ability to pass in values to override the default idle TTL and the default maximum size. @@ -426,6 +428,8 @@ class PoolManager(object): :param str pid: The pool ID :param int idle_ttl: Time in seconds for the idle TTL :param int max_size: The maximum pool size + :param callable time_method: Override the use of :py:meth:`time.time` + method for time values. 
:raises: KeyError """ @@ -433,7 +437,7 @@ class PoolManager(object): raise KeyError('Pool %s already exists' % pid) with cls._lock: LOGGER.debug("Creating Pool: %s (%i/%i)", pid, idle_ttl, max_size) - cls._pools[pid] = Pool(pid, idle_ttl, max_size) + cls._pools[pid] = Pool(pid, idle_ttl, max_size, time_method) @classmethod def get(cls, pid, session): @@ -595,6 +599,16 @@ class PoolManager(object): if pid not in cls._pools: raise KeyError('Pool %s has not been created' % pid) + @classmethod + def _maybe_remove_pool(cls, pid): + """If the pool has no open connections, remove it + + :param str pid: The pool id to clean + + """ + if not len(cls._pools[pid]): + del cls._pools[pid] + class QueriesException(Exception): """Base Exception for all other Queries exceptions"""
23
Add in an overridable time method for Tornado IOLoop.time
9
.py
py
bsd-3-clause
gmr/queries
1283
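The commit above threads an injectable clock through `Pool` so that a Tornado application can substitute `IOLoop.time` for the default `time.time` when computing idle durations. A minimal, self-contained sketch of that pattern follows; the `Pool` and `FakeClock` classes here are hypothetical stand-ins for the real `queries.pool.Pool`, trimmed to just the timing logic, with a hand-advanced fake clock to make the idle measurement deterministic:

```python
import time


class Pool(object):
    """Minimal sketch of a pool whose clock can be injected (a hypothetical
    stand-in for queries.pool.Pool, not the library's actual class)."""

    def __init__(self, pool_id, idle_ttl=60, time_method=None):
        self._id = pool_id
        self.idle_ttl = idle_ttl
        # Fall back to the wall clock when no time source is injected;
        # a Tornado app could pass its IOLoop's time method here instead.
        self.time_method = time_method or time.time
        self.idle_start = None

    def free(self):
        # Record the moment the pool went idle, using the injected clock
        self.idle_start = self.time_method()

    @property
    def idle_duration(self):
        """Seconds the pool has been idle, measured on the injected clock."""
        if self.idle_start is None:
            return 0
        return self.time_method() - self.idle_start


class FakeClock(object):
    """A hand-advanced clock that makes the idle timing deterministic."""

    def __init__(self, now=100.0):
        self.now = now

    def __call__(self):
        return self.now


clock = FakeClock()
pool = Pool('example', idle_ttl=60, time_method=clock)
pool.free()                # pool goes idle at t=100.0
clock.now += 42.0          # advance the fake clock
print(pool.idle_duration)  # 42.0
```

Because the clock is just a callable, the same pool code serves synchronous callers (wall clock) and event-loop callers (monotonic loop time) without modification.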
<NME> session.py
<BEF> """The Session class allows for a unified (and simplified) view of
interfacing with a PostgreSQL database server. Connection details are passed
in as a PostgreSQL URI and connections are pooled by default, allowing for
reuse of connections across modules in the Python runtime without having to
pass around the object handle.

While you can still access the raw `psycopg2` connection and cursor objects to
provide ultimate flexibility in how you use the queries.Session object, there
are convenience methods designed to simplify the interaction with PostgreSQL.
For `psycopg2` functionality outside of what is exposed in Session, simply
use the Session.connection or Session.cursor properties to gain access to
either object just as you would in a program using psycopg2 directly.

"""
import contextlib
import logging

import psycopg2
from psycopg2 import extensions, extras


DEFAULT_ENCODING = 'UTF8'


class Session(object):
    """The Session class allows for a unified (and simplified) view of
    interfacing with a PostgreSQL database server. The Session object can
    act as a context manager, providing automated cleanup and simple,
    Pythonic way of interacting with the object.
:param str uri: PostgreSQL connection URI :param psycopg2.extensions.cursor: The cursor type to use :param int pool_idle_ttl: How long idle pools keep connections open :param int pool_max_size: The maximum size of the pool to use :param bool use_pool: Use the connection pool """ _from_pool = False _tpc_id = None PREPARED = extensions.STATUS_PREPARED READY = extensions.STATUS_READY SETUP = extensions.STATUS_SETUP # Transaction status constants TX_ACTIVE = extensions.TRANSACTION_STATUS_ACTIVE TX_IDLE = extensions.TRANSACTION_STATUS_IDLE TX_INERROR = extensions.TRANSACTION_STATUS_INERROR TX_INTRANS = extensions.TRANSACTION_STATUS_INTRANS TX_UNKNOWN = extensions.TRANSACTION_STATUS_UNKNOWN def __init__(self, uri=DEFAULT_URI, cursor_factory=extras.RealDictCursor, pool_idle_ttl=pool.DEFAULT_IDLE_TTL, pool_max_size=pool.DEFAULT_MAX_SIZE, autocommit=True): """Connect to a PostgreSQL server using the module wide connection and set the isolation level. :param str uri: PostgreSQL connection URI :param psycopg2.extensions.cursor: The cursor type to use :param int pool_idle_ttl: How long idle pools keep connections open :param int pool_max_size: The maximum size of the pool to use """ self._pool_manager = pool.PoolManager.instance() self._uri = uri # Ensure the pool exists in the pool manager if self.pid not in self._pool_manager: self._pool_manager.create(self.pid, pool_idle_ttl, pool_max_size) self._conn = self._connect() self._cursor_factory = cursor_factory self._cursor = self._get_cursor(self._conn) self._autocommit(autocommit) @property def backend_pid(self): """Return the backend process ID of the PostgreSQL server that this session is connected to. :rtype: int """ return self._conn.get_backend_pid() def callproc(self, name, args=None): """Call a stored procedure on the server, returning the results in a :py:class:`queries.Results` instance. 
:param str name: The procedure name :param list args: The list of arguments to pass in :rtype: queries.Results :raises: queries.DataError :raises: queries.DatabaseError :raises: queries.IntegrityError :raises: queries.InternalError :raises: queries.InterfaceError :raises: queries.NotSupportedError :raises: queries.OperationalError :raises: queries.ProgrammingError """ try: self._cursor.callproc(name, args) except psycopg2.Error as err: self._incr_exceptions() raise err finally: raise AssertionError('Connection not open') self._conn.close() if self._use_pool: pool.remove_connection(self._uri) self._conn = None self._cursor = None :raises: psycopg2.InterfaceError """ if not self._conn: raise psycopg2.InterfaceError('Connection not open') LOGGER.info('Closing connection %r in %s', self._conn, self.pid) self._pool_manager.free(self.pid, self._conn) self._pool_manager.remove_connection(self.pid, self._conn) # Un-assign the connection and cursor self._conn, self._cursor = None, None @property def connection(self): """Return the current open connection to PostgreSQL. :rtype: psycopg2.extensions.connection """ return self._conn @property def cursor(self): """Return the current, active cursor for the open connection. :rtype: psycopg2.extensions.cursor """ return self._cursor @property def encoding(self): """Return the current client encoding value. :rtype: str """ return self._conn.encoding @property def notices(self): """Return a list of up to the last 50 server notices sent to the client. :rtype: list """ return self._conn.notices @property def pid(self): """Return the pool ID used for connection pooling. :rtype: str """ return hashlib.md5(':'.join([self.__class__.__name__, self._uri]).encode('utf-8')).hexdigest() def query(self, sql, parameters=None): """A generator to issue a query on the server, mogrifying the parameters against the sql statement. 
Results are returned as a :py:class:`queries.Results` object which can act as an iterator and has multiple ways to access the result data. :param str sql: The SQL statement :param dict parameters: A dictionary of query parameters :rtype: queries.Results :raises: queries.DataError :raises: queries.DatabaseError :raises: queries.IntegrityError :raises: queries.InternalError :raises: queries.InterfaceError :raises: queries.NotSupportedError :raises: queries.OperationalError :raises: queries.ProgrammingError """ try: self._cursor.execute(sql, parameters) except psycopg2.Error as err: self._incr_exceptions() raise err finally: self._incr_executions() return results.Results(self._cursor) def set_encoding(self, value=DEFAULT_ENCODING): """Set the client encoding for the session if the value specified is different than the current client encoding. :param str value: The encoding value to use """ if self._conn.encoding != value: self._conn.set_client_encoding(value) def __del__(self): """When deleting the context, ensure the instance is removed from caches, etc. """ self._cleanup() def __enter__(self): """For use as a context manager, return a handle to this object instance. :rtype: Session """ return self def __exit__(self, exc_type, exc_val, exc_tb): """When leaving the context, ensure the instance is removed from caches, etc. 
""" self._cleanup() def _autocommit(self, autocommit): """Set the isolation level automatically to commit or not after every query :param autocommit: Boolean (Default - True) """ self._conn.autocommit = autocommit def _cleanup(self): """Remove the connection from the stack, closing out the cursor""" if self._cursor: LOGGER.debug('Closing the cursor on %s', self.pid) self._cursor.close() self._cursor = None if self._conn: LOGGER.debug('Freeing %s in the pool', self.pid) try: pool.PoolManager.instance().free(self.pid, self._conn) except pool.ConnectionNotFoundError: pass self._conn = None def _connect(self): """Connect to PostgreSQL, either by reusing a connection from the pool if possible, or by creating the new connection. :rtype: psycopg2.extensions.connection :raises: pool.NoIdleConnectionsError """ # Attempt to get a cached connection from the connection pool try: connection = self._pool_manager.get(self.pid, self) LOGGER.debug("Re-using connection for %s", self.pid) except pool.NoIdleConnectionsError: if self._pool_manager.is_full(self.pid): raise # Create a new PostgreSQL connection kwargs = utils.uri_to_kwargs(self._uri) LOGGER.debug("Creating a new connection for %s", self.pid) connection = self._psycopg2_connect(kwargs) self._pool_manager.add(self.pid, connection) self._pool_manager.lock(self.pid, connection, self) # Added in because psycopg2ct connects and leaves the connection in # a weird state: consts.STATUS_DATESTYLE, returning from # Connection._setup without setting the state as const.STATUS_OK if utils.PYPY: connection.reset() # Register the custom data types self._register_unicode(connection) self._cursor.close() self._cursor = None if self._conn: pool.free_connection(self._uri) self._conn = None def _connect(self): :param connection: The connection to create a cursor on :type connection: psycopg2.extensions.connection :param str name: A cursor name for a server side cursor :rtype: psycopg2.extensions.cursor """ # Attempt to get a cached connection 
from the connection pool if self._use_pool: connection = pool.get_connection(self._uri) if connection: self._from_pool = True return connection # Create a new PostgreSQL connection LOGGER.debug('Connection KWARGS: %r', utils.uri_to_kwargs(self._uri)) connection = psycopg2.connect(**utils.uri_to_kwargs(self._uri)) # Add it to the pool, if pooling is enabled if self._use_pool: pool.add_connection(self._uri, connection) # Added in because psycopg2ct connects and leaves the connection in # a weird state: consts.STATUS_DATESTYLE, returning from use in async session adapters. :param dict kwargs: Keyword connection args :rtype: psycopg2.extensions.connection """ return psycopg2.connect(**kwargs) @staticmethod def _register_unicode(connection): """Register the cursor to be able to receive Unicode string. :type connection: psycopg2.extensions.connection :param connection: Where to register things """ psycopg2.extensions.register_type(psycopg2.extensions.UNICODE, """ return self._conn.cursor(cursor_factory=cursor_factory) @staticmethod def _register_json(connection): """Register the JSON extension from the psycopg2.extras module def _status(self): """Return the current connection status as an integer value. The status should match one of the following constants: - queries.Session.INTRANS: Connection established, in transaction - queries.Session.PREPARED: Prepared for second phase of transaction - queries.Session.READY: Connected, no active transaction :rtype: int """ if self._conn.status == psycopg2.extensions.STATUS_BEGIN: return self.READY return self._conn.status <MSG> Switch to using _id for the connection pooling - Will allow for mix of sync and async connections in the pool - Add async flag to class for child classes to become async <DFF> @@ -14,7 +14,7 @@ either object just as you would in a program using psycopg2 directly. 
""" -import contextlib +import hashlib import logging import psycopg2 @@ -32,7 +32,6 @@ from queries import PYPY DEFAULT_ENCODING = 'UTF8' - class Session(object): """The Session class allows for a unified (and simplified) view of interfacing with a PostgreSQL database server. The Session object can @@ -52,6 +51,8 @@ class Session(object): :param bool use_pool: Use the connection pool """ + _ASYNC = False + _from_pool = False _tpc_id = None @@ -128,7 +129,7 @@ class Session(object): raise AssertionError('Connection not open') self._conn.close() if self._use_pool: - pool.remove_connection(self._uri) + pool.remove_connection(self._id) self._conn = None self._cursor = None @@ -308,7 +309,7 @@ class Session(object): self._cursor.close() self._cursor = None if self._conn: - pool.free_connection(self._uri) + pool.free_connection(self._id) self._conn = None def _connect(self): @@ -320,18 +321,19 @@ class Session(object): """ # Attempt to get a cached connection from the connection pool if self._use_pool: - connection = pool.get_connection(self._uri) + connection = pool.get_connection(self._id) if connection: self._from_pool = True return connection # Create a new PostgreSQL connection - LOGGER.debug('Connection KWARGS: %r', utils.uri_to_kwargs(self._uri)) - connection = psycopg2.connect(**utils.uri_to_kwargs(self._uri)) + kwargs = utils.uri_to_kwargs(self._uri) + kwargs['async'] = self._ASYNC + connection = psycopg2.connect(**kwargs) # Add it to the pool, if pooling is enabled if self._use_pool: - pool.add_connection(self._uri, connection) + pool.add_connection(self._id, connection) # Added in because psycopg2ct connects and leaves the connection in # a weird state: consts.STATUS_DATESTYLE, returning from @@ -355,6 +357,15 @@ class Session(object): """ return self._conn.cursor(cursor_factory=cursor_factory) + @property + def _id(self): + """Return an ID to be used with the connection pool + + :rtype: str + + """ + return hashlib.md5(self._uri).digest() + @staticmethod def 
_register_json(connection): """Register the JSON extension from the psycopg2.extras module
19
Switch to using _id for the connection pooling
8
.py
py
bsd-3-clause
gmr/queries
1284
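The `_id` property added in the diff above keys the connection pool on an MD5 digest of the URI rather than on the URI itself, so credentials embedded in the URI are not carried around as dictionary keys. Note that `hashlib.md5(self._uri)` only works on Python 2; Python 3 requires bytes, which is why later versions of the library encode the string first. A small sketch of that keying scheme, using a hypothetical `pool_id` helper (the library computes this inside `Session`):

```python
import hashlib


def pool_id(class_name, uri):
    """Hypothetical helper mirroring how the library derives a pool key:
    hash the URI instead of using it directly, so credentials embedded in
    the URI never become dictionary keys."""
    # Python 3 hashes bytes, so the joined string must be encoded first
    return hashlib.md5(
        ':'.join([class_name, uri]).encode('utf-8')).hexdigest()


pid = pool_id('Session', 'postgresql://postgres@localhost:5432/postgres')
print(len(pid))  # 32 -- a stable hex digest, identical for identical URIs
```

Two sessions built from the same URI hash to the same key and therefore share a pool, while any change to host, port, database, or credentials yields a different key and a separate pool.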
<NME> pool.py <BEF> """ Connection Pooling """ import datetime import logging import os import threading import time import weakref import psycopg2 LOGGER = logging.getLogger(__name__) DEFAULT_IDLE_TTL = 60 DEFAULT_MAX_SIZE = int(os.environ.get('QUERIES_MAX_POOL_SIZE', 1)) class Connection(object): """Contains the handle to the connection, the current state of the connection and methods for manipulating the state of the connection. """ _lock = threading.Lock() def __init__(self, handle): self.handle = handle self.used_by = None self.executions = 0 self.exceptions = 0 def close(self): """Close the connection :raises: ConnectionBusyError """ LOGGER.debug('Connection %s closing', self.id) if self.busy: raise ConnectionBusyError(self) with self._lock: if not self.handle.closed: try: self.handle.close() except psycopg2.InterfaceError as error: LOGGER.error('Error closing socket: %s', error) @property def closed(self): """Return if the psycopg2 connection is closed. :rtype: bool """ return self.handle.closed != 0 @property def busy(self): """Return if the connection is currently executing a query or is locked by a session that still exists. :rtype: bool """ if self.handle.isexecuting(): return True elif self.used_by is None: return False return not self.used_by() is None @property def executing(self): """Return if the connection is currently executing a query :rtype: bool """ return self.handle.isexecuting() def free(self): """Remove the lock on the connection if the connection is not active :raises: ConnectionBusyError """ LOGGER.debug('Connection %s freeing', self.id) if self.handle.isexecuting(): raise ConnectionBusyError(self) with self._lock: self.used_by = None LOGGER.debug('Connection %s freed', self.id) @property def id(self): """Return id of the psycopg2 connection object :rtype: int """ return id(self.handle) def lock(self, session): """Lock the connection, ensuring that it is not busy and storing a weakref for the session. 
:param queries.Session session: The session to lock the connection with :raises: ConnectionBusyError """ if self.busy: raise ConnectionBusyError(self) with self._lock: self.used_by = weakref.ref(session) LOGGER.debug('Connection %s locked', self.id) @property def locked(self): """Return if the connection is currently exclusively locked :rtype: bool """ return self.used_by is not None class Pool(object): """A connection pool for gaining access to and managing connections""" _lock = threading.Lock() idle_start = None idle_ttl = DEFAULT_IDLE_TTL max_size = DEFAULT_MAX_SIZE def __init__(self, pool_id, idle_ttl=DEFAULT_IDLE_TTL, max_size=DEFAULT_MAX_SIZE, time_method=None): self.connections = {} self._id = pool_id self.idle_ttl = idle_ttl self.max_size = max_size self.time_method = time_method or time.time def __contains__(self, connection): """Return True if the pool contains the connection""" return id(connection) in self.connections def __len__(self): """Return the number of connections in the pool""" return len(self.connections) def add(self, connection): """Add a new connection to the pool :param connection: The connection to add to the pool :type connection: psycopg2.extensions.connection :raises: PoolFullError """ if id(connection) in self.connections: raise ValueError('Connection already exists in pool') if len(self.connections) == self.max_size: LOGGER.warning('Race condition found when adding new connection') try: connection.close() except (psycopg2.Error, psycopg2.Warning) as error: LOGGER.error('Error closing the conn that cant be used: %s', error) raise PoolFullError(self) with self._lock: self.connections[id(connection)] = Connection(connection) LOGGER.debug('Pool %s added connection %s', self.id, id(connection)) @property def busy_connections(self): """Return a list of active/busy connections :rtype: list """ return [c for c in self.connections.values() if c.busy and not c.closed] def clean(self): """Clean the pool by removing any closed connections and 
1286
<NME> pool.py <BEF> """ Connection Pooling """ import datetime import logging import os import threading import time import weakref import psycopg2 LOGGER = logging.getLogger(__name__) DEFAULT_IDLE_TTL = 60 DEFAULT_MAX_SIZE = int(os.environ.get('QUERIES_MAX_POOL_SIZE', 1)) class Connection(object): """Contains the handle to the connection, the current state of the connection and methods for manipulating the state of the connection. """ _lock = threading.Lock() def __init__(self, handle): self.handle = handle self.used_by = None self.executions = 0 self.exceptions = 0 def close(self): """Close the connection :raises: ConnectionBusyError """ LOGGER.debug('Connection %s closing', self.id) if self.busy: raise ConnectionBusyError(self) with self._lock: if not self.handle.closed: try: self.handle.close() except psycopg2.InterfaceError as error: LOGGER.error('Error closing socket: %s', error) @property def closed(self): """Return if the psycopg2 connection is closed. :rtype: bool """ return self.handle.closed != 0 @property def busy(self): """Return if the connection is currently executing a query or is locked by a session that still exists. :rtype: bool """ if self.handle.isexecuting(): return True elif self.used_by is None: return False return not self.used_by() is None @property def executing(self): """Return if the connection is currently executing a query :rtype: bool """ return self.handle.isexecuting() def free(self): """Remove the lock on the connection if the connection is not active :raises: ConnectionBusyError """ LOGGER.debug('Connection %s freeing', self.id) if self.handle.isexecuting(): raise ConnectionBusyError(self) with self._lock: self.used_by = None LOGGER.debug('Connection %s freed', self.id) @property def id(self): """Return id of the psycopg2 connection object :rtype: int """ return id(self.handle) def lock(self, session): """Lock the connection, ensuring that it is not busy and storing a weakref for the session. 
:param queries.Session session: The session to lock the connection with :raises: ConnectionBusyError """ if self.busy: raise ConnectionBusyError(self) with self._lock: self.used_by = weakref.ref(session) LOGGER.debug('Connection %s locked', self.id) @property def locked(self): """Return if the connection is currently exclusively locked :rtype: bool """ return self.used_by is not None class Pool(object): """A connection pool for gaining access to and managing connections""" _lock = threading.Lock() idle_start = None idle_ttl = DEFAULT_IDLE_TTL max_size = DEFAULT_MAX_SIZE def __init__(self, pool_id, idle_ttl=DEFAULT_IDLE_TTL, max_size=DEFAULT_MAX_SIZE, time_method=None): self.connections = {} self._id = pool_id self.idle_ttl = idle_ttl self.max_size = max_size self.time_method = time_method or time.time def __contains__(self, connection): """Return True if the pool contains the connection""" return id(connection) in self.connections def __len__(self): """Return the number of connections in the pool""" return len(self.connections) def add(self, connection): """Add a new connection to the pool :param connection: The connection to add to the pool :type connection: psycopg2.extensions.connection :raises: PoolFullError """ if id(connection) in self.connections: raise ValueError('Connection already exists in pool') if len(self.connections) == self.max_size: LOGGER.warning('Race condition found when adding new connection') try: connection.close() except (psycopg2.Error, psycopg2.Warning) as error: LOGGER.error('Error closing the conn that cant be used: %s', error) raise PoolFullError(self) with self._lock: self.connections[id(connection)] = Connection(connection) LOGGER.debug('Pool %s added connection %s', self.id, id(connection)) @property def busy_connections(self): """Return a list of active/busy connections :rtype: list """ return [c for c in self.connections.values() if c.busy and not c.closed] def clean(self): """Clean the pool by removing any closed connections and 
if the pool's idle has exceeded its idle TTL, remove all connections. """ LOGGER.debug('Cleaning the pool') for connection in [self.connections[k] for k in self.connections if self.connections[k].closed]: LOGGER.debug('Removing %s', connection.id) self.remove(connection.handle) if self.idle_duration > self.idle_ttl: self.close() LOGGER.debug('Pool %s cleaned', self.id) def close(self): """Close the pool by closing and removing all of the connections""" for cid in list(self.connections.keys()): self.remove(self.connections[cid].handle) LOGGER.debug('Pool %s closed', self.id) @property def closed_connections(self): """Return a list of closed connections :rtype: list """ return [c for c in self.connections.values() if c.closed] def connection_handle(self, connection): """Return a connection object for the given psycopg2 connection :param connection: The connection to return a parent for :type connection: psycopg2.extensions.connection :rtype: Connection """ return self.connections[id(connection)] @property def executing_connections(self): """Return a list of connections actively executing queries :rtype: list """ return [c for c in self.connections.values() if c.executing] def free(self, connection): """Free the connection from use by the session that was using it. 
:param connection: The connection to free :type connection: psycopg2.extensions.connection :raises: ConnectionNotFoundError """ LOGGER.debug('Pool %s freeing connection %s', self.id, id(connection)) try: self.connection_handle(connection).free() except KeyError: raise ConnectionNotFoundError(self.id, id(connection)) if self.idle_connections == list(self.connections.values()): with self._lock: self.idle_start = self.time_method() LOGGER.debug('Pool %s freed connection %s', self.id, id(connection)) def get(self, session): """Return an idle connection and assign the session to the connection :param queries.Session session: The session to assign :rtype: psycopg2.extensions.connection :raises: NoIdleConnectionsError """ idle = self.idle_connections if idle: connection = idle.pop(0) connection.lock(session) if self.idle_start: with self._lock: self.idle_start = None return connection.handle raise NoIdleConnectionsError(self.id) @property def id(self): """Return the ID for this pool :rtype: str """ return self._id @property def idle_connections(self): """Return a list of idle connections :rtype: list """ return [c for c in self.connections.values() if not c.busy and not c.closed] @property def idle_duration(self): """Return the number of seconds that the pool has had no active connections. :rtype: float """ if self.idle_start is None: return 0 return self.time_method() - self.idle_start @property def is_full(self): """Return True if there are no more open slots for connections. 
:rtype: bool """ return len(self.connections) >= self.max_size def lock(self, connection, session): """Explicitly lock the specified connection :type connection: psycopg2.extensions.connection :param connection: The connection to lock :param queries.Session session: The session to hold the lock """ cid = id(connection) try: self.connection_handle(connection).lock(session) except KeyError: raise ConnectionNotFoundError(self.id, cid) else: if self.idle_start: with self._lock: self.idle_start = None LOGGER.debug('Pool %s locked connection %s', self.id, cid) @property def locked_connections(self): """Return a list of all locked connections :rtype: list """ return [c for c in self.connections.values() if c.locked] def remove(self, connection): """Remove the connection from the pool :param connection: The connection to remove :type connection: psycopg2.extensions.connection :raises: ConnectionNotFoundError :raises: ConnectionBusyError """ cid = id(connection) if cid not in self.connections: raise ConnectionNotFoundError(self.id, cid) self.connection_handle(connection).close() with self._lock: del self.connections[cid] LOGGER.debug('Pool %s removed connection %s', self.id, cid) def report(self): """Return a report about the pool state and configuration. :rtype: dict """ return { 'connections': { 'busy': len(self.busy_connections), 'closed': len(self.closed_connections), 'executing': len(self.executing_connections), 'idle': len(self.idle_connections), 'locked': len(self.busy_connections) }, 'exceptions': sum([c.exceptions for c in self.connections.values()]), 'executions': sum([c.executions for c in self.connections.values()]), 'full': self.is_full, 'idle': { 'duration': self.idle_duration, 'ttl': self.idle_ttl }, 'max_size': self.max_size } def shutdown(self): """Forcefully shutdown the entire pool, closing all non-executing connections. 
:raises: ConnectionBusyError """ with self._lock: for cid in list(self.connections.keys()): if self.connections[cid].executing: raise ConnectionBusyError(cid) if self.connections[cid].locked: self.connections[cid].free() self.connections[cid].close() del self.connections[cid] def set_idle_ttl(self, ttl): """Set the idle ttl :param int ttl: The TTL when idle """ with self._lock: self.idle_ttl = ttl def set_max_size(self, size): """Set the maximum number of connections :param int size: The maximum number of connections """ with self._lock: self.max_size = size class PoolManager(object): """The connection pool object implements behavior around connections and their use in queries.Session objects. We carry a pool id instead of the connection URI so that we will not be carrying the URI in memory, creating a possible security issue. """ _lock = threading.Lock() _pools = {} def __contains__(self, pid): """Returns True if the pool exists :param str pid: The pool id to check for :rtype: bool """ return pid in self.__class__._pools @classmethod def instance(cls): """Only allow a single PoolManager instance to exist, returning the handle for it. :rtype: PoolManager """ if not hasattr(cls, '_instance'): with cls._lock: cls._instance = cls() return cls._instance @classmethod def add(cls, pid, connection): """Add a new connection and session to a pool. :param str pid: The pool id :type connection: psycopg2.extensions.connection :param connection: The connection to add to the pool """ with cls._lock: cls._ensure_pool_exists(pid) cls._pools[pid].add(connection) @classmethod def clean(cls, pid): """Clean the specified pool, removing any closed connections or stale locks. 
:param str pid: The pool id to clean """ with cls._lock: try: cls._ensure_pool_exists(pid) except KeyError: LOGGER.debug('Pool clean invoked against missing pool %s', pid) return cls._pools[pid].clean() cls._maybe_remove_pool(pid) @classmethod def create(cls, pid, idle_ttl=DEFAULT_IDLE_TTL, max_size=DEFAULT_MAX_SIZE, time_method=None): """Create a new pool, with the ability to pass in values to override the default idle TTL and the default maximum size. A pool's idle TTL defines the amount of time that a pool can be open without any sessions before it is removed. A pool's max size defines the maximum number of connections that can be added to the pool to prevent unbounded open connections. :param str pid: The pool ID :param int idle_ttl: Time in seconds for the idle TTL :param int max_size: The maximum pool size :param callable time_method: Override the use of :py:meth:`time.time` method for time values. :raises: KeyError """ if pid in cls._pools: raise KeyError('Pool %s already exists' % pid) with cls._lock: LOGGER.debug("Creating Pool: %s (%i/%i)", pid, idle_ttl, max_size) cls._pools[pid] = Pool(pid, idle_ttl, max_size, time_method) @classmethod def free(cls, pid, connection): """Free a connection that was locked by a session :param str pid: The pool ID :param connection: The connection to remove :type connection: psycopg2.extensions.connection """ with cls._lock: LOGGER.debug('Freeing %s from pool %s', id(connection), pid) cls._ensure_pool_exists(pid) cls._pools[pid].free(connection) @classmethod def get(cls, pid, session): """Get an idle, unused connection from the pool. Once a connection has been retrieved, it will be marked as in-use until it is freed. 
:param str pid: The pool ID :param queries.Session session: The session to assign to the connection :rtype: psycopg2.extensions.connection """ with cls._lock: cls._ensure_pool_exists(pid) return cls._pools[pid].get(session) @classmethod def get_connection(cls, pid, connection): """Return the specified :class:`~queries.pool.Connection` from the pool. :param str pid: The pool ID :param connection: The connection to return for :type connection: psycopg2.extensions.connection :rtype: queries.pool.Connection """ with cls._lock: return cls._pools[pid].connection_handle(connection) @classmethod def has_connection(cls, pid, connection): """Check to see if a pool has the specified connection :param str pid: The pool ID :param connection: The connection to check for :type connection: psycopg2.extensions.connection :rtype: bool """ with cls._lock: cls._ensure_pool_exists(pid) return connection in cls._pools[pid] @classmethod def has_idle_connection(cls, pid): """Check to see if a pool has an idle connection :param str pid: The pool ID :rtype: bool """ with cls._lock: cls._ensure_pool_exists(pid) return bool(cls._pools[pid].idle_connections) @classmethod def is_full(cls, pid): """Return a bool indicating if the specified pool is full :param str pid: The pool id :rtype: bool """ with cls._lock: cls._ensure_pool_exists(pid) return cls._pools[pid].is_full @classmethod def lock(cls, pid, connection, session): """Explicitly lock the specified connection in the pool :param str pid: The pool id :type connection: psycopg2.extensions.connection :param connection: The connection to add to the pool :param queries.Session session: The session to hold the lock """ with cls._lock: cls._ensure_pool_exists(pid) cls._pools[pid].lock(connection, session) @classmethod def remove(cls, pid): """Remove a pool, closing all connections :param str pid: The pool ID """ with cls._lock: cls._ensure_pool_exists(pid) cls._pools[pid].close() del cls._pools[pid] @classmethod def remove_connection(cls, pid, 
connection):
        """Remove a connection from the pool, closing it if it is open.

        :param str pid: The pool ID
        :param connection: The connection to remove
        :type connection: psycopg2.extensions.connection
        :raises: ConnectionNotFoundError

        """
        cls._ensure_pool_exists(pid)
        cls._pools[pid].remove(connection)

    @classmethod
    def set_idle_ttl(cls, pid, ttl):
        """Set the idle TTL for a pool, after which it will be destroyed.

        :param str pid: The pool id
        :param int ttl: The TTL for an idle pool

        """
        with cls._lock:
            cls._ensure_pool_exists(pid)
            cls._pools[pid].set_idle_ttl(ttl)

    @classmethod
    def set_max_size(cls, pid, size):
        """Set the maximum number of connections for the specified pool

        :param str pid: The pool to set the size for
        :param int size: The maximum number of connections

        """
        with cls._lock:
            cls._ensure_pool_exists(pid)
            cls._pools[pid].set_max_size(size)

    @classmethod
    def shutdown(cls):
        """Close all connections in all pools"""
        for pid in list(cls._pools.keys()):
            cls._pools[pid].shutdown()
        LOGGER.info('Shutdown complete, all pooled connections closed')

    @classmethod
    def size(cls, pid):
        """Return the number of connections in the pool

        :param str pid: The pool id
        :rtype: int

        """
        with cls._lock:
            cls._ensure_pool_exists(pid)
            return len(cls._pools[pid])

    @classmethod
    def report(cls):
        """Return the state of all of the registered pools.

        :rtype: dict

        """
        return {
            'timestamp': datetime.datetime.utcnow().isoformat(),
            'process': os.getpid(),
            'pools': dict([(i, p.report()) for i, p in cls._pools.items()])
        }

    @classmethod
    def _ensure_pool_exists(cls, pid):
        """Raise an exception if the pool has yet to be created or has been
        removed.
:param str pid: The pool ID to check for :raises: KeyError """ if pid not in cls._pools: raise KeyError('Pool %s has not been created' % pid) @classmethod def _maybe_remove_pool(cls, pid): """If the pool has no open connections, remove it :param str pid: The pool id to clean """ if not len(cls._pools[pid]): del cls._pools[pid] class QueriesException(Exception): """Base Exception for all other Queries exceptions""" pass class ConnectionException(QueriesException): def __init__(self, cid): self.cid = cid class PoolException(QueriesException): def __init__(self, pid): self.pid = pid class PoolConnectionException(PoolException): def __init__(self, pid, cid): self.pid = pid self.cid = cid class ActivePoolError(PoolException): """Raised when removing a pool that has active connections""" def __str__(self): return 'Pool %s has at least one active connection' % self.pid class ConnectionBusyError(ConnectionException): """Raised when trying to lock a connection that is already busy""" def __str__(self): return 'Connection %s is busy' % self.cid class ConnectionNotFoundError(PoolConnectionException): """Raised if a specific connection is not found in the pool""" def __str__(self): return 'Connection %s not found in pool %s' % (self.cid, self.pid) class NoIdleConnectionsError(PoolException): """Raised if a pool does not have any idle, open connections""" def __str__(self): return 'Pool %s has no idle connections' % self.pid class PoolFullError(PoolException): """Raised when adding a connection to a pool that has hit max-size""" def __str__(self): return 'Pool %s is at its maximum capacity' % self.pid <MSG> Add check to ensure its not busy and closed when closing conn <DFF> @@ -37,7 +37,7 @@ class Connection(object): """ LOGGER.debug('Connection %s closing', self.id) - if self.busy: + if self.busy and not self.closed: raise ConnectionBusyError(self) with self._lock: if not self.handle.closed:
1
Add check to ensure its not busy and closed when closing conn
1
.py
py
bsd-3-clause
gmr/queries
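The `<DFF>` in the record above relaxes the guard in `Connection.close()`: a connection whose handle is already closed is no longer rejected as busy, so it can be cleaned out of the pool. The following is a minimal, self-contained sketch of that guard using stand-in classes (not the actual `queries.pool` implementation) to show both behaviors:

```python
class ConnectionBusyError(Exception):
    """Raised when close() is refused because the connection is in use."""


class Connection:
    """Stand-in mimicking the patched close() guard (hypothetical class)."""

    def __init__(self, busy, closed):
        self._busy = busy
        self._closed = closed

    @property
    def busy(self):
        return self._busy

    @property
    def closed(self):
        return self._closed

    def close(self):
        # After the patch: only refuse when the connection is busy AND its
        # handle is still open; an already-closed handle can always be
        # removed from the pool.
        if self.busy and not self.closed:
            raise ConnectionBusyError('connection is busy')
        self._closed = True


# Busy with an open handle: close() still refuses.
try:
    Connection(busy=True, closed=False).close()
except ConnectionBusyError as error:
    print(error)  # connection is busy

# Busy but the handle is already closed: close() now proceeds.
conn = Connection(busy=True, closed=True)
conn.close()
print(conn.closed)  # True
```

Before the patch the first condition was simply `if self.busy:`, which made a dead connection impossible to close through the pool.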
1287
<NME> results_tests.py
<BEF> """
Tests for functionality in the results module

"""
import logging
import unittest

import mock
import psycopg2

from queries import results

LOGGER = logging.getLogger(__name__)


class ResultsTestCase(unittest.TestCase):

    def setUp(self):
        self.cursor = mock.MagicMock()
        self.obj = results.Results(self.cursor)

    def test_cursor_is_assigned(self):
        self.assertEqual(self.obj.cursor, self.cursor)

    def test_getitem_invokes_scroll(self):
        self.cursor.scroll = mock.Mock()
        self.cursor.fetchone = mock.Mock()
        row = self.obj[1]
        LOGGER.debug('Row: %r', row)
        self.cursor.scroll.assert_called_once_with(1, 'absolute')

    def test_getitem_raises_index_error(self):
        self.cursor.scroll = mock.Mock(side_effect=psycopg2.ProgrammingError)
        self.cursor.fetchone = mock.Mock()

        def get_row():
            return self.obj[1]

        self.assertRaises(IndexError, get_row)

    def test_getitem_invokes_fetchone(self):
        _row = self.obj[1]
        self.cursor.fetchone.assert_called_once_with()

    def test_iter_rewinds(self):
        self.cursor.__iter__ = mock.Mock(return_value=iter([1, 2, 3]))
        with mock.patch.object(self.obj, '_rewind') as rewind:
            [x for x in self.obj]
            rewind.assert_called_once_with()

    def test_iter_iters(self):
        self.cursor.__iter__ = mock.Mock(return_value=iter([1, 2, 3]))
        with mock.patch.object(self.obj, '_rewind'):
            self.assertEqual([x for x in self.obj], [1, 2, 3])

    def test_rowcount_value(self):
        self.cursor.rowcount = 128
        self.assertEqual(len(self.obj), 128)

    def test_nonzero_false(self):
        self.cursor.rowcount = 0
        self.assertFalse(bool(self.obj))

    def test_nonzero_true(self):
        self.cursor.rowcount = 128
        self.assertTrue(bool(self.obj))

    def test_repr_str(self):
        self.cursor.rowcount = 128
        self.assertEqual(str(self.obj), '<queries.Results rows=128>')

    def test_as_dict_no_rows(self):
        self.cursor.rowcount = 0
        self.assertDictEqual(self.obj.as_dict(), {})

    def 
test_as_dict_rewinds(self):
        expectation = {'foo': 'bar', 'baz': 'qux'}
        self.cursor.rowcount = 1
        self.cursor.fetchone = mock.Mock(return_value=expectation)
        with mock.patch.object(self.obj, '_rewind') as rewind:
            result = self.obj.as_dict()
            LOGGER.debug('Result: %r', result)
        rewind.assert_called_once_with()

    def test_as_dict_value(self):
        expectation = {'foo': 'bar', 'baz': 'qux'}
        self.cursor.rowcount = 1
        self.cursor.fetchone = mock.Mock(return_value=expectation)
        with mock.patch.object(self.obj, '_rewind'):
            self.assertDictEqual(self.obj.as_dict(), expectation)

    def test_as_dict_with_multiple_rows_raises(self):
        self.cursor.rowcount = 2
        with mock.patch.object(self.obj, '_rewind'):
            self.assertRaises(ValueError, self.obj.as_dict)

    def test_free_raises_exception(self):
        self.assertRaises(NotImplementedError, self.obj.free)

    def test_items_invokes_scroll(self):
        self.cursor.scroll = mock.Mock()
        self.cursor.fetchall = mock.Mock()
        self.obj.items()
        self.cursor.scroll.assert_called_once_with(0, 'absolute')

    def test_items_invokes_fetchall(self):
        self.cursor.scroll = mock.Mock()
        self.cursor.fetchall = mock.Mock()
        self.obj.items()
        self.cursor.fetchall.assert_called_once_with()

    def test_rownumber_value(self):
        self.cursor.rownumber = 10
        self.assertEqual(self.obj.rownumber, 10)

    def test_query_value(self):
        self.cursor.query = 'SELECT * FROM foo'
        self.assertEqual(self.obj.query, 'SELECT * FROM foo')

    def test_status_value(self):
        self.cursor.statusmessage = 'Status message'
        self.assertEqual(self.obj.status, 'Status message')

    def test_rewind_invokes_scroll(self):
        self.cursor.scroll = mock.Mock()
        self.obj._rewind()
        self.cursor.scroll.assert_called_once_with(0, 'absolute')

<MSG> Merge pull request #4 from den-t/master

Allow iterating over empty query results
<DFF> @@ -41,6 +41,12 @@ class ResultsTestCase(unittest.TestCase):
         _row = self.obj[1]
         self.cursor.fetchone.assert_called_once_with()
 
+    def 
test_iter_on_empty(self): + self.cursor.rowcount = 0 + with mock.patch.object(self.obj, '_rewind') as rewind: + [x for x in self.obj] + assert not rewind.called, '_rewind should not be called on empty result' + def test_iter_rewinds(self): self.cursor.__iter__ = mock.Mock(return_value=iter([1, 2, 3])) with mock.patch.object(self.obj, '_rewind') as rewind: @@ -99,6 +105,13 @@ class ResultsTestCase(unittest.TestCase): def test_free_raises_exception(self): self.assertRaises(NotImplementedError, self.obj.free) + def test_items_returns_on_empty(self): + self.cursor.rowcount = 0 + self.cursor.scroll = mock.Mock() + self.cursor.fetchall = mock.Mock() + self.obj.items() + assert not self.cursor.scroll.called, 'Cursor.scroll should not be called on empty result' + def test_items_invokes_scroll(self): self.cursor.scroll = mock.Mock() self.cursor.fetchall = mock.Mock()
13
Merge pull request #4 from den-t/master
0
.py
py
bsd-3-clause
gmr/queries
1288
<NME> setup.py
<BEF> import os
import platform

import setuptools

# PYPY vs cpython
if platform.python_implementation() == 'PyPy':
    install_requires = ['psycopg2cffi>=2.7.2,<3']
else:
    install_requires = ['psycopg2']

setup(name='pgsql_wrapper',
      version='1.1.1',
      description="PostgreSQL / psycopg2 caching wrapper class",
      maintainer="Gavin M. Roy",
      maintainer_email="[email protected]",

setuptools.setup(
    name='queries',
    version='2.1.0',
    description='Simplified PostgreSQL client built upon Psycopg2',
    long_description=open('README.rst').read(),
    maintainer='Gavin M. Roy',
    maintainer_email='[email protected]',
    url='https://github.com/gmr/queries',
    install_requires=install_requires,
    extras_require={'tornado': 'tornado<6'},
    license='BSD',
    package_data={'': ['LICENSE', 'README.rst']},
    packages=['queries'],
    classifiers=[
        'Development Status :: 5 - Production/Stable',
        'Intended Audience :: Developers',
        'License :: OSI Approved :: BSD License',
        'Operating System :: OS Independent',
        'Programming Language :: Python :: 2',
        'Programming Language :: Python :: 2.7',
        'Programming Language :: Python :: 3',
        'Programming Language :: Python :: 3.4',
        'Programming Language :: Python :: 3.5',
        'Programming Language :: Python :: 3.6',
        'Programming Language :: Python :: 3.7',
        'Programming Language :: Python :: 3.8',
        'Programming Language :: Python :: Implementation :: CPython',
        'Programming Language :: Python :: Implementation :: PyPy',
        'Topic :: Database',
        'Topic :: Software Development :: Libraries'],
    zip_safe=True)

<MSG> Setup self._connection_hash on __init__ and cleanup docstrings

<DFF> @@ -9,7 +9,7 @@ else:
     install_requires = ['psycopg2']
 
 setup(name='pgsql_wrapper',
-      version='1.1.1',
+      version='1.1.2',
       description="PostgreSQL / psycopg2 caching wrapper class",
       maintainer="Gavin M. Roy",
       maintainer_email="[email protected]",
1
Setup self._connection_hash on __init__ and cleanup docstrings
1
.py
py
bsd-3-clause
gmr/queries
1290
<NME> bootstrap
<BEF> #!/bin/sh
# vim: set ts=2 sts=2 sw=2 et:
test -n "$SHELLDEBUG" && set -x
if test -e /var/run/docker.sock
then
  DOCKER_IP=127.0.0.1
else
  echo "Docker environment not detected."
  exit 1
fi

set -e

if test -z "$COMPOSE_PROJECT_NAME"
then
  CWD=${PWD##*/}
  export COMPOSE_PROJECT_NAME=${CWD/_/}
fi

mkdir -p build

get_exposed_port() {
  docker-compose port $1 $2 | cut -d: -f2
}

docker-compose down -t 0 --volumes --remove-orphans
docker-compose pull
docker-compose up -d --no-recreate

PORT=$(get_exposed_port postgres 5432)

printf "Waiting for postgres "
export PG
until docker-compose exec postgres psql -U postgres -c 'SELECT 1' > /dev/null 2> /dev/null; do
  printf "."
  sleep 1
done
echo " done"

cat > build/test-environment<<EOF
export PGHOST=${DOCKER_IP}
export PGPORT=${PORT}
EOF

<MSG> Use pg_isready

<DFF> @@ -30,7 +30,7 @@ PORT=$(get_exposed_port postgres 5432)
 
 printf "Waiting for postgres "
 export PG
-until docker-compose exec postgres psql -U postgres -c 'SELECT 1' > /dev/null 2> /dev/null; do
+until docker-compose exec postgres pg_isready -q; do
   printf "."
   sleep 1
 done
1
Use pg_isready
1
bootstrap
bsd-3-clause
gmr/queries
1292
<NME> tornado_session.py
<BEF> ADDFILE
<MSG> Initial commit with Tornado support

Required a little bit of refactoring of Session, not too much

<DFF> @@ -0,0 +1,211 @@
+"""
+Tornado Session Adapter
+
+Use Queries asynchronously within the Tornado framework.
+
+"""
+import logging
+
+from psycopg2 import extensions
+from psycopg2 import extras
+from tornado import gen
+from tornado import ioloop
+from tornado import stack_context
+import psycopg2
+
+from queries import session
+from queries import DEFAULT_URI
+
+LOGGER = logging.getLogger(__name__)
+
+
+class TornadoSession(session.Session):
+
+    def __init__(self,
+                 uri=DEFAULT_URI,
+                 cursor_factory=extras.RealDictCursor,
+                 use_pool=True):
+        """Connect to a PostgreSQL server using the module wide connection and
+        set the isolation level.
+
+        :param str uri: PostgreSQL connection URI
+        :param psycopg2.cursor: The cursor type to use
+        :param bool use_pool: Use the connection pool
+
+        """
+        self._callbacks = dict()
+        self._conn, self._cursor = None, None
+        self._connections = dict()
+        self._commands = dict()
+        self._cursor_factory = cursor_factory
+        self._exceptions = dict()
+        self._ioloop = ioloop.IOLoop.instance()
+        self._uri = uri
+        self._use_pool = use_pool
+
+    @gen.coroutine
+    def callproc(self, name, parameters=None):
+
+        # Grab a connection, either new or out of the pool
+        connection, fd, status = self._connect()
+
+        # Add a callback for either connecting or waiting for the query
+        self._callbacks[fd] = yield gen.Callback((self, fd))
+
+        # Add the connection to the IOLoop
+        self._ioloop.add_handler(connection.fileno(), self._on_io_events,
+                                 ioloop.IOLoop.WRITE)
+
+        # Maybe wait for the connection
+        if status == self.SETUP and connection.poll() != extensions.POLL_OK:
+            yield gen.Wait((self, fd))
+            # Setup the callback for the actual query
+            self._callbacks[fd] = yield gen.Callback((self, fd))
+
+        # Get the cursor, execute the query and wait for the result
+        cursor = self._get_cursor(connection)
+        cursor.callproc(name, parameters)
+        yield gen.Wait((self, fd))
+
+        # If there was an exception, cleanup, then raise it
+        if fd in self._exceptions and self._exceptions[fd]:
+            error = self._exceptions[fd]
+            self._exec_cleanup(cursor, fd)
+            raise error
+
+        # Attempt to get any result that's pending for the query
+        try:
+            result = cursor.fetchall()
+        except psycopg2.ProgrammingError:
+            result = None
+
+        # Close the cursor and cleanup the references for this request
+        self._exec_cleanup(cursor, fd)
+
+        # Return the result if there are any
+        raise gen.Return(result)
+
+    @gen.coroutine
+    def query(self, sql, parameters=None):
+
+        # Grab a connection, either new or out of the pool
+        connection, fd, status = self._connect()
+
+        # Add a callback for either connecting or waiting for the query
+        self._callbacks[fd] = yield gen.Callback((self, fd))
+
+        # Add the connection to the IOLoop
+        self._ioloop.add_handler(connection.fileno(), self._on_io_events,
+                                 ioloop.IOLoop.WRITE)
+
+        # Maybe wait for the connection
+        if status == self.SETUP and connection.poll() != extensions.POLL_OK:
+            yield gen.Wait((self, fd))
+            # Setup the callback for the actual query
+            self._callbacks[fd] = yield gen.Callback((self, fd))
+
+        # Get the cursor, execute the query and wait for the result
+        cursor = self._get_cursor(connection)
+        cursor.execute(sql, parameters)
+        yield gen.Wait((self, fd))
+
+        # If there was an exception, cleanup, then raise it
+        if fd in self._exceptions and self._exceptions[fd]:
+            error = self._exceptions[fd]
+            self._exec_cleanup(cursor, fd)
+            raise error
+
+        # Attempt to get any result that's pending for the query
+        try:
+            result = cursor.fetchall()
+        except psycopg2.ProgrammingError:
+            result = None
+
+        # Close the cursor and cleanup the references for this request
+        self._exec_cleanup(cursor, fd)
+
+        # Return the result if there are any
+        raise gen.Return(result)
+
+    def _connect(self):
+        connection = super(TornadoSession, self)._connect()
+        fd, status = connection.fileno(), connection.status
+
+        # Add the connection for use in _poll_connection
+        self._connections[fd] = connection
+
+        return connection, fd, status
+
+    def _exec_cleanup(self, cursor, fd):
+        """Close the cursor, remove any references to the fd in internal state
+        and remove the fd from the ioloop.
+
+        :param psycopg2.cursor cursor: The cursor to close
+        :param int fd: The connection file descriptor
+
+        """
+        cursor.close()
+        if fd in self._exceptions:
+            del self._exceptions[fd]
+        if fd in self._callbacks:
+            del self._callbacks[fd]
+        if fd in self._connections:
+            del self._connections[fd]
+        self._ioloop.remove_handler(fd)
+
+    def _get_cursor(self, connection):
+        """Return a cursor for the given connection.
+
+        :param psycopg2._psycopg.connection connection: The connection to use
+        :rtype: psycopg2.extensions.cursor
+
+        """
+        return connection.cursor(cursor_factory=self._cursor_factory)
+
+    @gen.coroutine
+    def _on_io_events(self, fd=None, events=None):
+        """Invoked by Tornado's IOLoop when there are events for the fd
+
+        :param int fd: The file descriptor for the event
+        :param int events: The events raised
+
+        """
+        if fd not in self._connections:
+            return
+        self._poll_connection(fd)
+
+    @gen.coroutine
+    def _poll_connection(self, fd):
+        """Check with psycopg2 to see what action to take. If the state is
+        POLL_OK, we should have a pending callback for that fd.
+
+        :param int fd: The socket fd for the postgresql connection
+
+        """
+        try:
+            state = self._connections[fd].poll()
+        except (psycopg2.Error, psycopg2.Warning) as error:
+            self._exceptions[fd] = error
+            yield self._callbacks[fd]((self, fd))
+        else:
+            if state == extensions.POLL_OK:
+                yield self._callbacks[fd]((self, fd))
+            elif state == extensions.POLL_WRITE:
+                self._ioloop.update_handler(fd, ioloop.IOLoop.WRITE)
+            elif state == extensions.POLL_READ:
+                self._ioloop.update_handler(fd, ioloop.IOLoop.READ)
+            elif state == extensions.POLL_ERROR:
+                LOGGER.debug('Error')
+                self._ioloop.remove_handler(fd)
+
+    def _psycopg2_connect(self, kwargs):
+        """Return a psycopg2 connection for the specified kwargs. Extend for
+        use in async session adapters.
+
+        :param dict kwargs: Keyword connection args
+        :rtype: psycopg2.connection
+
+        """
+        kwargs['async'] = True
+        with stack_context.NullContext():
+            return psycopg2.connect(**kwargs)
211
Initial commit with Tornado support
0
.py
py
bsd-3-clause
gmr/queries
1294
<NME> setup.py
<BEF> import os
import platform

import setuptools

target = platform.python_implementation()
# PYPY vs cpython
if target == 'PyPy':
    install_requires = ['psycopg2ct']
else:
    install_requires = ['psycopg2']

# Install tornado if generating docs on readthedocs
if os.environ.get('READTHEDOCS', None) == 'True':
    install_requires.append('tornado')

setuptools.setup(
    name='queries',
    version='2.1.0',
    description='Simplified PostgreSQL client built upon Psycopg2',
    long_description=open('README.rst').read(),
    maintainer='Gavin M. Roy',
    maintainer_email='[email protected]',
    url='https://github.com/gmr/queries',
    install_requires=install_requires,
    extras_require={'tornado': 'tornado<6'},
    license='BSD',
    package_data={'': ['LICENSE', 'README.rst']},
    packages=['queries'],
    classifiers=[
        'Development Status :: 5 - Production/Stable',
        'Intended Audience :: Developers',
        'License :: OSI Approved :: BSD License',
        'Operating System :: OS Independent',
        'Programming Language :: Python :: 2',
        'Programming Language :: Python :: 2.7',
        'Programming Language :: Python :: 3',
        'Programming Language :: Python :: 3.4',
        'Programming Language :: Python :: 3.5',
        'Programming Language :: Python :: 3.6',
        'Programming Language :: Python :: 3.7',
        'Programming Language :: Python :: 3.8',
        'Programming Language :: Python :: Implementation :: CPython',
        'Programming Language :: Python :: Implementation :: PyPy',
        'Topic :: Database',
        'Topic :: Software Development :: Libraries'],
    zip_safe=True)

<MSG> Merge pull request #11 from djt5019/set-min-pin

Set a minimum pin on psycopg2

<DFF> @@ -7,7 +7,7 @@ target = platform.python_implementation()
 if target == 'PyPy':
     install_requires = ['psycopg2ct']
 else:
-    install_requires = ['psycopg2']
+    install_requires = ['psycopg2>=2.5.1']
 
 # Install tornado if generating docs on readthedocs
 if os.environ.get('READTHEDOCS', None) == 'True':
1
Merge pull request #11 from djt5019/set-min-pin
1
.py
py
bsd-3-clause
gmr/queries
1296
<NME> session.py
<BEF> """The Session class allows for a unified (and simplified) view of
interfacing with a PostgreSQL database server. Connection details are passed
in as a PostgreSQL URI and connections are pooled by default, allowing for
reuse of connections across modules in the Python runtime without having to
pass around the object handle.

While you can still access the raw `psycopg2` connection and cursor objects
to provide ultimate flexibility in how you use the queries.Session object,
there are convenience methods designed to simplify the interaction with
PostgreSQL. For `psycopg2` functionality outside of what is exposed in
Session, simply use the Session.connection or Session.cursor properties to
gain access to either object just as you would in a program using psycopg2
directly.

Example usage:

.. code:: python

    import queries

    with queries.Session('pgsql://postgres@localhost/postgres') as session:
        for row in session.Query('SELECT * FROM table'):
            print row

"""
import hashlib
import logging

import psycopg2
from psycopg2 import extensions, extras

from queries import pool, results, utils

LOGGER = logging.getLogger(__name__)

DEFAULT_ENCODING = 'UTF8'
DEFAULT_URI = 'postgresql://localhost:5432'


class Session(object):
    """The Session class allows for a unified (and simplified) view of
    interfacing with a PostgreSQL database server. The Session object can
    act as a context manager, providing automated cleanup and simple,
    Pythonic way of interacting with the object.

    :param str uri: PostgreSQL connection URI
    :param psycopg2.extensions.cursor: The cursor type to use
    :param int pool_idle_ttl: How long idle pools keep connections open
    :param int pool_max_size: The maximum size of the pool to use

    """
    _conn = None
    _cursor = None
    _tpc_id = None
    _uri = None

    # Connection status constants
    INTRANS = extensions.STATUS_IN_TRANSACTION
    PREPARED = extensions.STATUS_PREPARED
    READY = extensions.STATUS_READY
    SETUP = extensions.STATUS_SETUP

    # Transaction status constants
    TX_ACTIVE = extensions.TRANSACTION_STATUS_ACTIVE
    TX_IDLE = extensions.TRANSACTION_STATUS_IDLE
    TX_INERROR = extensions.TRANSACTION_STATUS_INERROR
    TX_INTRANS = extensions.TRANSACTION_STATUS_INTRANS
    TX_UNKNOWN = extensions.TRANSACTION_STATUS_UNKNOWN

    def __init__(self, uri=DEFAULT_URI,
                 cursor_factory=extras.RealDictCursor,
                 pool_idle_ttl=pool.DEFAULT_IDLE_TTL,
                 pool_max_size=pool.DEFAULT_MAX_SIZE,
                 autocommit=True):
        """Connect to a PostgreSQL server using the module wide connection and
        set the isolation level.

        :param str uri: PostgreSQL connection URI
        :param psycopg2.extensions.cursor: The cursor type to use
        :param int pool_idle_ttl: How long idle pools keep connections open
        :param int pool_max_size: The maximum size of the pool to use

        """
        self._pool_manager = pool.PoolManager.instance()
        self._uri = uri

        # Ensure the pool exists in the pool manager
        if self.pid not in self._pool_manager:
            self._pool_manager.create(self.pid, pool_idle_ttl, pool_max_size)

        self._conn = self._connect()
        self._cursor_factory = cursor_factory
        self._cursor = self._get_cursor(self._conn)
        self._autocommit(autocommit)

    @property
    def backend_pid(self):
        """Return the backend process ID of the PostgreSQL server that this
        session is connected to.

        :rtype: int

        """
        return self._conn.get_backend_pid()

    def callproc(self, name, args=None):
        """Call a stored procedure on the server, returning the results in a
        :py:class:`queries.Results` instance.

        :param str name: The procedure name
        :param list args: The list of arguments to pass in
        :rtype: queries.Results
        :raises: queries.DataError
        :raises: queries.DatabaseError
        :raises: queries.IntegrityError
        :raises: queries.InternalError
        :raises: queries.InterfaceError
        :raises: queries.NotSupportedError
        :raises: queries.OperationalError
        :raises: queries.ProgrammingError

        """
        try:
            self._cursor.callproc(name, args)
        except psycopg2.Error as err:
            self._incr_exceptions()
            raise err
        finally:
            self._incr_executions()
        return results.Results(self._cursor)

    def close(self):
        """Explicitly close the connection and remove it from the connection
        pool if pooling is enabled. If the connection is already closed

        :raises: psycopg2.InterfaceError

        """
        if not self._conn:
            raise psycopg2.InterfaceError('Connection not open')

        LOGGER.info('Closing connection %r in %s', self._conn, self.pid)
        self._pool_manager.free(self.pid, self._conn)
        self._pool_manager.remove_connection(self.pid, self._conn)

        # Un-assign the connection and cursor
        self._conn, self._cursor = None, None

    @property
    def connection(self):
        """Return the current open connection to PostgreSQL.

        :rtype: psycopg2.extensions.connection

        """
        return self._conn

    @property
    def cursor(self):
        """Return the current, active cursor for the open connection.

        :rtype: psycopg2.extensions.cursor

        """
        return self._cursor

    @property
    def encoding(self):
        """Return the current client encoding value.

        :rtype: str

        """
        return self._conn.encoding

    @property
    def notices(self):
        """Return a list of up to the last 50 server notices sent to the
        client.

        :rtype: list

        """
        return self._conn.notices

    @property
    def pid(self):
        """Return the pool ID used for connection pooling.

        :rtype: str

        """
        return hashlib.md5(':'.join([self.__class__.__name__,
                                     self._uri]).encode('utf-8')).hexdigest()

    def query(self, sql, parameters=None):
        """A generator to issue a query on the server, mogrifying the
        parameters against the sql statement.

        Results are returned as a :py:class:`queries.Results` object which can
        act as an iterator and has multiple ways to access the result data.

        :param str sql: The SQL statement
        :param dict parameters: A dictionary of query parameters
        :rtype: queries.Results
        :raises: queries.DataError
        :raises: queries.DatabaseError
        :raises: queries.IntegrityError
        :raises: queries.InternalError
        :raises: queries.InterfaceError
        :raises: queries.NotSupportedError
        :raises: queries.OperationalError
        :raises: queries.ProgrammingError

        """
        try:
            self._cursor.execute(sql, parameters)
        except psycopg2.Error as err:
            self._incr_exceptions()
            raise err
        finally:
            self._incr_executions()
        return results.Results(self._cursor)

    def set_encoding(self, value=DEFAULT_ENCODING):
        """Set the client encoding for the session if the value specified is
        different than the current client encoding.

        :param str value: The encoding value to use

        """
        if self._conn.encoding != value:
            self._conn.set_client_encoding(value)

    def __del__(self):
        """When deleting the context, ensure the instance is removed from
        caches, etc.

        """
        self._cleanup()

    def __enter__(self):
        """For use as a context manager, return a handle to this object
        instance.

        :rtype: Session

        """
        return self

    def __exit__(self, exc_type, exc_val, exc_tb):
        """When leaving the context, ensure the instance is removed from
        caches, etc.

        """
        self._cleanup()

        if self._conn:
            try:
                self._pool_manager.free(self.pid, self._conn)
            except pool.ConnectionNotFoundError:
                pass
            self._conn = None
        self._pool_manager.clean(self.pid)

    def _connect(self):
        """Connect to PostgreSQL, either by reusing a connection from the pool
        if possible, or by creating the new connection.
self._cursor.close()
self._cursor = None

if self._conn:
LOGGER.debug('Freeing %s in the pool', self.pid)

# Attempt to get a cached connection from the connection pool
try:
connection = self._pool_manager.get(self.pid, self)
except pool.NoIdleConnectionsError:
if self._pool_manager.is_full(self.pid):
raise

# Create a new PostgreSQL connection
kwargs = utils.uri_to_kwargs(self._uri)
connection = self._psycopg2_connect(kwargs)
self._pool_manager.add(self.pid, connection)

self._pool_manager.lock(self.pid, connection, self)

# Added in because psycopg2ct connects and leaves the connection in
# a weird state: consts.STATUS_DATESTYLE, returning from
# Connection._setup without setting the state as const.STATUS_OK
if utils.PYPY:
connection.reset()

# Register the custom data types
self._register_unicode(connection)
self._register_uuid(connection)

return connection

def _get_cursor(self, connection, name=None):
"""Return a cursor for the given cursor_factory. Specify a name to
use server-side cursors.
:param connection: The connection to create a cursor on :type connection: psycopg2.extensions.connection :param str name: A cursor name for a server side cursor :rtype: psycopg2.extensions.cursor """ cursor = connection.cursor(name=name, cursor_factory=self._cursor_factory) if name is not None: cursor.scrollable = True cursor.withhold = True return cursor def _incr_exceptions(self): """Increment the number of exceptions for the current connection.""" self._pool_manager.get_connection(self.pid, self._conn).exceptions += 1 def _incr_executions(self): """Increment the number of executions for the current connection.""" self._pool_manager.get_connection(self.pid, self._conn).executions += 1 def _psycopg2_connect(self, kwargs): """Return a psycopg2 connection for the specified kwargs. Extend for use in async session adapters. :param dict kwargs: Keyword connection args :rtype: psycopg2.extensions.connection """ return psycopg2.connect(**kwargs) @staticmethod def _register_unicode(connection): """Register the cursor to be able to receive Unicode string. :type connection: psycopg2.extensions.connection :param connection: Where to register things """ psycopg2.extensions.register_type(psycopg2.extensions.UNICODE, connection) psycopg2.extensions.register_type(psycopg2.extensions.UNICODEARRAY, connection) @staticmethod def _register_uuid(connection): """Register the UUID extension from the psycopg2.extra module :type connection: psycopg2.extensions.connection :param connection: Where to register things """ psycopg2.extras.register_uuid(conn_or_curs=connection) @property def _status(self): """Return the current connection status as an integer value. 
The status should match one of the following constants: - queries.Session.INTRANS: Connection established, in transaction - queries.Session.PREPARED: Prepared for second phase of transaction - queries.Session.READY: Connected, no active transaction :rtype: int """ if self._conn.status == psycopg2.extensions.STATUS_BEGIN: return self.READY return self._conn.status <MSG> Don't clean the pool on Session cleanup Cleaning the pool on session cleanup will remove the pool completely even if there are other sessions with the same DSN/PID. Also, go directly to the pool manager instance to free instead of the internal handle for it. Add debug logging for when a connection is reused vs created new <DFF> @@ -255,13 +255,11 @@ class Session(object): if self._conn: try: - self._pool_manager.free(self.pid, self._conn) + pool.PoolManager.instance().free(self.pid, self._conn) except pool.ConnectionNotFoundError: pass self._conn = None - self._pool_manager.clean(self.pid) - def _connect(self): """Connect to PostgreSQL, either by reusing a connection from the pool if possible, or by creating the new connection. @@ -273,12 +271,14 @@ class Session(object): # Attempt to get a cached connection from the connection pool try: connection = self._pool_manager.get(self.pid, self) + LOGGER.debug("Re-using connection for %s", self.pid) except pool.NoIdleConnectionsError: if self._pool_manager.is_full(self.pid): raise # Create a new PostgreSQL connection kwargs = utils.uri_to_kwargs(self._uri) + LOGGER.debug("Creating a new connection for %s", self.pid) connection = self._psycopg2_connect(kwargs) self._pool_manager.add(self.pid, connection)
3
Don't clean the pool on Session cleanup
3
.py
py
bsd-3-clause
gmr/queries
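The commit message in the record above argues that freeing a connection back to the shared pool must not also `clean()` the pool, because other sessions with the same DSN/PID may still be using it. The sketch below illustrates that invariant with a toy stand-in for the library's `PoolManager` singleton; the class, method bodies, and string "connections" here are illustrative assumptions, not the actual `queries.pool` API.

```python
class PoolManager:
    """Toy stand-in for queries.pool.PoolManager (a process-wide singleton)."""
    _instance = None

    def __init__(self):
        self.pools = {}  # pool id -> set of connections checked into the pool

    @classmethod
    def instance(cls):
        # Lazily create the single shared manager, as the library does.
        if cls._instance is None:
            cls._instance = cls()
        return cls._instance

    def add(self, pid, conn):
        self.pools.setdefault(pid, set()).add(conn)

    def free(self, pid, conn):
        # Release one session's connection; the pool itself survives.
        self.pools[pid].discard(conn)

    def clean(self, pid):
        # Drops the entire pool -- the behavior the commit removes from
        # Session cleanup, because it would strand other sessions.
        self.pools.pop(pid, None)


manager = PoolManager.instance()
manager.add('pid-1', 'conn-a')  # session A's connection
manager.add('pid-1', 'conn-b')  # session B shares the same DSN, hence the same pid

# Session A exits: free only its own connection, never clean() the pool.
manager.free('pid-1', 'conn-a')
assert manager.pools['pid-1'] == {'conn-b'}  # session B is unaffected
```

Had session A called `manager.clean('pid-1')` instead, session B's connection would have been dropped along with the pool, which is exactly the bug the diff fixes.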
1299
<NME> bootstrap <BEF> #!/bin/sh # vim: set ts=2 sts=2 sw=2 et: test -n "$SHELLDEBUG" && set -x if test -e /var/run/docker.sock then DOCKER_IP=127.0.0.1 else echo "Docker environment not detected." exit 1 fi set -e if test -z "$DOCKER_COMPOSE_PREFIX" then CWD=${PWD##*/} DOCKER_COMPOSE_PREFIX=${CWD/_/} fi COMPOSE_ARGS="-p ${DOCKER_COMPOSE_PREFIX}" test -d build || mkdir build get_exposed_port() { docker-compose ${COMPOSE_ARGS} port $1 $2 | cut -d: -f2 } docker-compose ${COMPOSE_ARGS} down --volumes --remove-orphans docker-compose ${COMPOSE_ARGS} pull docker-compose ${COMPOSE_ARGS} up -d --no-recreate CONTAINER="${DOCKER_COMPOSE_PREFIX}_postgres_1" PORT=$(get_exposed_port postgres 5432) echo "Waiting for ${CONTAINER} \c" export PG until psql -U postgres -h ${DOCKER_IP} -p ${PORT} -c 'SELECT 1' > /dev/null 2> /dev/null; do echo ".\c" sleep 1 done echo " done" cat > build/test-environment<<EOF export DOCKER_COMPOSE_PREFIX=${DOCKER_COMPOSE_PREFIX} export PGHOST=${DOCKER_IP} export PGPORT=${PORT} EOF <MSG> Merge pull request #30 from nvllsvm/devenv boostrap updates <DFF> @@ -10,36 +10,33 @@ else fi set -e -if test -z "$DOCKER_COMPOSE_PREFIX" +if test -z "$COMPOSE_PROJECT_NAME" then CWD=${PWD##*/} - DOCKER_COMPOSE_PREFIX=${CWD/_/} + export COMPOSE_PROJECT_NAME=${CWD/_/} fi -COMPOSE_ARGS="-p ${DOCKER_COMPOSE_PREFIX}" -test -d build || mkdir build +mkdir -p build get_exposed_port() { - docker-compose ${COMPOSE_ARGS} port $1 $2 | cut -d: -f2 + docker-compose port $1 $2 | cut -d: -f2 } -docker-compose ${COMPOSE_ARGS} down --volumes --remove-orphans -docker-compose ${COMPOSE_ARGS} pull -docker-compose ${COMPOSE_ARGS} up -d --no-recreate +docker-compose down -t 0 --volumes --remove-orphans +docker-compose pull +docker-compose up -d --no-recreate -CONTAINER="${DOCKER_COMPOSE_PREFIX}_postgres_1" PORT=$(get_exposed_port postgres 5432) -echo "Waiting for ${CONTAINER} \c" +printf "Waiting for postgres " export PG -until psql -U postgres -h ${DOCKER_IP} -p ${PORT} -c 'SELECT 1' > 
/dev/null 2> /dev/null; do - echo ".\c" +until docker-compose exec postgres pg_isready -q; do + printf "." sleep 1 done echo " done" cat > build/test-environment<<EOF -export DOCKER_COMPOSE_PREFIX=${DOCKER_COMPOSE_PREFIX} export PGHOST=${DOCKER_IP} export PGPORT=${PORT} EOF
10
Merge pull request #30 from nvllsvm/devenv
13
bootstrap
bsd-3-clause
gmr/queries
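The bootstrap record's `get_exposed_port` helper parses `docker-compose port $1 $2` output (a `host:port` pair) with `cut -d: -f2`. A sketch of the same parsing step in Python, with a hypothetical sample output string:

```python
def get_exposed_port(port_output: str) -> str:
    """Keep the part after the first colon, i.e. field 2 of a host:port pair
    (mirrors `cut -d: -f2` in the bootstrap script)."""
    return port_output.strip().split(':', 1)[1]

# `docker-compose port postgres 5432` prints something like "0.0.0.0:49153"
assert get_exposed_port('0.0.0.0:49153\n') == '49153'
```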