diff --git a/README.md b/README.md
index dc7b2f4..de712f8 100644
--- a/README.md
+++ b/README.md
@@ -8,6 +8,8 @@ Datums is a PostgreSQL pipeline for [Reporter](http://www.reporter-app.com/). Da
 
 # Getting Started
 
+Skip ahead to [Migrating to v1.0.0](#migrating-to-v100)
+
 ## Create the Database
 
 To create the datums database, first ensure that you have postgres installed and that the server is running locally. To create the database
@@ -70,40 +72,66 @@ or, from Python
 >>> base.database_teardown(base.engine)
 ```
 
+#### Migrating to v1.0.0
+
+##### `alembic`
+v1.0.0 introduces some changes to the database schema and the datums data model. To upgrade your existing datums database to a v1.0.0-compatible schema, a series of alembic migrations has been provided. To access these migrations, you will need to have the datums repository cloned to your local machine. If you've installed datums via pip, feel free to delete the cloned repository after you migrate your database, but remember to `pip install --upgrade datums` before trying to add more reports.
+
+To migrate your database, clone (or pull) this repository and run the setup script, then `cd` into the repository and run the migrations with
+
+```bash
+/path/to/datums/ $ alembic upgrade head
+/path/to/datums/ $ datums --update "/path/to/reporter/folder/*.json"
+/path/to/datums/ $ datums --add "/path/to/reporter/folder/*.json"
+```
+
+After migrating, it's important to `--update` all reports to add the `pressure_in` and `pressure_mb` attributes on weather reports as well as the `inland_water` attribute on placemark reports. You can safely ignore the `UserWarning` that no `uniqueIdentifier` can be found for altitude reports; those altitude reports will be added when you `--add` in the next step.
+
+v1.0.0 adds support for altitude reports. After updating, you'll need to `--add` all your reports to capture altitude reports from before May 2015. They must be added instead of updated because altitude reports have not always had `uniqueIdentifier`s. Adding will allow datums to create UUIDs for these earlier altitude reports. If no UUID is found for an altitude report, datums cannot update or delete it. See [issue 29](https://github.com/thejunglejane/datums/issues/29) for more information.
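+
+Note that the migration environment in `datums/migrations/env.py` reads the database URL from the `DATABASE_URI` environment variable rather than from `alembic.ini`, so make sure `DATABASE_URI` is set in your shell before running the migrations. The connection string below is only an illustration; point it at the datums database you created above:
+
+```bash
+/path/to/datums/ $ export DATABASE_URI=postgresql://$(whoami)@localhost:5432/datums
+/path/to/datums/ $ alembic upgrade head
+```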
+
+##### Quick and Dirty
+Alternatively, you can just tear down your existing datums database and set up a new one. Make sure you tear down your database before upgrading datums.
+```bash
+$ datums --teardown
+$ pip install --upgrade datums
+$ datums --setup
+$ datums --add "/path/to/reporter/folder/*.json"
+```
+
 # Adding, Updating, and Deleting
 
 The `pipeline` module allows you to add, update, and delete reports and questions.
 
 ### Definitions
 We should define a few terms before getting into how to use the pipeline.
 
-* A **reporter file** is a JSON file that contains all the **report**s and all the **question**s for a given day. These files should be located in your Dropbox/Apps/Reporter-App folder.
-* A **report** comprises a **snapshot** and all the **response**s collected by Reporter when you make a report.
-* A **snapshot** contains the information that the Reporter app automatically collects when you make a report, things like the weather, background noise, etc.
+* A **reporter file** is a JSON file that contains all the **snapshot**s and all the **question**s for a given day. These files should be located in your Dropbox/Apps/Reporter-App folder.
+* A **snapshot** comprises a **report** and all the **response**s collected by Reporter when you make a report.
+* A **report** contains the information that the Reporter app automatically collects when you make a report, things like the weather, background noise, etc.
 * A **response** is the answer you enter for a question.
 
-Every **report** will have one **snapshot** and some number **response**s associated with it, and every **reporter file** will have some number **report**s and some number of **question**s associated with it, depending on how many times you make reports throughout the day.
+Every **snapshot** will have one **report** and some number of **response**s associated with it, and every **reporter file** will have some number of **snapshot**s and some number of **question**s associated with it, depending on how many times you make reports throughout the day.
 
 If you add or delete questions from the Reporter app, different **reporter file**s will have different **question**s from day to day. When you add a new **reporter file**, first add the **question**s from that day. If there are no new **question**s, nothing will happen; if there is a new **question**, datums will add it to the database.
 
-## Adding questions and reports
+## Adding questions, reports, and responses
 
-When you first set datums up, you'll probably want to add all the questions and reports in your Dropbox Reporter folder.
+When you first set datums up, you'll probably want to add all the questions, reports, and responses in your Dropbox Reporter folder.
 
 #### Command Line
 
-To add all the reporter files in your Dropbox Reporter folder from the command line, execute `datums` with the `--add` flag followed by the path to your Dropbox Reporter folder
+To add all the Reporter files in your Dropbox Reporter folder from the command line, execute `datums` with the `--add` flag followed by the path to your Dropbox Reporter folder
 ```
 $ datums --add "/path/to/reporter/folder/*.json"
 ```
 Make sure you include the '*.json' at the end to exclude the extra files in that folder.
 
-To add the questions and reports from a single reporter file, include the filepath after the `--add` flag instead of the directory's path
+To add the questions and reports from a single Reporter file, include the filepath after the `--add` flag instead of the directory's path
 ```
 $ datums --add "/path/to/file"
 ```
 
 #### Python
 
-You can add all the reporter files or a single reporter file from Python as well.
+You can add all the Reporter files or a single Reporter file from Python as well.
 
 ```python
->>> from datums.pipeline import add
+>>> from datums import pipeline
@@ -117,8 +145,9 @@ You can add all the reporter files or a single reporter file from Python as well
-...     # Add questions first because reports need them
-...     for question in day['questions']:
-...         add.add_question(question)
-...     for report in day['snapshots']:
-...         add.add_report(report)
+...     # Add questions first because responses need them
+...     for question in day['questions']:
+...         pipeline.QuestionPipeline(question).add()
+...     for snapshot in day['snapshots']:
+...         # Add report and responses
+...         pipeline.SnapshotPipeline(snapshot).add()
 ```
 ```python
->>> from datums.pipeline import add
+>>> from datums import pipeline
@@ -128,11 +157,12 @@ You can add all the reporter files or a single reporter file from Python as well
->>> # Add questions first because reports need them
->>> for question in day['questions']:
-...     add.add_question(question)
->>> for report in day['snapshots']:
-...     add.add_report(report)
+>>> # Add questions first because responses need them
+>>> for question in day['questions']:
+...     pipeline.QuestionPipeline(question).add()
+>>> for snapshot in day['snapshots']:
+...     # Add report and responses
+...     pipeline.SnapshotPipeline(snapshot).add()
 ```
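+
+To sanity-check an import, you can count the reports that made it into the database. This is only an illustration; it assumes your database is named `datums`, as in the setup instructions above
+```
+$ psql datums -c "SELECT COUNT(*) FROM reports;"
+```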
 
-You can also add a single report from a reporter file, if you need/want to
+You can also add a single snapshot from a Reporter file, if you need/want to
 ```python
->>> from datums.pipeline import add
+>>> from datums import pipeline
 >>> import json
@@ -142,18 +172,18 @@ You can also add a single report from a reporter file, if you need/want to
->>> add.add_report(report)
+>>> pipeline.SnapshotPipeline(report).add()
 ```
 
-## Updating reports
+## Updating reports and responses
 
-If you make a change to one of your Reporter files, or if Reporter makes a change to one of those files, you can also update your reports. If a new report has been added the file located at '/path/to/file', the update will create it in the database.
+If you make a change to one of your Reporter files, or if Reporter makes a change to one of those files, you can also update your reports and responses. If a new snapshot has been added to the file located at '/path/to/file', the update will create it in the database.
 
 #### Command Line
 
-To update all reports in all the files in your Dropbox Reporter folder
+To update all snapshots in all the files in your Dropbox Reporter folder
 ```
 $ datums --update "/path/to/reporter/folder/*.json"
 ```
 
-and to update all the reports in a single reporter file
+and to update all the snapshots in a single Reporter file
 ```
 $ datums --update "/path/to/file"
 ```
@@ -169,42 +199,42 @@ From Python
 >>> for file in all_reporter_files:
 ...     with open(os.path.expanduser(file), 'r') as f:
 ...         day = json.load(f)
-...     for report in day['snapshots']:
-...         update.update_report(report)
+...     for snapshot in day['snapshots']:
+...         pipeline.SnapshotPipeline(snapshot).update()
 ```
 ```python
->>> from datums.pipeline import update
+>>> from datums import pipeline
 >>> import json
 >>> with open('/path/to/file', 'r') as f:
 ...     day = json.load(f)
->>> for report in day['snapshots']:
-...     update.update_report(report)
+>>> for snapshot in day['snapshots']:
+...     pipeline.SnapshotPipeline(snapshot).update()
 ```
 
-To update an individual report within a reporter file with
+To update an individual snapshot within a Reporter file with
 ```python
->>> from datums.pipeline import update
+>>> from datums import pipeline
 >>> import json
 >>> with open('/path/to/file', 'r') as f:
 ...     day = json.load(f)
->>> report = day['snapshots'][n] # where n is the index of the report
->>> update.update_report(report)
+>>> snapshot = day['snapshots'][n] # where n is the index of the snapshot
+>>> pipeline.SnapshotPipeline(snapshot).update()
 ```
 
-#### Changing a Report
-> While it is possible to change your response to a question from Python, it's not recommended. Datums won't overwrite the contents of your files, and you will lose the changes that you make the next time you update the reports in that file. If you make changes to a file itself, you may run into conflicts if Reporter tries to update that file.
+#### Changing a Snapshot
+> While it is possible to change your response to a question from Python, it's not recommended. Datums won't overwrite the contents of your files, and you will lose the changes that you make the next time you update the snapshots in that file. If you make changes to a file itself, you may run into conflicts if Reporter tries to update that file.
 
-> If you do need to change your response to a question, I recommend that you do so from the Reporter app. The list icon in the top left corner will display all of your reports, and you can select a report and make changes. If you have 'Save to Dropbox' enabled, the Dropbox file containing that report will be updated when you save your changes; if you don't have 'Save to Dropbox' enabled, the file containing the report will be updated the next time you export. Once the file is updated, you can follow the steps above to update the reports in that file in the database.
+> If you do need to change your response to a question, I recommend that you do so from the Reporter app. The list icon in the top left corner will display all of your snapshots, and you can select a snapshot and make changes. If you have 'Save to Dropbox' enabled, the Dropbox file containing that snapshot will be updated when you save your changes; if you don't have 'Save to Dropbox' enabled, the file containing the snapshot will be updated the next time you export. Once the file is updated, you can follow the steps above to update the snapshots in that file in the database.
 
-## Deleting reports
+## Deleting reports and responses
 
-Deleting reports from the database is much the same.
+Deleting reports and responses from the database is much the same. Note that deleting a report will delete any responses included in the snapshot containing that report.
 
 #### Command Line
 
-You can delete all reports in your Dropbox Reporter folder with
+You can delete all snapshots in your Dropbox Reporter folder with
 ```
 $ datums --delete "/path/to/reporter/folder/*.json"
 ```
 
-and the reports in a single file with
+and the snapshots in a single file with
 ```
 $ datums --delete "/path/to/file"
 ```
@@ -219,26 +249,26 @@ $ datums --delete "/path/to/file"
 >>> for file in all_reporter_files:
 ...     with open(os.path.expanduser(file), 'r') as f:
 ...         day = json.load(f)
-...     for report in day['snapshots']:
-...         delete.delete_report(report)
+...     for snapshot in day['snapshots']:
+...         pipeline.SnapshotPipeline(snapshot).delete()
 ```
 ```python
->>> from datums.pipeline import delete
+>>> from datums import pipeline
 >>> import json
 >>> with open('/path/to/file', 'r') as f:
 ...     day = json.load(f)
->>> for report in day['snapshots']:
-...     delete.delete_report(report)
+>>> for snapshot in day['snapshots']:
+...     pipeline.SnapshotPipeline(snapshot).delete()
 ```
 
-To delete a single report within a reporter file
+To delete a single snapshot within a Reporter file
 ```python
->>> from datums.pipeline import delete
+>>> from datums import pipeline
 >>> import json
 >>> with open('/path/to/file', 'r') as f:
 ...     day = json.load(f)
->>> report = day['snapshots'][n] # where n is the index of the report
->>> delete.delete_report(report)
+>>> snapshot = day['snapshots'][n] # where n is the index of the snapshot
+>>> pipeline.SnapshotPipeline(snapshot).delete()
 ```
 
 ## Deleting questions
 
@@ -257,6 +287,7 @@ You can also delete questions from the database. Note that this will delete any
 # Notes
 
 1. This version of datums only supports JSON exports.
+2. Photo sets are not supported.
 
 # Licensing
 
diff --git a/alembic.ini b/alembic.ini
new file mode 100644
index 0000000..6a01e82
--- /dev/null
+++ b/alembic.ini
@@ -0,0 +1,68 @@
+# A generic, single database configuration.
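+#
+# NOTE: datums' migration environment (datums/migrations/env.py) overrides
+# the sqlalchemy.url value defined below with the DATABASE_URI environment
+# variable, so the placeholder URL in this file does not need to be edited.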
+ +[alembic] +# path to migration scripts +script_location = datums/migrations + +# template used to generate migration files +# file_template = %%(rev)s_%%(slug)s + +# max length of characters to apply to the +# "slug" field +#truncate_slug_length = 40 + +# set to 'true' to run the environment during +# the 'revision' command, regardless of autogenerate +# revision_environment = false + +# set to 'true' to allow .pyc and .pyo files without +# a source .py file to be detected as revisions in the +# versions/ directory +# sourceless = false + +# version location specification; this defaults +# to alembic/versions. When using multiple version +# directories, initial revisions must be specified with --version-path +# version_locations = %(here)s/bar %(here)s/bat alembic/versions + +# the output encoding used when revision files +# are written from script.py.mako +# output_encoding = utf-8 + +sqlalchemy.url = driver://user:pass@localhost/dbname + + +# Logging configuration +[loggers] +keys = root,sqlalchemy,alembic + +[handlers] +keys = console + +[formatters] +keys = generic + +[logger_root] +level = WARN +handlers = console +qualname = + +[logger_sqlalchemy] +level = WARN +handlers = +qualname = sqlalchemy.engine + +[logger_alembic] +level = INFO +handlers = +qualname = alembic + +[handler_console] +class = StreamHandler +args = (sys.stderr,) +level = NOTSET +formatter = generic + +[formatter_generic] +format = %(levelname)-5.5s [%(name)s] %(message)s +datefmt = %H:%M:%S diff --git a/bin/datums b/bin/datums index 06e4764..7e5a8e1 100755 --- a/bin/datums +++ b/bin/datums @@ -1,4 +1,6 @@ #!/usr/bin/env python +from __future__ import with_statement + import argparse import glob import json @@ -6,12 +8,7 @@ import sys import os from datums import __version__ - from datums import pipeline -from datums.pipeline import add -from datums.pipeline import update -from datums.pipeline import delete - from datums import models from datums.models import base @@ -21,18 +18,20 @@ from datums.models import base def create_parser(): # Breaking out argument parsing for easier testing parser = argparse.ArgumentParser( - prog='datums', description='PostgreSQL pipeline for Reporter.', usage='%(prog)s [options]') - parser.add_argument('-v', '--version', action='store_true') - parser.add_argument('--setup', action='store_true', - help='Setup the datums database') - parser.add_argument('--teardown', action='store_true', - help='Tear down the datums database') - parser.add_argument('-A', '--add', - help='Add the reports in file(s) specified') - parser.add_argument('-U', '--update', - help='Update the reports in the file(s) specified') - parser.add_argument('-D', '--delete', - help='Delete the reports in the file(s) specified from the database') + prog='datums', description='PostgreSQL pipeline for Reporter.', + usage='%(prog)s [options]') + parser.add_argument('-V', '--version', action='store_true') + parser.add_argument( + '--setup', action='store_true', help='Setup the datums database') + parser.add_argument( + '--teardown', action='store_true', + help='Tear down the datums database') + parser.add_argument( + '-A', '--add', help='Add the reports in file(s) specified') + parser.add_argument( + '-U', '--update', help='Update the reports in the file(s) specified') + parser.add_argument( + '-D', '--delete', help='Delete the reports in the file(s) specified') return parser @@ -51,26 +50,26 @@ def main(): files = glob.glob(os.path.expanduser(args.add)) for file in files: with open(file, 'r') as f: - reports = 
json.load(f) - # Add questions first because reports need them - for question in reports['questions']: - add.add_question(question) - for snapshot in reports['snapshots']: - add.add_report(snapshot) + day = json.load(f) + # Add questions first because responses need them + for question in day['questions']: + pipeline.QuestionPipeline(question).add() + for snapshot in day['snapshots']: + pipeline.SnapshotPipeline(snapshot).add() if args.update: files = glob.glob(os.path.expanduser(args.update)) for file in files: with open(file, 'r') as f: - reports = json.load(f) - for snapshot in reports['snapshots']: - update.update_report(snapshot) + day = json.load(f) + for snapshot in day['snapshots']: + pipeline.SnapshotPipeline(snapshot).update() if args.delete: files = glob.glob(os.path.expanduser(args.delete)) for file in files: with open(file, 'r') as f: - reports = json.load(f) - for snapshot in reports['snapshots']: - delete.delete_report(snapshot) + day = json.load(f) + for snapshot in day['snapshots']: + pipeline.SnapshotPipeline(snapshot).delete() if __name__ == '__main__': diff --git a/datums/__init__.py b/datums/__init__.py index 0fd1318..d0476ec 100644 --- a/datums/__init__.py +++ b/datums/__init__.py @@ -1 +1,2 @@ -__version__ = '0.0.7' \ No newline at end of file + +__version__ = '1.0.0' diff --git a/datums/migrations/README b/datums/migrations/README new file mode 100644 index 0000000..98e4f9c --- /dev/null +++ b/datums/migrations/README @@ -0,0 +1 @@ +Generic single-database configuration. \ No newline at end of file diff --git a/datums/migrations/env.py b/datums/migrations/env.py new file mode 100644 index 0000000..9f93ff3 --- /dev/null +++ b/datums/migrations/env.py @@ -0,0 +1,75 @@ +from __future__ import with_statement +from alembic import context +from sqlalchemy import engine_from_config, pool +from logging.config import fileConfig +import os + + +# this is the Alembic Config object, which provides +# access to the values within the .ini file in use. +config = context.config + +# Interpret the config file for Python logging. +# This line sets up loggers basically. +fileConfig(config.config_file_name) + +# add your model's MetaData object here +# for 'autogenerate' support +# from myapp import mymodel +# target_metadata = mymodel.Base.metadata +target_metadata = None + +# other values from the config, defined by the needs of env.py, +# can be acquired: +# my_important_option = config.get_main_option("my_important_option") +# ... etc. + + +def run_migrations_offline(): + """Run migrations in 'offline' mode. + + This configures the context with just a URL + and not an Engine, though an Engine is acceptable + here as well. By skipping the Engine creation + we don't even need a DBAPI to be available. + + Calls to context.execute() here emit the given string to the + script output. + + """ + url = os.environ['DATABASE_URI'] + context.configure( + url=url, target_metadata=target_metadata, literal_binds=True) + + with context.begin_transaction(): + context.run_migrations() + + +def run_migrations_online(): + """Run migrations in 'online' mode. + + In this scenario we need to create an Engine + and associate a connection with the context. 
+ + """ + alembic_config = config.get_section(config.config_ini_section) + alembic_config['sqlalchemy.url'] = os.environ['DATABASE_URI'] + + connectable = engine_from_config( + alembic_config, + prefix='sqlalchemy.', + poolclass=pool.NullPool) + + with connectable.connect() as connection: + context.configure( + connection=connection, + target_metadata=target_metadata + ) + + with context.begin_transaction(): + context.run_migrations() + +if context.is_offline_mode(): + run_migrations_offline() +else: + run_migrations_online() diff --git a/datums/migrations/script.py.mako b/datums/migrations/script.py.mako new file mode 100644 index 0000000..43c0940 --- /dev/null +++ b/datums/migrations/script.py.mako @@ -0,0 +1,24 @@ +"""${message} + +Revision ID: ${up_revision} +Revises: ${down_revision | comma,n} +Create Date: ${create_date} + +""" + +# revision identifiers, used by Alembic. +revision = ${repr(up_revision)} +down_revision = ${repr(down_revision)} +branch_labels = ${repr(branch_labels)} +depends_on = ${repr(depends_on)} + +from alembic import op +import sqlalchemy as sa +${imports if imports else ""} + +def upgrade(): + ${upgrades if upgrades else "pass"} + + +def downgrade(): + ${downgrades if downgrades else "pass"} diff --git a/datums/migrations/versions/2698789ba4a4_add_inland_water_to_placemark_reports_.py b/datums/migrations/versions/2698789ba4a4_add_inland_water_to_placemark_reports_.py new file mode 100644 index 0000000..d336e19 --- /dev/null +++ b/datums/migrations/versions/2698789ba4a4_add_inland_water_to_placemark_reports_.py @@ -0,0 +1,24 @@ +"""Add inland_water to placemark_reports table + +Revision ID: 2698789ba4a4 +Revises: 8f22f932fa58 +Create Date: 2016-01-25 23:11:18.515616 + +""" + +# revision identifiers, used by Alembic. +revision = '2698789ba4a4' +down_revision = '8f22f932fa58' +branch_labels = None +depends_on = None + +from alembic import op +from sqlalchemy import Column, String + + +def upgrade(): + op.add_column('placemark_reports', Column('inland_water', String)) + + +def downgrade(): + op.drop_column('placemark_reports', 'inland_water') diff --git a/datums/migrations/versions/457bbf802239_add_foreign_key_constraints.py b/datums/migrations/versions/457bbf802239_add_foreign_key_constraints.py new file mode 100644 index 0000000..2c09240 --- /dev/null +++ b/datums/migrations/versions/457bbf802239_add_foreign_key_constraints.py @@ -0,0 +1,61 @@ +"""Add foreign key constraints + +Revision ID: 457bbf802239 +Revises: e912ea8b3cb1 +Create Date: 2016-01-25 23:04:37.492418 + +""" + +# revision identifiers, used by Alembic. 
+revision = '457bbf802239' +down_revision = 'e912ea8b3cb1' +branch_labels = None +depends_on = None + +from alembic import op +import sqlalchemy as sa + + +def upgrade(): + op.create_foreign_key( + constraint_name='audio_reports_report_id_fkey', + source_table='audio_reports', referent_table='reports', + local_cols=['report_id'], remote_cols=['id'], ondelete='CASCADE') + + op.create_foreign_key( + constraint_name='location_reports_report_id_fkey', + source_table='location_reports', referent_table='reports', + local_cols=['report_id'], remote_cols=['id'], ondelete='CASCADE') + + op.create_foreign_key( + constraint_name='placemark_reports_location_report_id_fkey', + source_table='placemark_reports', referent_table='location_reports', + local_cols=['location_report_id'], remote_cols=['id'], + ondelete='CASCADE') + + op.create_foreign_key( + constraint_name='weather_reports_report_id_fkey', + source_table='weather_reports', referent_table='reports', + local_cols=['report_id'], remote_cols=['id'], ondelete='CASCADE') + + op.create_foreign_key( + constraint_name='responses_report_id_fkey', + source_table='responses', referent_table='reports', + local_cols=['report_id'], remote_cols=['id'], ondelete='CASCADE') + + +def downgrade(): + with op.batch_alter_table('audio_reports') as batch_op: + batch_op.drop_constraint('audio_reports_report_id_fkey') + + with op.batch_alter_table('location_reports') as batch_op: + batch_op.drop_constraint('location_reports_report_id_fkey') + + with op.batch_alter_table('placemark_reports') as batch_op: + batch_op.drop_constraint('placemark_reports_location_report_id_fkey') + + with op.batch_alter_table('weather_reports') as batch_op: + batch_op.drop_constraint('weather_reports_report_id_fkey') + + with op.batch_alter_table('responses') as batch_op: + batch_op.drop_constraint('responses_report_id_fkey') diff --git a/datums/migrations/versions/728b24c64cea_add_primary_key_constraints.py b/datums/migrations/versions/728b24c64cea_add_primary_key_constraints.py new file mode 100644 index 0000000..305d3a1 --- /dev/null +++ b/datums/migrations/versions/728b24c64cea_add_primary_key_constraints.py @@ -0,0 +1,53 @@ +"""Add primary key constraints + +Revision ID: 728b24c64cea +Revises: 785ab1c2c255 +Create Date: 2016-01-25 22:54:18.023243 + +""" + +# revision identifiers, used by Alembic. 
+revision = '728b24c64cea' +down_revision = '785ab1c2c255' +branch_labels = None +depends_on = None + +from alembic import op +import sqlalchemy as sa + + +def upgrade(): + with op.batch_alter_table('audio_reports') as batch_op: + batch_op.drop_constraint('audio_snapshots_pkey') + batch_op.create_primary_key('audio_reports_pkey', columns=['id']) + + with op.batch_alter_table('location_reports') as batch_op: + batch_op.drop_constraint('location_snapshots_pkey') + batch_op.create_primary_key('location_reports_pkey', columns=['id']) + + with op.batch_alter_table('placemark_reports') as batch_op: + batch_op.drop_constraint('placemark_snapshots_pkey') + batch_op.create_primary_key('placemark_reports_pkey', columns=['id']) + + with op.batch_alter_table('weather_reports') as batch_op: + batch_op.drop_constraint('weather_snapshots_pkey') + batch_op.create_primary_key('weather_reports_pkey', columns=['id']) + + +def downgrade(): + with op.batch_alter_table('audio_reports') as batch_op: + batch_op.drop_constraint('audio_reports_pkey') + batch_op.create_primary_key('audio_snapshots_pkey', columns=['id']) + + with op.batch_alter_table('location_reports') as batch_op: + batch_op.drop_constraint('location_reports_pkey') + batch_op.create_primary_key('location_snapshots_pkey', columns=['id']) + + with op.batch_alter_table('placemark_reports') as batch_op: + batch_op.drop_constraint('placemark_reports_pkey') + batch_op.create_primary_key('placemark_snapshots_pkey', columns=['id']) + + with op.batch_alter_table('weather_reports') as batch_op: + batch_op.drop_constraint('weather_reports_pkey') + batch_op.create_primary_key('weather_snapshots_pkey', columns=['id']) + diff --git a/datums/migrations/versions/785ab1c2c255_delete_foreign_key_constraints.py b/datums/migrations/versions/785ab1c2c255_delete_foreign_key_constraints.py new file mode 100644 index 0000000..c7a28df --- /dev/null +++ b/datums/migrations/versions/785ab1c2c255_delete_foreign_key_constraints.py @@ -0,0 +1,65 @@ +"""Delete foreign key constraints + +Revision ID: 785ab1c2c255 +Revises: c984c6d45f23 +Create Date: 2016-01-25 22:52:32.756368 + +""" + +# revision identifiers, used by Alembic. 
+revision = '785ab1c2c255'
+down_revision = 'c984c6d45f23'
+branch_labels = None
+depends_on = None
+
+from alembic import op
+import sqlalchemy as sa
+
+
+def upgrade():
+    with op.batch_alter_table('audio_reports') as batch_op:
+        batch_op.drop_constraint(
+            'audio_snapshots_snapshot_id_fkey')
+
+    with op.batch_alter_table('location_reports') as batch_op:
+        batch_op.drop_constraint(
+            'location_snapshots_snapshot_id_fkey')
+
+    with op.batch_alter_table('placemark_reports') as batch_op:
+        batch_op.drop_constraint(
+            'placemark_snapshots_location_snapshot_id_fkey')
+
+    with op.batch_alter_table('weather_reports') as batch_op:
+        batch_op.drop_constraint(
+            'weather_snapshots_snapshot_id_fkey')
+
+    with op.batch_alter_table('responses') as batch_op:
+        batch_op.drop_constraint('responses_snapshot_id_fkey')
+
+
+def downgrade():
+    # This revision runs before the snapshot_id -> report_id column renames
+    # in e912ea8b3cb1, so the recreated constraints must use the original
+    # column and constraint names dropped in upgrade() above.
+    op.create_foreign_key(
+        constraint_name='audio_snapshots_snapshot_id_fkey',
+        source_table='audio_reports', referent_table='reports',
+        local_cols=['snapshot_id'], remote_cols=['id'], ondelete='CASCADE')
+
+    op.create_foreign_key(
+        constraint_name='location_snapshots_snapshot_id_fkey',
+        source_table='location_reports', referent_table='reports',
+        local_cols=['snapshot_id'], remote_cols=['id'], ondelete='CASCADE')
+
+    op.create_foreign_key(
+        constraint_name='placemark_snapshots_location_snapshot_id_fkey',
+        source_table='placemark_reports', referent_table='location_reports',
+        local_cols=['location_snapshot_id'], remote_cols=['id'],
+        ondelete='CASCADE')
+
+    op.create_foreign_key(
+        constraint_name='weather_snapshots_snapshot_id_fkey',
+        source_table='weather_reports', referent_table='reports',
+        local_cols=['snapshot_id'], remote_cols=['id'], ondelete='CASCADE')
+
+    op.create_foreign_key(
+        constraint_name='responses_snapshot_id_fkey', source_table='responses',
+        referent_table='reports', local_cols=['snapshot_id'],
+        remote_cols=['id'], ondelete='CASCADE')
diff --git a/datums/migrations/versions/8f22f932fa58_add_pressure_in_and_pressure_mb_to_.py b/datums/migrations/versions/8f22f932fa58_add_pressure_in_and_pressure_mb_to_.py
new file mode 100644
index 0000000..63a10e5
--- /dev/null
+++ b/datums/migrations/versions/8f22f932fa58_add_pressure_in_and_pressure_mb_to_.py
@@ -0,0 +1,26 @@
+"""Add pressure_in and pressure_mb to weather_reports table
+
+Revision ID: 8f22f932fa58
+Revises: f24937651f71
+Create Date: 2016-01-25 23:09:20.325410
+
+"""
+
+# revision identifiers, used by Alembic.
+revision = '8f22f932fa58'
+down_revision = 'f24937651f71'
+branch_labels = None
+depends_on = None
+
+from alembic import op
+from sqlalchemy import Column, Numeric
+
+
+def upgrade():
+    op.add_column('weather_reports', Column('pressure_in', Numeric))
+    op.add_column('weather_reports', Column('pressure_mb', Numeric))
+
+
+def downgrade():
+    op.drop_column('weather_reports', 'pressure_in')
+    op.drop_column('weather_reports', 'pressure_mb')
diff --git a/datums/migrations/versions/c984c6d45f23_rename_snapshot_tables_to_report_tables.py b/datums/migrations/versions/c984c6d45f23_rename_snapshot_tables_to_report_tables.py
new file mode 100644
index 0000000..352718b
--- /dev/null
+++ b/datums/migrations/versions/c984c6d45f23_rename_snapshot_tables_to_report_tables.py
@@ -0,0 +1,32 @@
+"""Rename snapshot tables to report tables.
+
+Revision ID: c984c6d45f23
+Revises:
+Create Date: 2016-01-25 22:42:52.440714
+
+"""
+
+# revision identifiers, used by Alembic.
+revision = 'c984c6d45f23'
+down_revision = None
+branch_labels = None
+depends_on = None
+
+from alembic import op
+import sqlalchemy as sa
+
+
+def upgrade():
+    op.rename_table('snapshots', 'reports')
+    op.rename_table('audio_snapshots', 'audio_reports')
+    op.rename_table('location_snapshots', 'location_reports')
+    op.rename_table('placemark_snapshots', 'placemark_reports')
+    op.rename_table('weather_snapshots', 'weather_reports')
+
+
+def downgrade():
+    op.rename_table('reports', 'snapshots')
+    op.rename_table('audio_reports', 'audio_snapshots')
+    op.rename_table('location_reports', 'location_snapshots')
+    op.rename_table('placemark_reports', 'placemark_snapshots')
+    op.rename_table('weather_reports', 'weather_snapshots')
diff --git a/datums/migrations/versions/e912ea8b3cb1_rename_snapshot_id_columns_to_report_id.py b/datums/migrations/versions/e912ea8b3cb1_rename_snapshot_id_columns_to_report_id.py
new file mode 100644
index 0000000..3c20c1c
--- /dev/null
+++ b/datums/migrations/versions/e912ea8b3cb1_rename_snapshot_id_columns_to_report_id.py
@@ -0,0 +1,49 @@
+"""Rename snapshot_id columns to report_id
+
+Revision ID: e912ea8b3cb1
+Revises: 728b24c64cea
+Create Date: 2016-01-25 23:04:18.091843
+
+"""
+
+# revision identifiers, used by Alembic.
+revision = 'e912ea8b3cb1'
+down_revision = '728b24c64cea'
+branch_labels = None
+depends_on = None
+
+from alembic import op
+import sqlalchemy as sa
+
+
+def upgrade():
+    op.alter_column(
+        'audio_reports', 'snapshot_id', new_column_name='report_id')
+
+    op.alter_column(
+        'location_reports', 'snapshot_id', new_column_name='report_id')
+
+    op.alter_column(
+        'placemark_reports', 'location_snapshot_id',
+        new_column_name='location_report_id')
+
+    op.alter_column(
+        'weather_reports', 'snapshot_id', new_column_name='report_id')
+
+    op.alter_column('responses', 'snapshot_id', new_column_name='report_id')
+
+
+def downgrade():
+    op.alter_column(
+        'audio_reports', 'report_id', new_column_name='snapshot_id')
+
+    op.alter_column(
+        'location_reports', 'report_id', new_column_name='snapshot_id')
+
+    # Reverse the placemark rename made in upgrade() above
+    op.alter_column(
+        'placemark_reports', 'location_report_id',
+        new_column_name='location_snapshot_id')
+
+    op.alter_column(
+        'weather_reports', 'report_id', new_column_name='snapshot_id')
+
+    op.alter_column('responses', 'report_id', new_column_name='snapshot_id')
diff --git a/datums/migrations/versions/f24937651f71_add_altitude_reports_table.py b/datums/migrations/versions/f24937651f71_add_altitude_reports_table.py
new file mode 100644
index 0000000..104100e
--- /dev/null
+++ b/datums/migrations/versions/f24937651f71_add_altitude_reports_table.py
@@ -0,0 +1,35 @@
+"""Add altitude_reports table
+
+Revision ID: f24937651f71
+Revises: 457bbf802239
+Create Date: 2016-01-25 23:07:52.202616
+
+"""
+
+# revision identifiers, used by Alembic.
+revision = 'f24937651f71' +down_revision = '457bbf802239' +branch_labels = None +depends_on = None + +from alembic import op +from sqlalchemy import Column, ForeignKey, Numeric +from sqlalchemy_utils import UUIDType + + +def upgrade(): + op.create_table( + 'altitude_reports', + Column('id', UUIDType, primary_key=True), + Column('floors_ascended', Numeric), + Column('floors_descended', Numeric), + Column('gps_altitude_from_location', Numeric), + Column('gps_altitude_raw', Numeric), + Column('pressure', Numeric), + Column('pressure_adjusted', Numeric), + Column('report_id', UUIDType, ForeignKey( + 'reports.id', ondelete='CASCADE'), nullable=False)) + + +def downgrade(): + op.drop_table('altitude_reports') diff --git a/datums/models/__init__.py b/datums/models/__init__.py index c3b179c..8e04168 100644 --- a/datums/models/__init__.py +++ b/datums/models/__init__.py @@ -1,6 +1,6 @@ from questions import * from responses import * -from snapshots import * +from reports import * '''SQLAlchemy models for this application.''' diff --git a/datums/models/base.py b/datums/models/base.py index 8fb06a5..9202d15 100644 --- a/datums/models/base.py +++ b/datums/models/base.py @@ -1,7 +1,10 @@ +# -*- coding: utf-8 -*- + +import os +from sqlalchemy import create_engine from sqlalchemy.ext.declarative import declarative_base from sqlalchemy.orm import sessionmaker, scoped_session -from sqlalchemy import create_engine -import os + # Initialize Base class Base = declarative_base() @@ -14,16 +17,40 @@ session.configure(bind=engine) +def database_setup(engine): + '''Set up the database. + ''' + metadata.create_all(engine) + + +def database_teardown(engine): + '''BURN IT ALL DOWN (╯°□°)╯︵ ┻━┻ + ''' + metadata.drop_all(engine) + + +def _action_and_commit(obj, action): + '''Adds/deletes the instance obj to/from the session based on the action. + ''' + action(obj) + session.commit() + + class GhostBase(Base): '''The GhostBase class extends the declarative Base class.''' __abstract__ = True - def __repr__(self, attrs=None): - return '''<{obj.__name__}( - ', '.join(['='.join([attr, getattr(obj, attr)] for attr in attrs)]) - )>'''.format(obj=self) + def __str__(self, attrs): + return '''<{0}({1})>'''.format(self.__class__.__name__, ', '.join([ + '='.join([attr, str(getattr(self, attr, ''))]) for attr in attrs])) + + @classmethod + def _get_instance(cls, **kwargs): + '''Returns the first instance of cls with attributes matching **kwargs. + ''' + return session.query(cls).filter_by(**kwargs).first() @classmethod def get_or_create(cls, **kwargs): @@ -31,27 +58,26 @@ def get_or_create(cls, **kwargs): If a record matching the instance already exists in the database, then return it, otherwise create a new record. ''' - q = session.query(cls).filter_by(**kwargs).first() + q = cls._get_instance(**kwargs) if q: return q q = cls(**kwargs) - session.add(q) - session.commit() + _action_and_commit(q, session.add) return q + # TODO (jsa): _traverse_report only needs to return the ID for an update @classmethod - def update(cls, snapshot, **kwargs): + def update(cls, **kwargs): ''' If a record matching the instance id already exists in the database, update it. If a record matching the instance id does not already exist, create a new record. 
         '''
-        q = session.query(cls).filter_by(**kwargs).first()
+        q = cls._get_instance(**{'id': kwargs['id']})
         if q:
-            for k in snapshot:
-                q.__dict__.update(k=snapshot[k])
-            session.add(q)
-            session.commit()
+            for k, v in kwargs.items():
+                setattr(q, k, v)
+            _action_and_commit(q, session.add)
         else:
             cls.get_or_create(**kwargs)
 
@@ -60,10 +86,9 @@ def delete(cls, **kwargs):
         '''
         If a record matching the instance id exists in the database, delete it.
         '''
-        q = session.query(cls).filter_by(**kwargs).first()
+        q = cls._get_instance(**kwargs)
         if q:
-            session.delete(q)
-            session.commit()
+            _action_and_commit(q, session.delete)
 
 
 class ResponseClassLegacyAccessor(object):
@@ -73,81 +98,75 @@ def __init__(self, response_class, column, accessor):
         self.column = column
         self.accessor = accessor
 
+    def _get_instance(self, **kwargs):
+        '''Return the first existing instance of the response record.
+        '''
+        return session.query(self.response_class).filter_by(**kwargs).first()
+
     def get_or_create_from_legacy_response(self, response, **kwargs):
-        response_cls = self.response_class(**kwargs)
-        # Return the existing or newly created response record
-        response_cls = response_cls.get_or_create(**kwargs)
-        # If the record does not have a response, add it
+        '''
+        If a record matching the instance does not already exist in the
+        database, then create a new record.
+        '''
+        response_cls = self.response_class(**kwargs).get_or_create(**kwargs)
         if not getattr(response_cls, self.column):
             setattr(response_cls, self.column, self.accessor(response))
-            session.add(response_cls)
-            session.commit()
+            _action_and_commit(response_cls, session.add)
 
     def update(self, response, **kwargs):
-        response_cls = self.response_class(**kwargs)
-        # Return the existing response record
-        response_cls = session.query(
-            response_cls.__class__).filter_by(**kwargs).first()
+        '''
+        If a record matching the instance already exists in the database, update
+        it, else create a new record.
+        '''
+        response_cls = self._get_instance(**kwargs)
         if response_cls:
             setattr(response_cls, self.column, self.accessor(response))
-            session.add(response_cls)
-            session.commit()
+            _action_and_commit(response_cls, session.add)
         else:
-            response_cls.get_or_create_from_legacy_response(response, **kwargs)
+            self.get_or_create_from_legacy_response(response, **kwargs)
 
     def delete(self, response, **kwargs):
-        response_cls = self.response_class(**kwargs)
-        # Return the existing response record
-        response_cls = session.query(
-            response_cls.__class__).filter_by(**kwargs).first()
+        '''
+        If a record matching the instance id exists in the database, delete it.
+        '''
+        response_cls = self._get_instance(**kwargs)
         if response_cls:
-            session.delete(response_cls)
-            session.commit()
+            _action_and_commit(response_cls, session.delete)
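+
+# Illustrative sketch only: an accessor pairs a response model with the
+# column it populates and a callable that pulls the answer out of the raw
+# Reporter response dict. The 'answeredOptions' key below is an assumption
+# based on the comments in the pre-1.0.0 models; at runtime the appropriate
+# accessor is obtained via datums.pipeline.codec.get_response_accessor().
+#
+#     ResponseClassLegacyAccessor(
+#         response_class=BooleanResponse, column='boolean_response',
+#         accessor=lambda response: response.get('answeredOptions'))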
 
 
 class LocationResponseClassLegacyAccessor(ResponseClassLegacyAccessor):
 
     def __init__(
-            self, response_class, column, accessor, venue_column, venue_accessor):
+            self, response_class, column,
+            accessor, venue_column, venue_accessor):
         super(LocationResponseClassLegacyAccessor, self).__init__(
             response_class, column, accessor)
         self.venue_column = venue_column
         self.venue_accessor = venue_accessor
 
     def get_or_create_from_legacy_response(self, response, **kwargs):
-        ResponseClassLegacyAccessor.get_or_create_from_legacy_response(
-            self, response, **kwargs)
-        response_cls = self.response_class(**kwargs)
-        # Return the existing or newly created response record
-        response_cls = response_cls.get_or_create(**kwargs)
-        # If the record does not have a response, add it
+        '''
+        If a record matching the instance does not already exist in the
+        database, then create a new record.
+        '''
+        response_cls = self.response_class(**kwargs).get_or_create(**kwargs)
         if not getattr(response_cls, self.column):
             setattr(response_cls, self.column, self.accessor(response))
+            _action_and_commit(response_cls, session.add)
         if not getattr(response_cls, self.venue_column):
             setattr(
                 response_cls, self.venue_column, self.venue_accessor(response))
-
-        session.add(response_cls)
-        session.commit()
+            _action_and_commit(response_cls, session.add)
 
     def update(self, response, **kwargs):
-        response_cls = self.response_class(**kwargs)
-        # Return the existing response record
-        response_cls = session.query(
-            response_cls.__class__).filter_by(**kwargs).first()
+        '''
+        If a record matching the instance already exists in the database, update
+        both the column and venue column attributes, else create a new record.
+        '''
+        response_cls = super(
+            LocationResponseClassLegacyAccessor, self)._get_instance(**kwargs)
         if response_cls:
             setattr(response_cls, self.column, self.accessor(response))
             setattr(
                 response_cls, self.venue_column, self.venue_accessor(response))
-            session.add(response_cls)
-            session.commit()
-
-
-def database_setup(engine):
-    # Set up the database
-    metadata.create_all(engine)
-
-
-def database_teardown(engine):
-    # BURN IT TO THE GROUND
-    metadata.drop_all(engine)
+            _action_and_commit(response_cls, session.add)
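+
+# Example (illustrative; the prompt text is hypothetical): get_or_create
+# returns the existing record when one matches, so calling it twice with the
+# same kwargs refers to the same row rather than creating a duplicate:
+#
+#     q1 = Question.get_or_create(type='numeric', prompt='How many coffees?')
+#     q2 = Question.get_or_create(type='numeric', prompt='How many coffees?')
+#     assert q1 is q2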
+ ''' + response_cls = super( + LocationResponseClassLegacyAccessor, self)._get_instance(**kwargs) if response_cls: setattr(response_cls, self.column, self.accessor(response)) setattr( response_cls, self.venue_column, self.venue_accessor(response)) - session.add(response_cls) - session.commit() - - -def database_setup(engine): - # Set up the database - metadata.create_all(engine) - - -def database_teardown(engine): - # BURN IT TO THE GROUND - metadata.drop_all(engine) + _action_and_commit(response_cls, session.add) diff --git a/datums/models/questions.py b/datums/models/questions.py index 280222a..d07755b 100644 --- a/datums/models/questions.py +++ b/datums/models/questions.py @@ -1,8 +1,10 @@ +# -*- coding: utf-8 -*- + +from base import GhostBase, session from sqlalchemy import Column, ForeignKey from sqlalchemy import Integer, String from sqlalchemy.orm import relationship -from base import GhostBase, session __all__ = ['Question'] @@ -17,7 +19,6 @@ class Question(GhostBase): responses = relationship('Response', cascade='save-update, merge, delete') - def __repr__(self): - return ''''''.format(self) + def __str__(self): + attrs = ['id', 'type', 'prompt'] + super(Question, self).__str__(attrs) diff --git a/datums/models/reports.py b/datums/models/reports.py new file mode 100644 index 0000000..b87cb97 --- /dev/null +++ b/datums/models/reports.py @@ -0,0 +1,170 @@ +# -*- coding: utf-8 -*- + +from base import GhostBase +from sqlalchemy import Column, ForeignKey +from sqlalchemy import Integer, Numeric, String, DateTime, Boolean +from sqlalchemy.orm import relationship, backref +from sqlalchemy_utils import UUIDType + + +__all__ = ['Report', 'AltitudeReport', 'AudioReport', + 'LocationReport', 'PlacemarkReport', 'WeatherReport'] + + +class Report(GhostBase): + + __tablename__ = 'reports' + + id = Column(UUIDType, primary_key=True) + background = Column(Numeric) + battery = Column(Numeric) + connection = Column(Numeric) + created_at = Column(DateTime(timezone=False)) + draft = Column(Boolean) + report_impetus = Column(Integer) + section_identifier = Column(String) + steps = Column(Integer) + + responses = relationship( + 'Response', backref=backref('Report', order_by=id), passive_deletes=True) + altitude_report = relationship( + 'AltitudeReport', backref=backref('Report'), passive_deletes=True) + audio_report = relationship( + 'AudioReport', backref=backref('Report'), passive_deletes=True) + location_report = relationship( + 'LocationReport', backref=backref('Report'), passive_deletes=True) + weather_report = relationship( + 'WeatherReport', backref=backref('Report'), passive_deletes=True) + + def __str__(self): + attrs = ['id', 'created_at', 'report_impetus', 'battery', 'steps', + 'section_identifier', 'background', 'connection', 'draft'] + super(report, self).__str__(attrs) + +class AltitudeReport(GhostBase): + + __tablename__ = 'altitude_reports' + + id = Column(UUIDType, primary_key=True) + floors_ascended = Column(Numeric) + floors_descended = Column(Numeric) + gps_altitude_from_location = Column(Numeric) + gps_altitude_raw = Column(Numeric) + pressure = Column(Numeric) + pressure_adjusted = Column(Numeric) + report_id = Column( + UUIDType, ForeignKey('reports.id', ondelete='CASCADE'), nullable=False) + + def __str__(self): + attrs = ['id', 'report_id', 'average', 'peak'] + super(AltitudeReport, self).__str__(attrs) + + +class AudioReport(GhostBase): + + __tablename__ = 'audio_reports' + + id = Column(UUIDType, primary_key=True) + average = Column(Numeric) + peak = Column(Numeric) + 
report_id = Column( + UUIDType, ForeignKey('reports.id', ondelete='CASCADE'), nullable=False) + + def __str__(self): + attrs = ['id', 'report_id', 'average', 'peak'] + super(AudioReport, self).__str__(attrs) + + +class LocationReport(GhostBase): + + __tablename__ = 'location_reports' + + id = Column(UUIDType, primary_key=True) + altitude = Column(Numeric) + course = Column(Numeric) + created_at = Column(DateTime(timezone=False)) + horizontal_accuracy = Column(Numeric) + latitude = Column(Numeric) + longitude = Column(Numeric) + report_id = Column( + UUIDType, ForeignKey('reports.id', ondelete='CASCADE'), nullable=False) + speed = Column(Numeric) + vertical_accuracy = Column(Numeric) + + placemark = relationship('PlacemarkReport', backref=backref( + 'location_reports', order_by=id), passive_deletes=True) + + def __str__(self): + attrs = ['id', 'report_id', 'created_at', 'latitude', + 'longitudue', 'altitude', 'speed', 'course', + 'vertical_accuracy', 'horizontal_accuracy'] + super(LocationReport, self).__str__(attrs) + + +class PlacemarkReport(GhostBase): + + __tablename__ = 'placemark_reports' + + id = Column(UUIDType, primary_key=True) + address = Column(String) + city = Column(String) + country = Column(String) + county = Column(String) + inland_water = Column(String) + location_report_id = Column( + UUIDType, ForeignKey('location_reports.id', ondelete='CASCADE'), + nullable=False) + neighborhood = Column(String) + postal_code = Column(String) + region = Column(String) + state = Column(String) + street_name = Column(String) + street_number = Column(String) + + def __str__(self): + attrs = ['id', 'location_report_id', 'street_number', + 'street_name', 'address', 'neighborhood', 'city', 'county', + 'state', 'country', 'postal_code', 'region'] + super(PlacemarkReport, self).__str__(attrs) + + +class WeatherReport(GhostBase): + + __tablename__ = 'weather_reports' + + id = Column(UUIDType, primary_key=True) + dewpoint_celsius = Column(Numeric) + feels_like_celsius = Column(Numeric) + feels_like_fahrenheit = Column(Numeric) + latitude = Column(Numeric) + longitude = Column(Numeric) + precipitation_in = Column(Numeric) + precipitation_mm = Column(Numeric) + pressure_in = Column(Numeric) + pressure_mb = Column(Numeric) + relative_humidity = Column(String) + report_id = Column( + UUIDType, ForeignKey('reports.id', ondelete='CASCADE'), nullable=False) + station_id = Column(String) + temperature_celsius = Column(Numeric) + temperature_fahrenheit = Column(Numeric) + uv = Column(Numeric) + visibility_km = Column(Numeric) + visibility_mi = Column(Numeric) + weather = Column(String) + wind_degrees = Column(Integer) + wind_direction = Column(String) + wind_gust_kph = Column(Numeric) + wind_gust_mph = Column(Numeric) + wind_kph = Column(Numeric) + wind_mph = Column(Numeric) + + def __str__(self): + attrs = ['id', 'report_id', 'station_id', 'latitude', + 'longitude', 'weather', 'temperature_fahrenheit', + 'temperature_celsius', 'feels_like_fahrenheit', + 'feels_like_celsius', 'wind_direction', 'wind_degrees', + 'wind_mph', 'wind_kph', 'wind_gust_mph', 'wind_gust_kph', + 'relative_humidity', 'precipitation_in', 'precipitation_mm', + 'dewpoint_celsius', 'visibility_mi', 'visibility_km', 'uv'] + super(WeatherReport, self).__str__(attrs) diff --git a/datums/models/responses.py b/datums/models/responses.py index deed72b..fef2fec 100644 --- a/datums/models/responses.py +++ b/datums/models/responses.py @@ -1,10 +1,12 @@ +# -*- coding: utf-8 -*- + +from base import GhostBase, ResponseClassLegacyAccessor from 
sqlalchemy import Column, ForeignKey from sqlalchemy import Boolean, Float, Integer, String -from sqlalchemy.orm import backref, relationship from sqlalchemy.dialects import postgresql +from sqlalchemy.orm import backref, relationship from sqlalchemy_utils import UUIDType -from base import GhostBase, ResponseClassLegacyAccessor __all__ = ['Response', 'BooleanResponse', 'NumericResponse', 'LocationResponse', 'MultiResponse', 'NoteResponse', 'PeopleResponse', 'TokenResponse'] @@ -15,24 +17,23 @@ class Response(GhostBase): __tablename__ = 'responses' id = Column(Integer, primary_key=True) - snapshot_id = Column(UUIDType, ForeignKey('snapshots.id')) - question_id = Column(Integer, ForeignKey('questions.id')) + question_id = Column( + Integer, ForeignKey('questions.id', ondelete='CASCADE')) + report_id = Column(UUIDType, ForeignKey('reports.id', ondelete='CASCADE')) type = Column(String) __mapper_args__ = { 'polymorphic_on': type } - def __repr__(self): - return '''<{self.__name__}( - id={self.id}, snapshot_id={self.snapshot_id}, - question_id={self.question_id}, type={self.type} - )>'''.format(self) + def __str__(self): + attrs = ['id', 'report_id', 'question_id', 'type'] + super(Response, self).__str__(attrs) class BooleanResponse(Response): - boolean_response = Column(Boolean) # answeredOptions + boolean_response = Column(Boolean) __mapper_args__ = { 'polymorphic_identity': 'boolean', @@ -41,8 +42,8 @@ class BooleanResponse(Response): class LocationResponse(Response): - location_response = Column(String) # text - venue_id = Column(String, nullable=True) # foursquareVenueId + location_response = Column(String) + venue_id = Column(String, nullable=True) __mapper_args__ = { 'polymorphic_identity': 'location', @@ -51,7 +52,7 @@ class LocationResponse(Response): class MultiResponse(Response): - multi_response = Column(postgresql.ARRAY(String)) # answeredOptions + multi_response = Column(postgresql.ARRAY(String)) __mapper_args__ = { 'polymorphic_identity': 'multi', @@ -69,7 +70,7 @@ class NoteResponse(Response): class NumericResponse(Response): - numeric_response = Column(Float) # numericResponse + numeric_response = Column(Float) __mapper_args__ = { 'polymorphic_identity': 'numeric', @@ -78,7 +79,7 @@ class NumericResponse(Response): class PeopleResponse(Response): - people_response = Column(postgresql.ARRAY(String)) # text + people_response = Column(postgresql.ARRAY(String)) __mapper_args__ = { 'polymorphic_identity': 'people', @@ -87,7 +88,7 @@ class PeopleResponse(Response): class TokenResponse(Response): - tokens_response = Column(postgresql.ARRAY(String)) # text + tokens_response = Column(postgresql.ARRAY(String)) __mapper_args__ = { 'polymorphic_identity': 'tokens', diff --git a/datums/models/snapshots.py b/datums/models/snapshots.py deleted file mode 100644 index c40fe2e..0000000 --- a/datums/models/snapshots.py +++ /dev/null @@ -1,143 +0,0 @@ -from sqlalchemy import Column, ForeignKey -from sqlalchemy import Integer, Numeric, String, DateTime, Boolean -from sqlalchemy.orm import relationship, backref -from sqlalchemy_utils import UUIDType - -from base import GhostBase - -__all__ = ['Snapshot', 'AudioSnapshot', 'LocationSnapshot', - 'PlacemarkSnapshot', 'WeatherSnapshot'] - - -class Snapshot(GhostBase): - - __tablename__ = 'snapshots' - - id = Column(UUIDType, primary_key=True) # uniqueIdentifier - created_at = Column(DateTime) # date - report_impetus = Column(Integer) # reportImpetus - battery = Column(Numeric) # battery - steps = Column(Integer) # steps - section_identifier = 
Column(String) # sectionIdentifier - background = Column(Numeric) # background - connection = Column(Numeric) # connection - draft = Column(Boolean) # draft - - responses = relationship( - 'Response', backref=backref('snapshot', order_by=id)) - audio_snapshot = relationship( - 'AudioSnapshot', backref=backref('snapshot')) - location_snapshot = relationship( - 'LocationSnapshot', backref=backref('snapshot')) - weather_snapshot = relationship( - 'WeatherSnapshot', backref=backref('snapshot')) - - def __repr__(self): - attrs = ['id', 'created_at', 'report_impetus', 'battery', 'steps', - 'section_identifier', 'background', 'connection', 'draft'] - super(self.__name__).__repr__(attrs) - -class AudioSnapshot(GhostBase): - - __tablename__ = 'audio_snapshots' - - id = Column(UUIDType, primary_key=True) # uniqueIdentifier - snapshot_id = Column(UUIDType, ForeignKey( - 'snapshots.id')) # uniqueIdentifier - average = Column(Numeric) # avg - peak = Column(Numeric) # peak - - def __repr__(self): - attrs = ['id', 'snapshot_id', 'average', 'peak'] - super(self.__name__).__repr__(attrs) - - -class LocationSnapshot(GhostBase): - - __tablename__ = 'location_snapshots' - - id = Column(UUIDType, primary_key=True) # uniqueIdentifier - snapshot_id = Column(UUIDType, ForeignKey( - 'snapshots.id')) # uniqueIdentifier - created_at = Column(DateTime) # timestamp - latitude = Column(Numeric) # latitude - longitude = Column(Numeric) # longitude - altitude = Column(Numeric) # altitude - speed = Column(Numeric) # speed - course = Column(Numeric) # course - vertical_accuracy = Column(Numeric) # verticalAccuracy - horizontal_accuracy = Column(Numeric) # horizontalAccuracy - - placemark = relationship( - 'PlacemarkSnapshot', backref=backref('location_snapshots', order_by=id)) - - def __repr__(self): - attrs = ['id', 'snapshot_id', 'created_at', 'latitude', - 'longitudue', 'altitude', 'speed', 'course', - 'vertical_accuracy', 'horizontal_accuracy'] - super(self.__name__).__repr__(attrs) - - -class PlacemarkSnapshot(GhostBase): - - __tablename__ = 'placemark_snapshots' - - id = Column(UUIDType, primary_key=True) # uniqueIdentifier - location_snapshot_id = Column(UUIDType, ForeignKey( - 'location_snapshots.id')) # uniqueIdentifier - street_number = Column(String) # subThoroughfare - street_name = Column(String) # thoroughfare - address = Column(String) # name - neighborhood = Column(String) # subLocality - city = Column(String) # locality - county = Column(String) # subAdministrativeArea - state = Column(String) # administrativeArea - country = Column(String) # country - postal_code = Column(String) # postalCode - region = Column(String) # region - - def __repr__(self): - attrs = ['id', 'location_snapshot_id', 'street_number', - 'street_name', 'address', 'neighborhood', 'city', 'county', - 'state', 'country', 'postal_code', 'region'] - super(self.__name__).__repr__(attrs) - - -class WeatherSnapshot(GhostBase): - - __tablename__ = 'weather_snapshots' - - id = Column(UUIDType, primary_key=True) # uniqueIdentifier - snapshot_id = Column(UUIDType, ForeignKey( - 'snapshots.id')) # uniqueIdentifier - station_id = Column(String) # stationID - latitude = Column(Numeric) # latitude - longitude = Column(Numeric) # longitude - weather = Column(String) # weather - temperature_fahrenheit = Column(Numeric) # tempF - temperature_celsius = Column(Numeric) # tempC - feels_like_fahrenheit = Column(Numeric) # feelslikeF - feels_like_celsius = Column(Numeric) # feelslikeC - wind_direction = Column(String) # windDirection - wind_degrees = 
Column(Integer) # windDegrees - wind_mph = Column(Numeric) # windMPH - wind_kph = Column(Numeric) # windKPH - wind_gust_mph = Column(Numeric) # windGustMPH - wind_gust_kph = Column(Numeric) # windGustKPH - relative_humidity = Column(String) # relativeHumidity - precipitation_in = Column(Numeric) # precipTodayIn - precipitation_mm = Column(Numeric) # precipTodayMetric - dewpoint_celsius = Column(Numeric) # dewpointC - visibility_mi = Column(Numeric) # visibilityMi - visibility_km = Column(Numeric) # visibilityKM - uv = Column(Numeric) # uv - - def __repr__(self): - attrs = ['id', 'snapshot_id', 'station_id', 'latitude', - 'longitude', 'weather', 'temperature_fahrenheit', - 'temperature_celsius', 'feels_like_fahrenheit', - 'feels_like_celsius', 'wind_direction', 'wind_degrees', - 'wind_mph', 'wind_kph', 'wind_gust_mph', 'wind_gust_kph', - 'relative_humidity', 'precipitation_in', 'precipitation_mm', - 'dewpoint_celsius', 'visibility_mi', 'visibility_km', 'uv'] - super(self.__name__).__repr__(attrs) diff --git a/datums/pipeline/__init__.py b/datums/pipeline/__init__.py index 83a9975..89502e0 100644 --- a/datums/pipeline/__init__.py +++ b/datums/pipeline/__init__.py @@ -1 +1,168 @@ -__all__ = ['add', 'codec', 'delete', 'update'] \ No newline at end of file +# -*- coding: utf-8 -*- + +import codec +import json +import mappers +import uuid +import warnings +from datums import models + +__all__ = ['codec', 'mappers'] + + +def _traverse_report(): + pass + + +class QuestionPipeline(object): + + def __init__(self, question): + self.question = question + + self.question_dict = {'type': self.question['questionType'], + 'prompt': self.question['prompt']} + + def add(self): + models.Question.get_or_create(**self.question_dict) + + def update(self): + models.Question.update(**self.question_dict) + + def delete(self): + models.Question.delete(**self.question_dict) + + +class ResponsePipeline(object): + + def __init__(self, response, report): + self.response = response + self.report = report + + self.accessor, self.ids = codec.get_response_accessor( + self.response, self.report) + + def add(self): + self.accessor.get_or_create_from_legacy_response( + self.response, **self.ids) + + def update(self): + self.accessor.update(self.response, **self.ids) + + def delete(self): + self.accessor.delete(self.response, **self.ids) + + +class ReportPipeline(object): + + def __init__(self, report): + self.report = report + + def _report(self, action, key_mapper=mappers._report_key_mapper): + '''Return the dictionary of **kwargs with the correct datums attribute + names and data types for the top level of the report, and return the + nested levels separately. + ''' + _top_level = [ + k for k, v in self.report.items() if not isinstance(v, dict)] + _nested_level = [ + k for k, v in self.report.items() if isinstance(v, dict)] + top_level_dict = {} + nested_levels_dict = {} + for key in _top_level: + try: + if key == 'date' or key == 'timestamp': + item = mappers._key_type_mapper[key]( + str(self.report[key]), **{'ignoretz': True}) + else: + item = mappers._key_type_mapper[key](str( + self.report[key]) if key != 'draft' else self.report[key]) + except KeyError: + item = self.report[key] + finally: + try: + top_level_dict[key_mapper[key]] = item + except KeyError: + warnings.warn(''' + {0} is not currently supported by datums and will be ignored. + Would you consider submitting an issue to add support? 
diff --git a/datums/pipeline/__init__.py b/datums/pipeline/__init__.py
index 83a9975..89502e0 100644
--- a/datums/pipeline/__init__.py
+++ b/datums/pipeline/__init__.py
@@ -1 +1,168 @@
-__all__ = ['add', 'codec', 'delete', 'update']
\ No newline at end of file
+# -*- coding: utf-8 -*-
+
+import codec
+import json
+import mappers
+import uuid
+import warnings
+from datums import models
+
+__all__ = ['codec', 'mappers']
+
+
+def _traverse_report():
+    pass
+
+
+class QuestionPipeline(object):
+
+    def __init__(self, question):
+        self.question = question
+
+        self.question_dict = {'type': self.question['questionType'],
+                              'prompt': self.question['prompt']}
+
+    def add(self):
+        models.Question.get_or_create(**self.question_dict)
+
+    def update(self):
+        models.Question.update(**self.question_dict)
+
+    def delete(self):
+        models.Question.delete(**self.question_dict)
+
+
+class ResponsePipeline(object):
+
+    def __init__(self, response, report):
+        self.response = response
+        self.report = report
+
+        self.accessor, self.ids = codec.get_response_accessor(
+            self.response, self.report)
+
+    def add(self):
+        self.accessor.get_or_create_from_legacy_response(
+            self.response, **self.ids)
+
+    def update(self):
+        self.accessor.update(self.response, **self.ids)
+
+    def delete(self):
+        self.accessor.delete(self.response, **self.ids)
+
+
+class ReportPipeline(object):
+
+    def __init__(self, report):
+        self.report = report
+
+    def _report(self, action, key_mapper=mappers._report_key_mapper):
+        '''Return the dictionary of **kwargs with the correct datums attribute
+        names and data types for the top level of the report, and return the
+        nested levels separately.
+        '''
+        _top_level = [
+            k for k, v in self.report.items() if not isinstance(v, dict)]
+        _nested_level = [
+            k for k, v in self.report.items() if isinstance(v, dict)]
+        top_level_dict = {}
+        nested_levels_dict = {}
+        for key in _top_level:
+            try:
+                if key == 'date' or key == 'timestamp':
+                    item = mappers._key_type_mapper[key](
+                        str(self.report[key]), **{'ignoretz': True})
+                else:
+                    item = mappers._key_type_mapper[key](str(
+                        self.report[key]) if key != 'draft' else self.report[key])
+            except KeyError:
+                item = self.report[key]
+            finally:
+                try:
+                    top_level_dict[key_mapper[key]] = item
+                except KeyError:
+                    warnings.warn('''
+                        {0} is not currently supported by datums and will be ignored.
+                        Would you consider submitting an issue to add support?
+                        https://www.github.com/thejunglejane/datums/issues
+                        '''.format(key))
+        for key in _nested_level:
+            nested_levels_dict[key] = self.report[key]
+            # Add the parent report ID
+            nested_levels_dict[key][
+                'reportUniqueIdentifier'] = mappers._key_type_mapper[
+                    'uniqueIdentifier'](str(self.report['uniqueIdentifier']))
+            if key == 'placemark':
+                # Add the parent location report UUID
+                nested_levels_dict[key][
+                    'locationUniqueIdentifier'] = nested_levels_dict[key].pop(
+                        'reportUniqueIdentifier')
+            # Create UUID for altitude report if there is not one and the action
+            # is get_or_create, else delete the altitude report from the nested
+            # levels and warn that it will not be updated
+            if 'uniqueIdentifier' not in nested_levels_dict[key]:
+                if action.__func__.func_name == 'get_or_create':
+                    nested_levels_dict[key]['uniqueIdentifier'] = uuid.uuid4()
+                else:
+                    del nested_levels_dict[key]
+                    warnings.warn('''
+                        No uniqueIdentifier found for AltitudeReport in {0}.
+                        Existing altitude report will not be updated.
+                        '''.format(self.report['uniqueIdentifier']))
+        return top_level_dict, nested_levels_dict
+
+    def add(self, action=models.Report.get_or_create,
+            key_mapper=mappers._report_key_mapper):
+        top_level, nested_levels = self._report(action, key_mapper)
+        action(**top_level)
+        for nested_level in nested_levels:
+            try:
+                key_mapper = mappers._report_key_mapper[nested_level]
+            except KeyError:
+                key_mapper = mappers._report_key_mapper[
+                    'location'][nested_level]
+            ReportPipeline(nested_levels[nested_level]).add(
+                mappers._model_type_mapper[
+                    nested_level].get_or_create, key_mapper)
+
+    def update(self, action=models.Report.update,
+               key_mapper=mappers._report_key_mapper):
+        top_level, nested_levels = self._report(action, key_mapper)
+        action(**top_level)
+        for nested_level in nested_levels:
+            try:
+                key_mapper = mappers._report_key_mapper[nested_level]
+            except KeyError:
+                key_mapper = mappers._report_key_mapper[
+                    'location'][nested_level]
+            ReportPipeline(nested_levels[nested_level]).update(
+                mappers._model_type_mapper[nested_level].update, key_mapper)
+
+    def delete(self):
+        models.Report.delete(**{'id': mappers._key_type_mapper[
+            'uniqueIdentifier'](str(self.report['uniqueIdentifier']))})
+
+
+class SnapshotPipeline(object):
+
+    def __init__(self, snapshot):
+        self.snapshot = snapshot
+
+        self.report = self.snapshot.copy()
+        self.responses = self.report.pop('responses')
+
+        _ = self.report.pop('photoSet', None)  # TODO (jsa): add support
+
+    def add(self):
+        ReportPipeline(self.report).add()
+        for response in self.responses:
+            ResponsePipeline(response, self.report).add()
+
+    def update(self):
+        ReportPipeline(self.report).update()
+        for response in self.responses:
+            ResponsePipeline(response, self.report).update()
+
+    def delete(self):
+        ReportPipeline(self.report).delete()
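Taken together, these three classes replace the function-per-table `add`, `update`, and `delete` modules deleted below. A sketch of driving the new pipeline by hand, assuming a Reporter export shaped like the ones Reporter writes to Dropbox (the path is illustrative):

```python
# Sketch: mirrors what the datums CLI presumably does under the hood
# with the new class-based pipeline; the export path is illustrative.
import json
from datums import pipeline

with open('/path/to/reporter/folder/2015-05-01-reporter-export.json') as f:
    day = json.load(f)

# Add questions first, because responses reference them.
for question in day['questions']:
    pipeline.QuestionPipeline(question).add()
# Each snapshot adds its report, nested reports, and responses.
for snapshot in day['snapshots']:
    pipeline.SnapshotPipeline(snapshot).add()
```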
diff --git a/datums/pipeline/add.py b/datums/pipeline/add.py
deleted file mode 100644
index 67dac9c..0000000
--- a/datums/pipeline/add.py
+++ /dev/null
@@ -1,121 +0,0 @@
-from dateutil.parser import parse
-import uuid
-import json
-
-from datums import models
-import codec
-
-
-def add_question(question):
-    question_dict = {'type': question['questionType'],
-                     'prompt': question['prompt']}
-    models.Question.get_or_create(**question_dict)
-
-
-def add_snapshot(snapshot):
-    snapshot_dict = {
-        'id': uuid.UUID(snapshot['uniqueIdentifier']),
-        'created_at': parse(snapshot.get('date')),
-        'report_impetus': snapshot.get('reportImpetus'),
-        'battery': snapshot.get('battery'), 'steps': snapshot.get('steps'),
-        'section_identifier': snapshot.get('sectionIdentifier'),
-        'background': snapshot.get('background'),
-        'connection': snapshot.get('connection'),
-        'draft': bool(snapshot.get('draft'))}
-    models.Snapshot.get_or_create(**snapshot_dict)
-
-
-def add_audio_snapshot(snapshot):
-    audio_snapshot = snapshot['audio']
-    audio_snapshot_dict = {
-        'id': uuid.UUID(audio_snapshot['uniqueIdentifier']),
-        'snapshot_id': uuid.UUID(snapshot['uniqueIdentifier']),
-        'average': audio_snapshot.get('avg'),
-        'peak': audio_snapshot.get('peak')}
-    models.AudioSnapshot.get_or_create(**audio_snapshot_dict)
-
-
-def add_location_snapshot(snapshot):
-    location_snapshot = snapshot.get('location')
-    if location_snapshot is not None:
-        location_snapshot_dict = {
-            'id': uuid.UUID(location_snapshot['uniqueIdentifier']),
-            'snapshot_id': uuid.UUID(snapshot['uniqueIdentifier']),
-            'created_at': parse(str(location_snapshot.get('timestamp'))),
-            'latitude': location_snapshot.get('latitude'),
-            'longitude': location_snapshot.get('longitude'),
-            'altitude': location_snapshot.get('altitude'),
-            'speed': location_snapshot.get('speed'),
-            'course': location_snapshot.get('course'),
-            'vertical_accuracy': location_snapshot.get('verticalAccuracy'),
-            'horizontal_accuracy': location_snapshot.get('horizontalAccuracy')}
-        models.LocationSnapshot.get_or_create(**location_snapshot_dict)
-
-
-def add_placemark_snapshot(snapshot):
-    location_snapshot = snapshot.get('location')
-    if location_snapshot is not None:
-        placemark_snapshot = location_snapshot.get('placemark')
-        if placemark_snapshot is not None:
-            placemark_snapshot_dict = {
-                'id': uuid.UUID(placemark_snapshot.get('uniqueIdentifier')),
-                'location_snapshot_id': uuid.UUID(
-                    snapshot['location']['uniqueIdentifier']),
-                'street_number': placemark_snapshot.get('subThoroughfare'),
-                'street_name': placemark_snapshot.get('thoroughfare'),
-                'address': placemark_snapshot.get('name'),
-                'neighborhood': placemark_snapshot.get('subLocality'),
-                'city': placemark_snapshot.get('locality'),
-                'county': placemark_snapshot.get('subAdministrativeArea'),
-                'state': placemark_snapshot.get('administrativeArea'),
-                'country': placemark_snapshot.get('country'),
-                'postal_code': placemark_snapshot.get('postalCode'),
-                'region': placemark_snapshot.get('region')}
-            models.PlacemarkSnapshot.get_or_create(**placemark_snapshot_dict)
-
-
-def add_weather_snapshot(snapshot):
-    weather_snapshot = snapshot.get('weather')
-    if weather_snapshot is not None:
-        weather_snapshot_dict = {
-            'id': weather_snapshot['uniqueIdentifier'],
-            'snapshot_id': snapshot['uniqueIdentifier'],
-            'station_id': weather_snapshot.get('stationID'),
-            'latitude': weather_snapshot.get('latitude'),
-            'longitude': weather_snapshot.get('longitude'),
-            'weather': weather_snapshot.get('weather'),
-            'temperature_fahrenheit': weather_snapshot.get('tempF'),
-            'temperature_celsius': weather_snapshot.get('tempC'),
-            'feels_like_fahrenheit': weather_snapshot.get('feelslikeF'),
-            'feels_like_celsius': weather_snapshot.get('feelslikeC'),
-            'wind_direction': weather_snapshot.get('windDirection'),
-            'wind_degrees': weather_snapshot.get('windDegrees'),
-            'wind_mph': weather_snapshot.get('windMPH'),
-            'wind_kph': weather_snapshot.get('windKPH'),
-            'wind_gust_mph': weather_snapshot.get('windGustMPH'),
-            'wind_gust_kph': weather_snapshot.get('windGustKPH'),
-            'relative_humidity': weather_snapshot.get('relativeHumidity'),
-            'precipitation_in': weather_snapshot.get('precipTodayIn'),
-            'precipitation_mm': weather_snapshot.get('precipTodayMetric'),
-            'dewpoint_celsius': weather_snapshot.get('dewpointC'),
-            'visibility_mi': weather_snapshot.get('visibilityMi'),
-            'visibility_km': weather_snapshot.get('visibilityKM'),
-            'uv': weather_snapshot.get('uv')}
-        models.WeatherSnapshot.get_or_create(**weather_snapshot_dict)
-
-
-def add_response(response, snapshot):
-    accessor, ids = codec.get_response_accessor(response, snapshot)
-    accessor.get_or_create_from_legacy_response(response, **ids)
-
-
-def add_report(snapshot):
-    # Add snapshots
-    add_snapshot(snapshot)
-    add_audio_snapshot(snapshot)
-    add_location_snapshot(snapshot)
-    add_placemark_snapshot(snapshot)
-    add_weather_snapshot(snapshot)
-    # Add responses
-    for response in snapshot['responses']:
-        add_response(response, snapshot)
diff --git a/datums/pipeline/codec.py b/datums/pipeline/codec.py
index 0399f62..2836b6f 100644
--- a/datums/pipeline/codec.py
+++ b/datums/pipeline/codec.py
@@ -1,7 +1,20 @@
+# -*- coding: utf-8 -*-
+
 from datums import models
 
 
 def human_to_boolean(human):
+    '''Convert a boolean string ('Yes' or 'No') to True or False.
+
+    PARAMETERS
+    ----------
+    human : list
+        a list containing the "human" boolean string to be converted to
+        a Python boolean object. If a non-list is passed, or if the list
+        is empty, None will be returned. Only the first element of the
+        list will be used. Anything other than 'Yes' will be considered
+        False.
+    '''
     if not isinstance(human, list) or len(human) == 0:
         return None
     if human[0].lower() == 'yes':
@@ -41,14 +54,14 @@ def human_to_boolean(human):
     (lambda x: [i.get('text') for i in x.get('textResponses', [])]))
 
 
-def get_response_accessor(response, snapshot):
+def get_response_accessor(response, report):
     # Determine the question ID and response type based on the prompt
     question_id, response_type = models.session.query(
         models.Question.id, models.Question.type).filter(
             models.Question.prompt == response['questionPrompt']).first()
 
     ids = {'question_id': question_id,  # set the question ID
-           'snapshot_id': snapshot['uniqueIdentifier']}  # set the snapshot ID
+           'report_id': report['uniqueIdentifier']}  # set the report ID
 
     # Dictionary mapping response type to response class, column, and accessor
     # mapper
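The accessor returned by `get_response_accessor` is what `ResponsePipeline` delegates to. A sketch, assuming a live database whose questions table already contains the prompt (the dicts mirror the fixtures in the new tests):

```python
# Sketch: the returned accessor knows which response class and column to
# use for the prompt; ids carries the question and report foreign keys.
from datums.pipeline import codec

response = {'questionPrompt': 'How anxious are you?',
            'numericResponse': '0'}
report = {'uniqueIdentifier': '1B7AADBF-C137-4F35-A099-D73ACE534CFC'}

accessor, ids = codec.get_response_accessor(response, report)
# ids == {'question_id': <id from the database>,
#         'report_id': report['uniqueIdentifier']}
accessor.get_or_create_from_legacy_response(response, **ids)
```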
diff --git a/datums/pipeline/delete.py b/datums/pipeline/delete.py
deleted file mode 100644
index 4e469ca..0000000
--- a/datums/pipeline/delete.py
+++ /dev/null
@@ -1,45 +0,0 @@
-import json
-
-from datums import models
-import codec
-
-
-models = [models.Snapshot, models.AudioSnapshot, models.LocationSnapshot,
-          models.PlacemarkSnapshot, models.WeatherSnapshot]
-
-
-def delete_snapshot(snapshot, model):
-    target = model.__name__.lower().strip('snapshot')
-    try:
-        ids = {'id': snapshot[target]['uniqueIdentifier']}
-    except KeyError:
-        if target == '':
-            ids = {'id': snapshot['uniqueIdentifier']}
-        elif target == 'placemark':
-            try:
-                ids = {'id': snapshot.get(
-                    'location')['placemark']['uniqueIdentifier']}
-            except KeyError:
-                pass
-    else:
-        model.delete(**ids)
-
-
-def delete_question(question):
-    question_dict = {'type': question['questionType'],
-                     'prompt': question['prompt']}
-    models.Question.delete(**question_dict)
-
-
-def delete_response(response, snapshot):
-    accessor, ids = codec.get_response_accessor(response, snapshot)
-    accessor.delete(response, **ids)
-
-
-def delete_report(snapshot):
-    # Delete snapshots
-    for model in models:
-        delete_snapshot(snapshot, model)
-    # Delete responses
-    for response in snapshot['responses']:
-        delete_response(response, snapshot)
diff --git a/datums/pipeline/mappers.py b/datums/pipeline/mappers.py
new file mode 100644
index 0000000..6b6ebee
--- /dev/null
+++ b/datums/pipeline/mappers.py
@@ -0,0 +1,107 @@
+# -*- coding: utf-8 -*-
+
+import uuid
+from dateutil.parser import parse
+from datums import models
+
+
+_model_type_mapper = {
+    'altitude': models.AltitudeReport,
+    'audio': models.AudioReport,
+    'location': models.LocationReport,
+    'placemark': models.PlacemarkReport,
+    'report': models.Report,
+    'weather': models.WeatherReport
+}
+
+_key_type_mapper = {
+    'date': parse,
+    'draft': bool,
+    'timestamp': parse,
+    'uniqueIdentifier': uuid.UUID
+}
+
+# TODO (jsa): just snakeify Reporter's attribute names and call it a day,
+# otherwise datums will break every time Reporter updates attributes
+_report_key_mapper = {
+    'altitude': {
+        'adjustedPressure': 'pressure_adjusted',
+        'floorsAscended': 'floors_ascended',
+        'floorsDescended': 'floors_descended',
+        'gpsAltitudeFromLocation': 'gps_altitude_from_location',
+        'gpsRawAltitude': 'gps_altitude_raw',
+        'pressure': 'pressure',
+        'reportUniqueIdentifier': 'report_id',  # added
+        'uniqueIdentifier': 'id'
+    },
+    'audio': {
+        'avg': 'average',
+        'peak': 'peak',
+        'reportUniqueIdentifier': 'report_id',  # added
+        'uniqueIdentifier': 'id'
+    },
+    'background': 'background',
+    'battery': 'battery',
+    'connection': 'connection',
+    'date': 'created_at',
+    'draft': 'draft',
+    'location': {
+        'altitude': 'altitude',
+        'course': 'course',
+        'horizontalAccuracy': 'horizontal_accuracy',
+        'latitude': 'latitude',
+        'longitude': 'longitude',
+        'placemark': {
+            # TODO (jsa): don't assume U.S. addresses
+            'administrativeArea': 'state',
+            'country': 'country',
+            'inlandWater': 'inland_water',
+            'locality': 'city',
+            'locationUniqueIdentifier': 'location_report_id',  # added
+            'name': 'address',
+            'postalCode': 'postal_code',
+            'region': 'region',
+            'subAdministrativeArea': 'county',
+            'subLocality': 'neighborhood',
+            'subThoroughfare': 'street_number',
+            'thoroughfare': 'street_name',
+            'uniqueIdentifier': 'id'
+        },
+        'reportUniqueIdentifier': 'report_id',  # added
+        'speed': 'speed',
+        'timestamp': 'created_at',
+        'uniqueIdentifier': 'id',
+        'verticalAccuracy': 'vertical_accuracy'
+    },
+    'reportImpetus': 'report_impetus',
+    'sectionIdentifier': 'section_identifier',
+    'steps': 'steps',
+    'uniqueIdentifier': 'id',
+    'weather': {
+        'dewpointC': 'dewpoint_celsius',
+        'feelslikeC': 'feels_like_celsius',
+        'feelslikeF': 'feels_like_fahrenheit',
+        'latitude': 'latitude',
+        'longitude': 'longitude',
+        'precipTodayIn': 'precipitation_in',
+        'precipTodayMetric': 'precipitation_mm',
+        'pressureIn': 'pressure_in',
+        'pressureMb': 'pressure_mb',
+        'relativeHumidity': 'relative_humidity',
+        'reportUniqueIdentifier': 'report_id',  # added
+        'stationID': 'station_id',
+        'tempC': 'temperature_celsius',
+        'tempF': 'temperature_fahrenheit',
+        'uniqueIdentifier': 'id',
+        'uv': 'uv',
+        'visibilityKM': 'visibility_km',
+        'visibilityMi': 'visibility_mi',
+        'weather': 'weather',
+        'windDegrees': 'wind_degrees',
+        'windDirection': 'wind_direction',
+        'windGustKPH': 'wind_gust_kph',
+        'windGustMPH': 'wind_gust_mph',
+        'windKPH': 'wind_kph',
+        'windMPH': 'wind_mph'
+    }
+}
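A quick illustration of how the two mappers translate Reporter's camelCase keys into datums column names and Python types, using entries from the tables above:

```python
# Sketch: look up datums column names and coerce raw Reporter values.
from datums.pipeline import mappers

mappers._report_key_mapper['weather']['tempF']                   # 'temperature_fahrenheit'
mappers._report_key_mapper['location']['placemark']['locality']  # 'city'
mappers._key_type_mapper['draft'](0)                             # False
mappers._key_type_mapper['uniqueIdentifier'](
    '1B7AADBF-C137-4F35-A099-D73ACE534CFC')                      # a uuid.UUID instance
```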
diff --git a/datums/pipeline/update.py b/datums/pipeline/update.py
deleted file mode 100644
index bf83229..0000000
--- a/datums/pipeline/update.py
+++ /dev/null
@@ -1,45 +0,0 @@
-import json
-
-from datums import models
-import codec
-
-
-models = [models.Snapshot, models.AudioSnapshot, models.LocationSnapshot,
-          models.PlacemarkSnapshot, models.WeatherSnapshot]
-
-
-def update_snapshot(snapshot, model):
-    target = model.__name__.lower().strip('snapshot')
-    try:
-        ids = {'id': snapshot[target]['uniqueIdentifier']}
-    except KeyError:
-        if target == '':
-            ids = {'id': snapshot['uniqueIdentifier']}
-        elif target == 'placemark':
-            try:
-                ids = {'id': snapshot.get(
-                    'location')['placemark']['uniqueIdentifier']}
-            except KeyError:
-                pass
-    else:
-        model.update(snapshot, **ids)
-
-
-def update_question(question):
-    question_dict = {'type': question['questionType'],
-                     'prompt': question['prompt']}
-    models.Question.update(question, **question_dict)
-
-
-def update_response(response, snapshot):
-    accessor, ids = codec.get_response_accessor(response, snapshot)
-    accessor.update(response, **ids)
-
-
-def update_report(snapshot):
-    # Update snapshots
-    for model in models:
-        update_snapshot(snapshot, model)
-    # Update responses
-    for response in snapshot['responses']:
-        update_response(response, snapshot)
diff --git a/examples/add_reports_from_yesterday.sh b/examples/add_reports_from_yesterday.sh
index af92c1b..ddd8e06 100644
--- a/examples/add_reports_from_yesterday.sh
+++ b/examples/add_reports_from_yesterday.sh
@@ -2,6 +2,6 @@
 REPORTER_PATH=$HOME/Dropbox/Apps/Reporter-App
 # Get yesterday's date
-yesterday=$(date -v -1d +"%Y-%m-%d")
+YESTERDAY=$(date -v -1d +"%Y-%m-%d")
 # Add the reports from the file dated yesterday to the database
-datums --add "$REPORTER_PATH/$yesterday-reporter-export.json"
+datums --add "$REPORTER_PATH/$YESTERDAY-reporter-export.json"
diff --git a/images/data_model.png b/images/data_model.png
index 9134b55..e2dc0dd 100644
Binary files a/images/data_model.png and b/images/data_model.png differ
diff --git a/requirements.txt b/requirements.txt
new file mode 100644
index 0000000..8dbe0ba
--- /dev/null
+++ b/requirements.txt
@@ -0,0 +1,10 @@
+SQLAlchemy==1.0.11
+SQLAlchemy-Utils==0.29.9
+datums==1.0.0
+funcsigs==0.4
+mock==1.3.0
+pbr==1.8.1
+psycopg2==2.6.1
+python-dateutil==2.4.2
+six==1.10.0
+wsgiref==0.1.2
diff --git a/setup.py b/setup.py
index 1777b47..ae36eb5 100644
--- a/setup.py
+++ b/setup.py
@@ -12,7 +12,9 @@ def readme():
     packages = ['datums', 'datums.pipeline', 'datums.models'],
     version = __version__,
     scripts = ['bin/datums'],
-    install_requires = ['sqlalchemy', 'sqlalchemy-utils', 'python-dateutil'],
+    install_requires = [
+        'alembic', 'sqlalchemy', 'sqlalchemy-utils', 'python-dateutil'],
+    tests_require = ['mock'],
     description = 'A PostgreSQL pipeline for Reporter.',
     author = 'Jane Stewart Adams',
     author_email = 'jane@thejunglejane.com',
diff --git a/tests/__init__.py b/tests/__init__.py
new file mode 100644
index 0000000..4454c48
--- /dev/null
+++ b/tests/__init__.py
@@ -0,0 +1 @@
+__all__ = ['test_codec', 'test_models', 'test_pipeline']
\ No newline at end of file
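The test suite below depends on `mock` (declared in `tests_require` above). Assuming the repository root as the working directory, it can be run with stock unittest discovery:

```python
# Programmatic equivalent of `python -m unittest discover tests`;
# requires mock to be installed.
import unittest

suite = unittest.defaultTestLoader.discover('tests')
unittest.TextTestRunner(verbosity=2).run(suite)
```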
diff --git a/tests/test_codec.py b/tests/test_codec.py
new file mode 100644
index 0000000..637e18a
--- /dev/null
+++ b/tests/test_codec.py
@@ -0,0 +1,65 @@
+import mock
+import unittest
+from datums import models
+from datums.pipeline import codec
+from sqlalchemy.orm import query
+
+
+class TestCodec(unittest.TestCase):
+
+    def setUp(self):
+        self.response = {'questionPrompt': 'How anxious are you?',
+                         'uniqueIdentifier': '5496B14F-EAF1-4EF7-85DB-9531FDD7DC17',
+                         'numericResponse': '0'}
+        self.report = {
+            'uniqueIdentifier': '1B7AADBF-C137-4F35-A099-D73ACE534CFC'}
+
+    def tearDown(self):
+        del self.response
+        del self.report
+
+    def test_human_to_boolean_none(self):
+        '''Does human_to_boolean return None if a non-list or an empty list is
+        passed?
+        '''
+        self.assertIsNone(codec.human_to_boolean([]))
+        self.assertIsNone(codec.human_to_boolean(3))
+
+    def test_human_to_boolean_true(self):
+        '''Does human_to_boolean return True if a list containing 'Yes' is
+        passed?
+        '''
+        self.assertTrue(codec.human_to_boolean(['Yes']))
+
+    def test_human_to_boolean_false(self):
+        '''Does human_to_boolean return False if a list containing 'No', or
+        something else that is not 'Yes', is passed?
+        '''
+        self.assertFalse(codec.human_to_boolean(['No']))
+        self.assertFalse(codec.human_to_boolean(['Foo']))
+
+    def test_human_to_boolean_true_followed_by_false(self):
+        '''Does human_to_boolean return True if the first element of the list
+        is 'Yes', ignoring other elements of the list?
+        '''
+        self.assertTrue(codec.human_to_boolean(['Yes', 'No']))
+
+    @mock.patch.object(query.Query, 'first')
+    @mock.patch.object(query.Query, 'filter', return_value=query.Query(
+        models.Question))
+    @mock.patch.object(
+        models.session, 'query', return_value=query.Query(models.Question))
+    def test_get_response_accessor_valid_response_type(
+            self, mock_session_query, mock_query_filter, mock_query_first):
+        '''Does get_response_accessor() return the right accessor for the
+        prompt in the response, as well as the question and report IDs?
+        '''
+        mock_query_first.return_value = (1, 5)
+        mapper, ids = codec.get_response_accessor(self.response, self.report)
+        mock_session_query.assert_called_once_with(
+            models.Question.id, models.Question.type)
+        self.assertTrue(mock_query_filter.called)
+        self.assertEqual(mapper, codec.numeric_accessor)
+        self.assertIsInstance(mapper, models.base.ResponseClassLegacyAccessor)
+        self.assertDictEqual(ids, {
+            'question_id': 1, 'report_id': self.report['uniqueIdentifier']})
diff --git a/tests/test_models.py b/tests/test_models.py
new file mode 100644
index 0000000..e38298b
--- /dev/null
+++ b/tests/test_models.py
@@ -0,0 +1,215 @@
+# -*- coding: utf-8 -*-
+
+import mock
+import random
+import unittest
+from datums import models
+from sqlalchemy.orm import query
+
+
+class TestModelsBase(unittest.TestCase):
+
+    def setUp(self):
+        self.GhostBaseInstance = models.base.GhostBase()
+
+    def tearDown(self):
+        del self.GhostBaseInstance
+
+    @mock.patch.object(models.base.metadata, 'create_all')
+    def test_database_setup(self, mock_create_all):
+        models.base.database_setup(models.engine)
+        mock_create_all.assert_called_once_with(models.engine)
+
+    @mock.patch.object(models.base.metadata, 'drop_all')
+    def test_database_teardown(self, mock_drop_all):
+        models.base.database_teardown(models.engine)
+        mock_drop_all.assert_called_once_with(models.engine)
+
+    @mock.patch.object(models.session, 'commit')
+    @mock.patch.object(models.session, 'add')
+    def test_action_and_commit_valid_kwargs(
+            self, mock_session_add, mock_session_commit):
+        '''Does the _action_and_commit() method commit the session if the
+        kwargs are valid?
+        '''
+        kwargs = {'section_identifier': 'bar'}
+        obj = models.Report(**kwargs)
+        setattr(self.GhostBaseInstance, 'section_identifier', 'bar')
+        models.base._action_and_commit(obj, mock_session_add)
+        mock_session_add.assert_called_once_with(obj)
+        self.assertTrue(mock_session_commit.called)
+
+
+@mock.patch.object(query.Query, 'first')
+@mock.patch.object(query.Query, 'filter_by', return_value=query.Query(
+    models.Report))
+@mock.patch.object(models.session, 'query', return_value=query.Query(
+    models.Report))
+class TestGhostBase(unittest.TestCase):
+
+    def setUp(self):
+        self.GhostBaseInstance = models.base.GhostBase()
+
+    def tearDown(self):
+        del self.GhostBaseInstance
+
+    def test_get_instance_exists(
+            self, mock_session_query, mock_query_filter, mock_query_first):
+        '''Does the _get_instance() method return an existing instance of the
+        class?
+        '''
+        mock_query_first.return_value = models.base.GhostBase()
+        self.assertIsInstance(
+            self.GhostBaseInstance._get_instance(
+                **{'foo': 'bar'}), self.GhostBaseInstance.__class__)
+        mock_session_query.assert_called_once_with(models.base.GhostBase)
+        mock_query_filter.assert_called_once_with(**{'foo': 'bar'})
+        self.assertTrue(mock_query_first.called)
+
+    def test_get_instance_does_not_exist(
+            self, mock_session_query, mock_query_filter, mock_query_first):
+        '''Does the _get_instance() method return None if no instance of the
+        class exists?
+        '''
+        mock_query_first.return_value = None
+        self.assertIsNone(
+            self.GhostBaseInstance._get_instance(**{'foo': 'bar'}))
+        mock_session_query.assert_called_once_with(models.base.GhostBase)
+        mock_query_filter.assert_called_once_with(**{'foo': 'bar'})
+        self.assertTrue(mock_query_first.called)
+
+    @mock.patch.object(models.session, 'add')
+    def test_get_or_create_get(self, mock_session_add, mock_session_query,
+                               mock_query_filter, mock_query_first):
+        '''Does the get_or_create() method return an instance of the class
+        without adding it to the session if the instance already exists?
+        '''
+        mock_query_first.return_value = True
+        self.assertTrue(self.GhostBaseInstance.get_or_create(**{'id': 'foo'}))
+        mock_session_add.assert_not_called()
+
+    @mock.patch.object(models.session, 'add')
+    def test_get_or_create_add(self, mock_session_add, mock_session_query,
+                               mock_query_filter, mock_query_first):
+        '''Does the get_or_create() method create a new instance and add it to
+        the session if the instance does not already exist?
+        '''
+        mock_query_first.return_value = None
+        self.assertIsInstance(
+            models.Report.get_or_create(
+                **{'id': 'foo'}), models.base.GhostBase)
+        self.assertTrue(mock_session_add.called)
+
+    @mock.patch.object(models.session, 'add')
+    def test_update_exists(self, mock_session_add, mock_session_query,
+                           mock_query_filter, mock_query_first):
+        '''Does the update() method update the __dict__ attribute of an
+        existing instance of the class and add it to the session?
+        '''
+        _ = models.Report
+        mock_query_first.return_value = _
+        self.GhostBaseInstance.update(**{'id': 'bar'})
+        self.assertTrue(hasattr(_, 'id'))
+        self.assertTrue(mock_session_add.called)
+
+    @mock.patch.object(models.session, 'add')
+    def test_update_does_not_exist(self, mock_session_add, mock_session_query,
+                                   mock_query_filter, mock_query_first):
+        '''Does the update() method create a new instance and add it to the
+        session if the instance does not already exist?
+        '''
+        mock_query_first.return_value = None
+        models.Report.update(**{'id': 'bar'})
+        self.assertTrue(mock_session_add.called)
+
+    @mock.patch.object(models.base, '_action_and_commit')
+    @mock.patch.object(models.session, 'delete')
+    def test_delete_exists(
+            self, mock_session_delete, mock_action_commit,
+            mock_session_query, mock_query_filter, mock_query_first):
+        '''Does the delete() method validate an existing instance of the class
+        before deleting from the session?
+        '''
+        mock_query_first.return_value = True
+        self.GhostBaseInstance.delete()
+        self.assertTrue(mock_action_commit.called)
+
+    @mock.patch.object(models.base, '_action_and_commit')
+    @mock.patch.object(models.session, 'delete')
+    def test_delete_does_not_exist(
+            self, mock_session_delete, mock_action_commit,
+            mock_session_query, mock_query_filter, mock_query_first):
+        '''Does the delete() method do nothing if the instance does not
+        already exist?
+        '''
+        mock_query_first.return_value = None
+        self.GhostBaseInstance.delete()
+        mock_action_commit.assert_not_called()
+
+
+@mock.patch.object(models.base, '_action_and_commit')
+class TestResponseClassLegacyAccessor(unittest.TestCase):
+
+    _response_classes = models.Response.__subclasses__()
+    _response_classes.remove(models.LocationResponse)  # tested separately
+
+    def setUp(self, mock_response=random.choice(_response_classes)):
+        self.LegacyInstance = models.base.ResponseClassLegacyAccessor(
+            response_class=mock_response, column='foo_response',
+            accessor=(lambda x: x.get('foo')))
+        self.test_response = {'foo': 'bar'}
+        self.mock_response = mock_response
+
+    def tearDown(self):
+        del self.LegacyInstance
+
+    @mock.patch.object(models.base.ResponseClassLegacyAccessor, '_get_instance')
+    @mock.patch.object(models.base.ResponseClassLegacyAccessor,
+                       'get_or_create_from_legacy_response')
+    def test_update_exists(
+            self, mock_get_create, mock_get_instance, mock_action_commit):
+        '''Does the update() method call _action_and_commit() with
+        models.session.add if there is an existing instance in the database,
+        without calling get_or_create_from_legacy_response()?
+        '''
+        _ = models.Report()
+        mock_get_instance.return_value = _
+        self.LegacyInstance.update(self.test_response)
+        self.assertTrue(mock_get_instance.called)
+        mock_action_commit.assert_called_once_with(_, models.session.add)
+        mock_get_create.assert_not_called()
+
+    @mock.patch.object(models.base.ResponseClassLegacyAccessor, '_get_instance')
+    @mock.patch.object(models.base.ResponseClassLegacyAccessor,
+                       'get_or_create_from_legacy_response')
+    def test_update_does_not_exist(
+            self, mock_get_create, mock_get_instance, mock_action_commit):
+        '''Does the update() method call get_or_create_from_legacy_response()
+        if there isn't an existing instance in the database, without calling
+        _action_and_commit()?
+        '''
+        mock_get_instance.return_value = None
+        self.LegacyInstance.update(self.test_response)
+        mock_action_commit.assert_not_called()
+        mock_get_create.assert_called_once_with(self.test_response)
+
+    @mock.patch.object(models.base.ResponseClassLegacyAccessor, '_get_instance')
+    def test_delete_exists(self, mock_get_instance, mock_action_commit):
+        '''Does the delete() method call _action_and_commit() with
+        models.session.delete if an instance exists?
+        '''
+        _ = models.Report()
+        mock_get_instance.return_value = _
+        self.LegacyInstance.delete(self.test_response)
+        self.assertTrue(mock_get_instance.called)
+        mock_action_commit.assert_called_once_with(_, models.session.delete)
+
+    @mock.patch.object(models.base.ResponseClassLegacyAccessor, '_get_instance')
+    def test_delete_does_not_exist(
+            self, mock_get_instance, mock_action_commit):
+        '''Does the delete() method do nothing if no instance exists?
+        '''
+        mock_get_instance.return_value = None
+        self.LegacyInstance.delete(self.test_response)
+        self.assertTrue(mock_get_instance.called)
+        mock_action_commit.assert_not_called()
diff --git a/tests/test_pipeline.py b/tests/test_pipeline.py
new file mode 100644
index 0000000..4b0e69b
--- /dev/null
+++ b/tests/test_pipeline.py
@@ -0,0 +1,363 @@
+# -*- coding: utf-8 -*-
+
+import datetime
+import mock
+import random
+import unittest
+import uuid
+import warnings
+from dateutil.parser import parse
+from dateutil.tz import tzoffset
+from datums import models
+from datums import pipeline
+from datums.pipeline import mappers, codec
+from sqlalchemy.orm import query
+
+
+class TestQuestionPipeline(unittest.TestCase):
+
+    def setUp(self):
+        self.question = {'questionType': 'numeric',
+                         'prompt': 'How anxious are you?'}
+        self.question_dict = {'type': self.question['questionType'],
+                              'prompt': self.question['prompt']}
+
+    def tearDown(self):
+        delattr(self, 'question')
+        delattr(self, 'question_dict')
+
+    def test_question_pipeline_init(self):
+        '''Are QuestionPipeline objects fully initialized?
+        '''
+        q = pipeline.QuestionPipeline(self.question)
+        self.assertTrue(hasattr(q, 'question_dict'))
+        self.assertDictEqual(q.question_dict, self.question_dict)
+
+    @mock.patch.object(models.Question, 'get_or_create')
+    def test_question_pipeline_add(self, mock_get_create):
+        '''Does the add() method on QuestionPipeline objects call
+        models.Question.get_or_create with the question_dict attribute?
+        '''
+        pipeline.QuestionPipeline(self.question).add()
+        mock_get_create.assert_called_once_with(**self.question_dict)
+
+    @mock.patch.object(models.Question, 'update')
+    def test_question_pipeline_update(self, mock_update):
+        '''Does the update() method on QuestionPipeline objects call
+        models.Question.update with the question_dict attribute?
+        '''
+        pipeline.QuestionPipeline(self.question).update()
+        mock_update.assert_called_once_with(**self.question_dict)
+
+    @mock.patch.object(models.Question, 'delete')
+    def test_question_pipeline_delete(self, mock_delete):
+        '''Does the delete() method on QuestionPipeline objects call
+        models.Question.delete with the question_dict attribute?
+        '''
+        pipeline.QuestionPipeline(self.question).delete()
+        mock_delete.assert_called_once_with(**self.question_dict)
+
+
+@mock.patch.object(codec, 'get_response_accessor')
+class TestResponsePipeline(unittest.TestCase):
+
+    def setUp(self):
+        self.report = {'uniqueIdentifier': uuid.uuid4(), 'responses': [{
+            'questionPrompt': 'How anxious are you?',
+            'uniqueIdentifier': uuid.uuid4(),
+            'numericResponse': '1'}]}
+        self.response = self.report.pop('responses')[0]
+        self.accessor = codec.numeric_accessor
+        self.ids = {'report_id': self.report['uniqueIdentifier'],
+                    'question_id': 1}
+
+    def tearDown(self):
+        delattr(self, 'report')
+        delattr(self, 'response')
+        delattr(self, 'accessor')
+        delattr(self, 'ids')
+
+    def test_response_pipeline_init(self, mock_get_accessor):
+        '''Are ResponsePipeline objects fully initialized?
+        '''
+        mock_get_accessor.return_value = (codec.numeric_accessor, self.ids)
+        r = pipeline.ResponsePipeline(self.response, self.report)
+        self.assertTrue(hasattr(r, 'accessor'))
+        self.assertTrue(hasattr(r, 'ids'))
+        self.assertEquals(r.accessor, self.accessor)
+        self.assertDictEqual(r.ids, self.ids)
+
+    @mock.patch.object(
+        codec.numeric_accessor, 'get_or_create_from_legacy_response')
+    def test_response_pipeline_add(
+            self, mock_get_create_legacy, mock_get_accessor):
+        '''Does the add() method on ResponsePipeline objects call
+        codec.numeric_accessor.get_or_create_from_legacy_response with the
+        response and the ids attribute?
+        '''
+        mock_get_accessor.return_value = (codec.numeric_accessor, self.ids)
+        pipeline.ResponsePipeline(self.response, self.report).add()
+        mock_get_accessor.assert_called_once_with(self.response, self.report)
+        mock_get_create_legacy.assert_called_once_with(
+            self.response, **self.ids)
+
+    @mock.patch.object(codec.numeric_accessor, 'update')
+    def test_response_pipeline_update(self, mock_update, mock_get_accessor):
+        '''Does the update() method on ResponsePipeline objects call
+        codec.numeric_accessor.update with the response and the ids attribute?
+        '''
+        mock_get_accessor.return_value = (codec.numeric_accessor, self.ids)
+        pipeline.ResponsePipeline(self.response, self.report).update()
+        mock_update.assert_called_once_with(self.response, **self.ids)
+
+    @mock.patch.object(codec.numeric_accessor, 'delete')
+    def test_response_pipeline_delete(self, mock_delete, mock_get_accessor):
+        '''Does the delete() method on ResponsePipeline objects call
+        codec.numeric_accessor.delete with the response and the ids attribute?
+        '''
+        mock_get_accessor.return_value = (codec.numeric_accessor, self.ids)
+        pipeline.ResponsePipeline(self.response, self.report).delete()
+        mock_delete.assert_called_once_with(self.response, **self.ids)
+
+
+class TestReportPipeline(unittest.TestCase):
+
+    def setUp(self):
+        self.report = {'uniqueIdentifier': uuid.uuid4(), 'audio': {
+            'uniqueIdentifier': uuid.uuid4(), 'avg': -59.8, 'peak': -57, },
+            'connection': 0, 'battery': 0.89, 'location': {
+                'uniqueIdentifier': uuid.uuid4(), 'speed': -1,
+                'longitude': -73.9, 'latitude': 40.8, 'altitude': 11.2,
+                'placemark': {
+                    'uniqueIdentifier': uuid.uuid4(),
+                    'country': 'United States', 'locality': 'New York'}}}
+        self.maxDiff = None
+
+    def tearDown(self):
+        delattr(self, 'report')
+
+    def test_report_pipeline_init(self):
+        '''Are ReportPipeline objects fully initialized?
+        '''
+        r = pipeline.ReportPipeline(self.report)
+        self.assertTrue(hasattr(r, 'report'))
+        self.assertDictEqual(r.report, self.report)
+
+    def test_report_pipeline_report_add(self):
+        '''Does the _report() method on ReportPipeline objects return a
+        dictionary of top-level report attributes mapped to the correct datums
+        attribute names, and an unmapped dictionary of nested report attributes
+        with the parent level report's uniqueIdentifier added, when the action
+        specified is models.Report.get_or_create?
+        '''
+        top_level, nested_level = pipeline.ReportPipeline(self.report)._report(
+            models.Report.get_or_create)
+        self.assertDictEqual(top_level, {
+            'id': self.report['uniqueIdentifier'], 'connection': 0,
+            'battery': 0.89})
+        self.assertDictEqual(nested_level, {'audio': {
+            'reportUniqueIdentifier': self.report['uniqueIdentifier'],
+            'uniqueIdentifier': self.report['audio']['uniqueIdentifier'],
+            'avg': -59.8, 'peak': -57}, 'location': {
+            'reportUniqueIdentifier': self.report['uniqueIdentifier'],
+            'uniqueIdentifier': self.report['location']['uniqueIdentifier'],
+            'latitude': 40.8, 'longitude': -73.9, 'altitude': 11.2,
+            'speed': -1, 'placemark': {
+                'uniqueIdentifier': self.report['location'][
+                    'placemark']['uniqueIdentifier'],
+                'country': 'United States', 'locality': 'New York'}}})
+
+    def test_report_pipeline_report_add_altitude_no_uuid(self):
+        '''Does the _report() method on ReportPipeline objects add a
+        uniqueIdentifier to a nested AltitudeReport if there isn't one and the
+        action specified is models.Report.get_or_create?
+        '''
+        self.report['altitude'] = {
+            'floorsDescended': 0, 'pressure': 101.5, 'floorsAscended': 0}
+        top_level, nested_level = pipeline.ReportPipeline(self.report)._report(
+            models.Report.get_or_create)
+        self.assertSetEqual(set(nested_level['altitude'].keys()),
+                            set(['uniqueIdentifier', 'floorsAscended',
+                                 'floorsDescended', 'pressure',
+                                 'reportUniqueIdentifier']))
+
+    def test_report_pipeline_report_attr_not_supported(self):
+        '''Does the _report() method on ReportPipeline objects generate a
+        warning if there is an attribute in the report that is not yet
+        supported (i.e., not yet in mappers._report_key_mapper)?
+        '''
+        r = {'foo': 'bar'}
+        with warnings.catch_warnings(record=True) as w:
+            warnings.simplefilter('always')
+            pipeline.ReportPipeline(r)._report(models.Report.get_or_create)
+            self.assertEquals(len(w), 1)
+            self.assertEquals(w[-1].category, UserWarning)
+
+    def test_report_pipeline_report_update_altitude_no_uuid(self):
+        '''Does the _report() method on ReportPipeline objects NOT add a
+        uniqueIdentifier to a nested AltitudeReport and generate a warning if
+        there isn't one and the action specified is models.Report.update?
+        '''
+        self.report['altitude'] = {
+            'floorsDescended': 0, 'pressure': 101.5, 'floorsAscended': 0}
+        with warnings.catch_warnings(record=True) as w:
+            warnings.simplefilter('always')
+            top_level, nested_level = pipeline.ReportPipeline(
+                self.report)._report(models.Report.update)
+            self.assertEquals(len(w), 1)
+            self.assertEquals(w[-1].category, UserWarning)
+        self.assertSetEqual(
+            set(nested_level.keys()), set(['audio', 'location']))
+
+    def test_report_pipeline_report_update_altitude_uuid(self):
+        '''Does the _report() method on ReportPipeline objects NOT generate a
+        warning if a nested AltitudeReport has a uniqueIdentifier and the
+        action specified is models.Report.update?
+        '''
+        self.report['altitude'] = {
+            'floorsDescended': 0, 'pressure': 101.5,
+            'floorsAscended': 0, 'uniqueIdentifier': uuid.uuid4()}
+        with warnings.catch_warnings(record=True) as w:
+            warnings.simplefilter('always')
+            top_level, nested_level = pipeline.ReportPipeline(
+                self.report)._report(models.Report.update)
+            self.assertEquals(len(w), 0)
+        self.assertSetEqual(set(nested_level.keys()),
+                            set(['audio', 'location', 'altitude']))
+
+    # TODO (jsa): test recursion
+    @mock.patch.object(models.Report, 'get_or_create')
+    @mock.patch.object(pipeline.ReportPipeline, '_report')
+    def test_report_pipeline_add(self, mock_report, mock_get_create):
+        '''Does the add() method on ReportPipeline objects call the _report()
+        method on the object and then call models.Report.get_or_create() with
+        the top level dictionary returned, then recurse the keys in the nested
+        dictionary returned?
+        '''
+        top_level = {'id': uuid.uuid4()}
+        mock_report.return_value = (top_level, {})
+        pipeline.ReportPipeline(self.report).add(models.Report.get_or_create)
+        mock_report.assert_called_once_with(
+            mock_get_create, mappers._report_key_mapper)
+        mock_get_create.assert_called_once_with(**top_level)
+
+    # TODO (jsa): test recursion
+    @mock.patch.object(models.Report, 'update')
+    @mock.patch.object(pipeline.ReportPipeline, '_report')
+    def test_report_pipeline_update(self, mock_report, mock_update):
+        '''Does the update() method on ReportPipeline objects call the _report()
+        method on the object and then call models.Report.update() with the top
+        level dictionary returned, then recurse the keys in the nested
+        dictionary returned?
+        '''
+        top_level = {'id': uuid.uuid4()}
+        mock_report.return_value = (top_level, {})
+        pipeline.ReportPipeline(self.report).update(models.Report.update)
+        mock_report.assert_called_once_with(
+            mock_update, mappers._report_key_mapper)
+        mock_update.assert_called_once_with(**top_level)
+
+    @mock.patch.object(models.Report, 'delete')
+    def test_report_pipeline_delete(self, mock_report_delete):
+        '''Does the delete() method on ReportPipeline objects call the
+        models.Report.delete() method with the uniqueIdentifier of the report?
+        '''
+        pipeline.ReportPipeline(self.report).delete()
+        mock_report_delete.assert_called_once_with(
+            **{'id': self.report['uniqueIdentifier']})
+
+
+class TestSnapshotPipeline(unittest.TestCase):
+
+    def setUp(self):
+        self.snapshot = {'uniqueIdentifier': uuid.uuid4(), 'responses': [
+            {'questionPrompt': 'How tired are you?',
+             'uniqueIdentifier': uuid.uuid4(), 'numericResponse': '2'},
+            {'questionPrompt': 'Where are you?',
+             'uniqueIdentifier': uuid.uuid4(),
+             'locationResponse': {
+                 'longitude': -73.9,
+                 'latitude': 40.8,
+                 'uniqueIdentifier': uuid.uuid4()}, 'text': 'Home'}]}
+        self.report = self.snapshot.copy()
+        self.responses = self.report.pop('responses')
+        self.ids = {'report_id': self.report['uniqueIdentifier'],
+                    'question_id': 1}
+        self._ = None
+
+    def tearDown(self):
+        delattr(self, 'snapshot')
+        delattr(self, 'report')
+        delattr(self, 'responses')
+        delattr(self, '_')
+
+    def test_snapshot_pipeline_init_no_photoset(self):
+        '''Are SnapshotPipeline objects fully initialized when no photoSet is
+        present?
+        '''
+        s = pipeline.SnapshotPipeline(self.snapshot)
+        self.assertTrue(hasattr(s, 'report'))
+        self.assertTrue(hasattr(s, 'responses'))
+        self.assertFalse(hasattr(s, '_'))
+        self.assertDictEqual(s.report, self.report)
+        self.assertListEqual(s.responses, self.responses)
+
+    def test_snapshot_pipeline_init_photoset(self):
+        '''Are SnapshotPipeline objects fully initialized when a photoSet is
+        present?
+        '''
+        self.snapshot['photoSet'] = {'photos': [
+            {'uniqueIdentifier': uuid.uuid4()}]}
+        s = pipeline.SnapshotPipeline(self.snapshot)
+        self.assertTrue(hasattr(s, 'report'))
+        self.assertTrue(hasattr(s, 'responses'))
+        self.assertFalse(hasattr(s, '_'))
+        self.assertDictEqual(s.report, self.report)
+        self.assertListEqual(s.responses, self.responses)
+
+    @mock.patch.object(codec, 'get_response_accessor')
+    @mock.patch.object(pipeline.ResponsePipeline, 'add')
+    @mock.patch.object(pipeline.ReportPipeline, 'add')
+    def test_snapshot_pipeline_add(
+            self, mock_report_add, mock_response_add, mock_get_accessor):
+        '''Does the add() method on SnapshotPipeline objects initialize a
+        ReportPipeline object and call its add() method, and initialize n
+        ResponsePipeline objects and call their add() methods, where n is the
+        number of responses included in the snapshot?
+        '''
+        mock_get_accessor.return_value = (codec.numeric_accessor, self.ids)
+        pipeline.SnapshotPipeline(self.snapshot).add()
+        self.assertEqual(mock_report_add.call_count, 1)
+        self.assertEquals(mock_response_add.call_count, 2)
+
+    @mock.patch.object(codec, 'get_response_accessor')
+    @mock.patch.object(pipeline.ResponsePipeline, 'update')
+    @mock.patch.object(pipeline.ReportPipeline, 'update')
+    def test_snapshot_pipeline_update(
+            self, mock_report_update, mock_response_update, mock_get_accessor):
+        '''Does the update() method on SnapshotPipeline objects initialize a
+        ReportPipeline object and call its update() method, and initialize n
+        ResponsePipeline objects and call their update() methods, where n is
+        the number of responses included in the snapshot?
+        '''
+        mock_get_accessor.return_value = (codec.numeric_accessor, self.ids)
+        pipeline.SnapshotPipeline(self.snapshot).update()
+        self.assertEqual(mock_report_update.call_count, 1)
+        self.assertEquals(mock_response_update.call_count, 2)
+
+    @mock.patch.object(pipeline.ResponsePipeline, 'delete')
+    @mock.patch.object(pipeline.ReportPipeline, 'delete')
+    def test_snapshot_pipeline_delete(
+            self, mock_report_delete, mock_response_delete):
+        '''Does the delete() method on SnapshotPipeline objects initialize a
+        ReportPipeline object and call its delete() method, without initializing
+        any ResponsePipeline objects and calling their delete() methods?
+        '''
+        pipeline.SnapshotPipeline(self.snapshot).delete()
+        self.assertEqual(mock_report_delete.call_count, 1)
+        mock_response_delete.assert_not_called()
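The altitude behavior these last tests pin down is easy to see directly. A sketch using a minimal report dict (identifiers invented for illustration; like the rest of the package, this is Python 2):

```python
# Sketch: _report() mints a UUID for an altitude report on get_or_create,
# but drops it (with a UserWarning) on update, as the tests above assert.
import uuid
from datums import models, pipeline

report = {'uniqueIdentifier': uuid.uuid4(),
          'altitude': {'pressure': 101.5, 'floorsAscended': 0,
                       'floorsDescended': 0}}

top, nested = pipeline.ReportPipeline(report)._report(
    models.Report.get_or_create)
assert 'uniqueIdentifier' in nested['altitude']  # minted on add
```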