From 7554a34c009eb722c9e38ead837096e3e249db31 Mon Sep 17 00:00:00 2001 From: josephjclark Date: Thu, 15 Feb 2024 15:55:29 +0000 Subject: [PATCH] Release: 1.0 (worker, engine, cli, runtime) (#592) * engine: spike mapping console logs to an adaptor logger * runtime: messy tweak to module loading * engine,runtime: revert linker change and fix tests * engine: track test file * logger: don't stringify json output AND serialize errors This caused problems with the worker because errors get flattened to {}, and also we have to double parse. Now the logger will just emit whatever it logged to whatever the log emitter is, so JSON stays as JSON. Which is good, but it no longer guarantees it'll be serializable * logger: tidy * engine: don't parse json logs coming out of the logger * engine, worker: better handling of objects coming from the logger The logger always sends raw json, but the log message is stringified by the engine, and rebuilt by the worker before sending to lightning this last bit needs work but it's better * engine: fix tests * logger: tests and types * cli: update test * engine: types * worker: update tests * logger: set a special json emitter so that json logs get nicely printed in the CLI * logger: fix types * logger: log all json to .log * tests: fixes * logger: fix tests * logger: serialise print() properly * logger: types * engine: fix logs to gcp They were neglecting to parse the strings sent out by the new json logger * test: update log handling * engine: fix passing test It was secretly failing under the hood * runtime: add tests on job logger and errors * logger: improve detection of error objects * engine: tests on error logging * engine: restore adaptor logger * changesets * Tidy ups * engine: refactor log messages (and be a bit more lenient about structure) * worker: simplify logging * tiny tidyups * remove old docs * lexicon: start building a central lexicon of definitions * runtime: huge refactor of runtime core API * runtime: more refactoring * 
runtime: take initial state out of the execution plan * fix tests * runtime: changeset * runtime: extra type tweaks * runtime: readme * runtime: jobs -> steps (mostly) there are cases where job is more accurate and useful * cli: start refactoring towards new runtime API Done a big chunk of execute but still a way to go * cli: basically get the CLI working again * cli: types * cli: fix a bunch of tests, update workflow parsing * cli: fix execute and compile tests * cli: more test fixes * fix more cli tests * cli: fix integration tests * cli: tidy * runtime: remove strict mode * remove strict mode * cli: default workflow name to the file name * runtime: tweak log output * cli: remove log * cli: types * docs * deploy: adjust logging * engine: update types * engine: update names and types This is 90% of the basic rename done. Tests may even pass * runtime: make statePropsToRemove a system option, rather than workflow specific If a workflow wants to remove props, it'll add an fn block * engine: restore statePropsToRemove tests * mock: update to lexicon * worker: start mapping to lexicon. 
Handled run -> plan conversion * worker: typings * worker: fix all tests * engine: types * worker: fix cheeky test somehow missed it last time * tests: fix cli tests * worker: update test * package lock * tests: update test * changesets and housekeeping * more housekeeping * engine: tweak test * runtime: tweak error messages * worker: stricter type checking on tests * fix test * typing in worker tests * worker: update channel mock * lexicon: docs * Run -> LightningPlan * version bumps for logger and mock * mock: return error if dataclip not found * worker: better handling of dataclip errors * lightning-mock: fix test * worker: changeset * worker: fix test Don't return the loaded dataclip after the refactor * worker: fix test again * Backend renaming (1.0 version bumps plus the lexicon) (#585) * lexicon: start building a central lexicon of definitions * runtime: huge refactor of runtime core API * runtime: more refactoring * runtime: take initial state out of the execution plan * fix tests * runtime: changeset * runtime: extra type tweaks * runtime: readme * runtime: jobs -> steps (mostly) there are cases where job is more accurate and useful * cli: start refactoring towards new runtime API Done a big chunk of execute but still a way to go * cli: basically get the CLI working again * cli: types * cli: fix a bunch of tests, update workflow parsing * cli: fix execute and compile tests * cli: more test fixes * fix more cli tests * cli: fix integration tests * cli: tidy * runtime: remove strict mode * remove strict mode * cli: default workflow name to the file name * runtime: tweak log output * cli: remove log * cli: types * docs * deploy: adjust logging * engine: update types * engine: update names and types This is 90% of the basic rename done. 
Tests may even pass * runtime: make statePropsToRemove a system option, rather than workflow specific If a workflow wants to remove props, it'll add an fn block * engine: restore statePropsToRemove tests * mock: update to lexicon * worker: start mapping to lexicon. Handled run -> plan conversion * worker: typings * worker: fix all tests * engine: types * worker: fix cheeky test somehow missed it last time * tests: fix cli tests * worker: update test * package lock * tests: update test * changesets and housekeeping * more housekeeping * engine: tweak test * runtime: tweak error messages * worker: stricter type checking on tests * fix test * typing in worker tests * worker: update channel mock * lexicon: docs * Run -> LightningPlan * version bumps for logger and mock * Send worker versions (#593) * worker: send worker and API versions to Lightning * lexicon: fix API_VERSION export * cli: don't print compiler,runtime versions, also show monorepo for adaptor * cli: tweak output to optionally show components * worker: simplify version output * mock: resolve conflict * Autoinstall by default (#594) * lexicon: start building a central lexicon of definitions * runtime: huge refactor of runtime core API * runtime: more refactoring * runtime: take initial state out of the execution plan * fix tests * runtime: changeset * runtime: extra type tweaks * runtime: readme * runtime: jobs -> steps (mostly) there are cases where job is more accurate and useful * cli: start refactoring towards new runtime API Done a big chunk of execute but still a way to go * cli: basically get the CLI working again * cli: types * cli: fix a bunch of tests, update workflow parsing * cli: fix execute and compile tests * cli: more test fixes * fix more cli tests * cli: fix integration tests * cli: tidy * runtime: remove strict mode * remove strict mode * cli: default workflow name to the file name * runtime: tweak log output * cli: remove log * cli: types * docs * deploy: adjust logging * engine: 
update types * engine: update names and types This is 90% of the basic rename done. Tests may even pass * runtime: make statePropsToRemove a system option, rather than workflow specific If a workflow wants to remove props, it'll add an fn block * engine: restore statePropsToRemove tests * mock: update to lexicon * worker: start mapping to lexicon. Handled run -> plan conversion * worker: typings * worker: fix all tests * engine: types * worker: fix cheeky test somehow missed it last time * tests: fix cli tests * worker: update test * package lock * tests: update test * changesets and housekeeping * more housekeeping * engine: tweak test * runtime: tweak error messages * worker: stricter type checking on tests * fix test * typing in worker tests * worker: update channel mock * lexicon: docs * Run -> LightningPlan * version bumps for logger and mock * cli: autoinstall by default * cli: docs * changeset * cli: fix tests Need to disable autoinstall now or some tests will blow up! * openfnx: update console output * runtime: fix tests * worker: support output_dataclips on run options * worker: additional test of output_dataclips * types * mock: error if a credential does not exist * engine: throw nice exception if credentials fail to load * tests: add test for bad credential * worker: bad credential test * mock: update dev endpoint to allow invalid credentials * changeset * worker: move test into reasons * worker: tweak logs * Verify run token (#598) * worker: start trying to verify the attempt token * worker: roughly verify the run token * mock: generate a real jwt for runs * mock: tweak key handling * worker: verify the run token * changesets * todo * worker: support public key from env * worker: better cli handling * error handling * worker: destroy server if run token is invalid * test: add integration test for errors * tests: add keys to more tests * test: fix privateKey * tidyups * more tidyups * version lock phoenix to 1.7.10; 1.7.11 introduces a compatibility 
issue * logger: add proxy function to the mock * engine: don't send adaptor logs to stdout * tests: add test for adaptor logs * changeset * tests: remove logging * types * logger: rethink mock proxy. It's still not working. * logger: fix mock proxy function * engine: fix tests * tests: update tests * worker: fixed a tricky issue with server shutdown If a server is destroyed before the lightning connection returned, the workloop will still fire even if the server is technically destroyed * package lock * package lock * tests: tweak output * tests: run serially * tests: reorganise * version: worker@1.0.0 cli@1.0.0 --------- Co-authored-by: Taylor Downs --- build/install-global.js | 8 +- integration-tests/cli/test/cli.test.ts | 2 +- integration-tests/cli/test/errors.test.ts | 7 +- .../cli/test/execute-workflow.test.ts | 45 +- .../cli/test/fixtures/circular.json | 26 +- .../test/fixtures/invalid-config-path.json | 13 +- .../cli/test/fixtures/invalid-exp-path.json | 12 +- .../cli/test/fixtures/invalid-start.json | 18 +- .../cli/test/fixtures/invalid-syntax.json | 12 +- .../cli/test/fixtures/multiple-inputs.json | 34 +- integration-tests/cli/test/fixtures/plan.json | 19 + .../cli/test/fixtures/wf-array.json | 40 +- .../cli/test/fixtures/wf-conditional.json | 50 +- .../cli/test/fixtures/wf-count.json | 26 +- .../cli/test/fixtures/wf-errors.json | 44 +- .../cli/test/fixtures/wf-simple.json | 14 +- .../cli/test/fixtures/wf-strict.json | 30 +- integration-tests/worker/CHANGELOG.md | 20 + .../@openfn/test-adaptor_1.0.0/index.js | 6 + .../@openfn/test-adaptor_1.0.0/package.json | 7 + .../worker/dummy-repo/package.json | 4 +- integration-tests/worker/package.json | 2 +- integration-tests/worker/src/init.ts | 13 +- .../worker/test/autoinstall.test.ts | 20 +- .../worker/test/benchmark.test.ts | 2 +- .../worker/test/integration.test.ts | 293 ++-- integration-tests/worker/test/runs.test.ts | 16 +- packages/cli/CHANGELOG.md | 23 + packages/cli/README.md | 89 +- 
packages/cli/package.json | 3 +- packages/cli/src/commands.ts | 22 +- packages/cli/src/compile/command.ts | 4 +- packages/cli/src/compile/compile.ts | 42 +- packages/cli/src/compile/handler.ts | 28 +- packages/cli/src/docs/handler.ts | 2 +- packages/cli/src/execute/command.ts | 13 +- packages/cli/src/execute/execute.ts | 48 +- .../src/execute/get-autoinstall-targets.ts | 30 +- packages/cli/src/execute/handler.ts | 27 +- packages/cli/src/execute/serialize-output.ts | 13 +- packages/cli/src/options.ts | 80 +- packages/cli/src/test/handler.ts | 66 +- packages/cli/src/types.ts | 6 + packages/cli/src/util/expand-adaptors.ts | 32 +- packages/cli/src/util/index.d.ts | 1 - packages/cli/src/util/index.ts | 6 + packages/cli/src/util/load-input.ts | 144 -- packages/cli/src/util/load-plan.ts | 252 ++++ .../cli/src/util/map-adaptors-to-monorepo.ts | 45 +- packages/cli/src/util/print-versions.ts | 29 +- packages/cli/src/util/validate-adaptors.ts | 6 +- packages/cli/test/commands.test.ts | 175 +-- packages/cli/test/compile/compile.test.ts | 82 +- packages/cli/test/compile/options.test.ts | 4 +- packages/cli/test/docgen/handler.test.ts | 2 +- packages/cli/test/execute/execute.test.ts | 293 ++-- .../execute/get-autoinstall-targets.test.ts | 198 +-- packages/cli/test/execute/options.test.ts | 17 +- .../cli/test/execute/parse-adaptors.test.ts | 101 +- packages/cli/test/integration.test.ts | 2 +- .../cli/test/options/ensure/inputPath.test.ts | 14 +- .../cli/test/options/ensure/strict.test.ts | 51 - packages/cli/test/options/execute.test.ts | 4 +- packages/cli/test/util.ts | 18 + .../cli/test/util/expand-adaptors.test.ts | 103 +- packages/cli/test/util/load-input.test.ts | 322 ----- packages/cli/test/util/load-plan.test.ts | 273 ++++ .../util/map-adaptors-to-monorepo.test.ts | 26 +- packages/cli/test/util/print-versions.test.ts | 73 +- packages/compiler/CHANGELOG.md | 9 + packages/compiler/package.json | 2 +- packages/compiler/src/compile.ts | 4 +- packages/deploy/CHANGELOG.md | 10 + 
packages/deploy/package.json | 2 +- packages/deploy/src/index.ts | 2 +- packages/engine-multi/CHANGELOG.md | 20 + packages/engine-multi/ava | 0 packages/engine-multi/package.json | 3 +- packages/engine-multi/src/api/autoinstall.ts | 14 +- packages/engine-multi/src/api/compile.ts | 15 +- packages/engine-multi/src/api/execute.ts | 9 +- packages/engine-multi/src/api/lifecycle.ts | 8 +- .../src/api/preload-credentials.ts | 31 +- .../src/classes/ExecutionContext.ts | 15 +- packages/engine-multi/src/engine.ts | 41 +- packages/engine-multi/src/errors.ts | 18 +- packages/engine-multi/src/test/util.ts | 39 +- .../engine-multi/src/test/worker-functions.ts | 5 +- packages/engine-multi/src/types.ts | 36 +- .../engine-multi/src/util/create-state.ts | 13 +- packages/engine-multi/src/worker/events.ts | 2 - packages/engine-multi/src/worker/pool.ts | 9 +- .../engine-multi/src/worker/thread/helpers.ts | 5 +- .../src/worker/thread/mock-run.ts | 17 +- .../engine-multi/src/worker/thread/run.ts | 9 +- packages/engine-multi/test/api.test.ts | 47 +- .../engine-multi/test/api/autoinstall.test.ts | 81 +- .../engine-multi/test/api/call-worker.test.ts | 14 +- .../engine-multi/test/api/execute.test.ts | 107 +- .../engine-multi/test/api/lifecycle.test.ts | 124 +- .../test/api/preload-credentials.test.ts | 138 +- packages/engine-multi/test/engine.test.ts | 142 +- packages/engine-multi/test/errors.test.ts | 119 +- .../engine-multi/test/integration.test.ts | 148 +- packages/engine-multi/test/security.test.ts | 6 +- .../test/worker/mock-worker.test.ts | 28 +- .../engine-multi/test/worker/pool.test.ts | 14 +- packages/engine-multi/tsconfig.json | 2 +- packages/lexicon/README.md | 42 + packages/lexicon/core.d.ts | 137 ++ packages/lexicon/index.d.ts | 2 + packages/lexicon/index.js | 1 + packages/lexicon/lightning.d.ts | 185 +++ packages/lexicon/lightning.js | 6 + packages/lexicon/package.json | 26 + packages/lightning-mock/CHANGELOG.md | 25 + packages/lightning-mock/package.json | 3 +- 
packages/lightning-mock/src/api-dev.ts | 22 +- packages/lightning-mock/src/api-sockets.ts | 105 +- packages/lightning-mock/src/index.ts | 2 + packages/lightning-mock/src/server.ts | 24 +- packages/lightning-mock/src/tokens.ts | 29 + packages/lightning-mock/src/types.ts | 117 +- packages/lightning-mock/src/util.ts | 19 +- .../test/channels/claim.test.ts | 10 +- .../lightning-mock/test/channels/run.test.ts | 64 +- .../lightning-mock/test/events/log.test.ts | 8 +- .../test/events/run-complete.test.ts | 4 +- .../test/events/run-start.test.ts | 4 +- .../test/events/step-complete.test.ts | 10 +- .../test/events/step-start.test.ts | 4 +- packages/lightning-mock/test/server.test.ts | 14 +- .../lightning-mock/test/socket-server.test.ts | 13 +- packages/lightning-mock/test/tokens.test.ts | 56 + packages/lightning-mock/test/util.ts | 2 +- packages/lightning-mock/tsconfig.json | 2 +- packages/logger/CHANGELOG.md | 12 + packages/logger/package.json | 2 +- packages/logger/src/logger.ts | 1 + packages/logger/test/logger.test.ts | 14 +- packages/runtime/CHANGELOG.md | 17 + packages/runtime/README.md | 44 +- packages/runtime/package.json | 3 +- packages/runtime/src/execute/compile-plan.ts | 81 +- packages/runtime/src/execute/context.ts | 7 +- packages/runtime/src/execute/expression.ts | 73 +- packages/runtime/src/execute/plan.ts | 35 +- .../runtime/src/execute/{job.ts => step.ts} | 103 +- packages/runtime/src/modules/module-loader.ts | 2 +- packages/runtime/src/runtime.ts | 93 +- packages/runtime/src/types.ts | 104 +- packages/runtime/src/util/assemble-state.ts | 13 +- packages/runtime/src/util/clone.ts | 2 +- packages/runtime/src/util/default-state.ts | 1 + packages/runtime/src/util/execute.ts | 2 +- packages/runtime/src/util/index.ts | 19 + packages/runtime/src/util/log-error.ts | 12 +- packages/runtime/src/util/validate-plan.ts | 17 +- packages/runtime/test/context.test.ts | 26 +- packages/runtime/test/errors.test.ts | 44 +- .../runtime/test/execute/compile-plan.test.ts | 363 
+++-- .../runtime/test/execute/expression.test.ts | 61 +- packages/runtime/test/execute/plan.test.ts | 1248 ++++++++--------- .../execute/{job.test.ts => step.test.ts} | 99 +- packages/runtime/test/memory.test.ts | 26 +- packages/runtime/test/runtime.test.ts | 437 +++--- packages/runtime/test/security.test.ts | 95 +- .../runtime/test/util/assemble-state.test.ts | 70 +- .../test/util/{regex.ts => regex.test.ts} | 0 .../runtime/test/util/validate-plan.test.ts | 131 +- packages/ws-worker/CHANGELOG.md | 20 + packages/ws-worker/package.json | 5 +- packages/ws-worker/src/api/claim.ts | 48 +- packages/ws-worker/src/api/destroy.ts | 7 +- packages/ws-worker/src/api/execute.ts | 100 +- packages/ws-worker/src/api/reasons.ts | 19 +- packages/ws-worker/src/api/workloop.ts | 14 +- packages/ws-worker/src/channels/run.ts | 22 +- .../ws-worker/src/channels/worker-queue.ts | 16 +- packages/ws-worker/src/events.ts | 86 +- packages/ws-worker/src/events/run-complete.ts | 3 +- packages/ws-worker/src/events/run-error.ts | 8 +- .../ws-worker/src/events/step-complete.ts | 16 +- packages/ws-worker/src/events/step-start.ts | 10 +- packages/ws-worker/src/mock/resolvers.ts | 3 +- packages/ws-worker/src/mock/runtime-engine.ts | 32 +- packages/ws-worker/src/mock/sockets.ts | 5 +- packages/ws-worker/src/server.ts | 27 +- packages/ws-worker/src/start.ts | 29 +- packages/ws-worker/src/types.d.ts | 79 +- .../src/util/convert-lightning-plan.ts | 172 +++ packages/ws-worker/src/util/convert-run.ts | 134 -- .../ws-worker/src/util/create-run-state.ts | 24 +- packages/ws-worker/src/util/get-with-reply.ts | 17 +- packages/ws-worker/src/util/index.ts | 2 +- .../ws-worker/src/util/log-final-reason.ts | 2 +- packages/ws-worker/src/util/versions.ts | 12 +- packages/ws-worker/src/util/worker-token.ts | 3 +- packages/ws-worker/test/api/destroy.test.ts | 75 +- packages/ws-worker/test/api/execute.test.ts | 136 +- packages/ws-worker/test/api/reasons.test.ts | 14 +- packages/ws-worker/test/api/workloop.test.ts | 21 
+- packages/ws-worker/test/channels/run.test.ts | 20 +- .../test/channels/worker-queue.test.ts | 57 +- .../test/events/run-complete.test.ts | 39 +- .../ws-worker/test/events/run-error.test.ts | 33 +- .../test/events/step-complete.test.ts | 84 +- .../ws-worker/test/events/step-start.test.ts | 53 +- packages/ws-worker/test/lightning.test.ts | 328 +++-- .../test/mock/runtime-engine.test.ts | 259 ++-- packages/ws-worker/test/mock/sockets.test.ts | 2 +- packages/ws-worker/test/reasons.test.ts | 42 +- packages/ws-worker/test/server.test.ts | 2 +- packages/ws-worker/test/util.ts | 38 +- ...test.ts => convert-lightning-plan.test.ts} | 343 +++-- .../test/util/create-run-state.test.ts | 117 +- packages/ws-worker/test/util/throttle.test.ts | 2 +- packages/ws-worker/test/util/versions.test.ts | 19 +- packages/ws-worker/tsconfig.json | 2 +- pnpm-lock.yaml | 133 +- 220 files changed, 6620 insertions(+), 5167 deletions(-) create mode 100644 integration-tests/cli/test/fixtures/plan.json create mode 100644 integration-tests/worker/dummy-repo/node_modules/@openfn/test-adaptor_1.0.0/index.js create mode 100644 integration-tests/worker/dummy-repo/node_modules/@openfn/test-adaptor_1.0.0/package.json delete mode 100644 packages/cli/src/util/index.d.ts create mode 100644 packages/cli/src/util/index.ts delete mode 100644 packages/cli/src/util/load-input.ts create mode 100644 packages/cli/src/util/load-plan.ts delete mode 100644 packages/cli/test/options/ensure/strict.test.ts create mode 100644 packages/cli/test/util.ts delete mode 100644 packages/cli/test/util/load-input.test.ts create mode 100644 packages/cli/test/util/load-plan.test.ts create mode 100644 packages/engine-multi/ava create mode 100644 packages/lexicon/README.md create mode 100644 packages/lexicon/core.d.ts create mode 100644 packages/lexicon/index.d.ts create mode 100644 packages/lexicon/index.js create mode 100644 packages/lexicon/lightning.d.ts create mode 100644 packages/lexicon/lightning.js create mode 100644 
packages/lexicon/package.json create mode 100644 packages/lightning-mock/src/tokens.ts create mode 100644 packages/lightning-mock/test/tokens.test.ts rename packages/runtime/src/execute/{job.ts => step.ts} (69%) create mode 100644 packages/runtime/src/util/default-state.ts create mode 100644 packages/runtime/src/util/index.ts rename packages/runtime/test/execute/{job.test.ts => step.test.ts} (73%) rename packages/runtime/test/util/{regex.ts => regex.test.ts} (100%) create mode 100644 packages/ws-worker/src/util/convert-lightning-plan.ts delete mode 100644 packages/ws-worker/src/util/convert-run.ts rename packages/ws-worker/test/util/{convert-run.test.ts => convert-lightning-plan.test.ts} (55%) diff --git a/build/install-global.js b/build/install-global.js index f1d47a448..19c00c702 100644 --- a/build/install-global.js +++ b/build/install-global.js @@ -14,6 +14,8 @@ const outputPath = process.argv[2] || './dist'; // Package everything up like a local build exec('git branch --show-current', {}, async (err, branchName) => { + console.log('Installing openfnx for branch:', branchName); + console.log(); const files = await findPackages(); const pkgs = mapPackages(files); await ensureOutputPath(outputPath); @@ -37,9 +39,13 @@ exec('git branch --show-current', {}, async (err, branchName) => { ).then(async () => { const cliPath = getLocalTarballName(pkgs['@openfn/cli']); const command = `npm install -g ${path.resolve(outputPath, cliPath)}`; - console.log(command); + //console.log(command); await exec(command); // install the local CLI globally + + console.log(); + console.log('openfnx installed successfully! 
To test:'); + console.log(' openfnx --version'); }); }); diff --git a/integration-tests/cli/test/cli.test.ts b/integration-tests/cli/test/cli.test.ts index 6d9b72d32..05778ff2d 100644 --- a/integration-tests/cli/test/cli.test.ts +++ b/integration-tests/cli/test/cli.test.ts @@ -14,7 +14,7 @@ test.serial('openfn version', async (t) => { test.serial('openfn test', async (t) => { const { stdout } = await run(t.title); t.regex(stdout, /Versions:/); - t.regex(stdout, /Running test job.../); + t.regex(stdout, /Running test workflow/); t.regex(stdout, /Result: 42/); }); diff --git a/integration-tests/cli/test/errors.test.ts b/integration-tests/cli/test/errors.test.ts index 1c3e66a43..410afbe56 100644 --- a/integration-tests/cli/test/errors.test.ts +++ b/integration-tests/cli/test/errors.test.ts @@ -2,19 +2,20 @@ import test from 'ava'; import path from 'node:path'; import run from '../src/run'; import { extractLogs, assertLog } from '../src/util'; +import { stderr } from 'node:process'; const jobsPath = path.resolve('test/fixtures'); // These are all errors that will stop the CLI from even running -test.serial('job not found', async (t) => { +test.serial('expression not found', async (t) => { const { stdout, err } = await run('openfn blah.js --log-json'); t.is(err.code, 1); const stdlogs = extractLogs(stdout); - assertLog(t, stdlogs, /job not found/i); - assertLog(t, stdlogs, /failed to load the job from blah.js/i); + assertLog(t, stdlogs, /expression not found/i); + assertLog(t, stdlogs, /failed to load the expression from blah.js/i); assertLog(t, stdlogs, /critical error: aborting command/i); }); diff --git a/integration-tests/cli/test/execute-workflow.test.ts b/integration-tests/cli/test/execute-workflow.test.ts index 57f53ee17..550128c7c 100644 --- a/integration-tests/cli/test/execute-workflow.test.ts +++ b/integration-tests/cli/test/execute-workflow.test.ts @@ -83,6 +83,15 @@ test.serial( } ); +// Run a new-style execution plan with custom start +test.serial(`openfn 
${jobsPath}/plan.json -i`, async (t) => { + const { err } = await run(t.title); + t.falsy(err); + + const out = getJSON(); + t.deepEqual(out.data.userId, 1); +}); + test.serial(`openfn ${jobsPath}/wf-conditional.json`, async (t) => { const { err } = await run(t.title); t.falsy(err); @@ -124,36 +133,6 @@ test.serial( } ); -test.serial(`openfn ${jobsPath}/wf-strict.json --strict`, async (t) => { - const { err } = await run(t.title); - t.falsy(err); - - const out = getJSON(); - t.deepEqual(out, { - data: { - name: 'jane', - }, - }); -}); - -test.serial(`openfn ${jobsPath}/wf-strict.json --no-strict`, async (t) => { - const { err } = await run(t.title); - t.falsy(err); - - const out = getJSON(); - t.deepEqual(out, { - x: 22, - data: { - name: 'jane', - }, - references: [ - { - name: 'bob', - }, - ], - }); -}); - test.serial( `openfn ${jobsPath}/wf-errors.json -S "{ \\"data\\": { \\"number\\": 2 } }"`, async (t) => { @@ -169,8 +148,8 @@ test.serial( } ); -test.serial( - `openfn ${jobsPath}/wf-errors.json -S "{ \\"data\\": { \\"number\\": 32 } }"`, +test.serial.only( + `openfn ${jobsPath}/wf-errors.json -iS "{ \\"data\\": { \\"number\\": 32 } }"`, async (t) => { const { err } = await run(t.title); t.falsy(err); @@ -189,7 +168,7 @@ test.serial( severity: 'fail', source: 'runtime', }, - jobId: 'start', + stepId: 'start', message: 'abort', type: 'JobError', }, diff --git a/integration-tests/cli/test/fixtures/circular.json b/integration-tests/cli/test/fixtures/circular.json index 2b3077d7a..d209b2a85 100644 --- a/integration-tests/cli/test/fixtures/circular.json +++ b/integration-tests/cli/test/fixtures/circular.json @@ -1,14 +1,16 @@ { - "jobs": [ - { - "id": "a", - "expression": "x", - "next": { "b": true } - }, - { - "id": "b", - "expression": "x", - "next": { "a": true } - } - ] + "workflow": { + "steps": [ + { + "id": "a", + "expression": "x", + "next": { "b": true } + }, + { + "id": "b", + "expression": "x", + "next": { "a": true } + } + ] + } } diff --git 
a/integration-tests/cli/test/fixtures/invalid-config-path.json b/integration-tests/cli/test/fixtures/invalid-config-path.json index e3ed709a7..1a343436e 100644 --- a/integration-tests/cli/test/fixtures/invalid-config-path.json +++ b/integration-tests/cli/test/fixtures/invalid-config-path.json @@ -1,7 +1,10 @@ { - "jobs": [ - { - "configuration": "does-not-exist.json" - } - ] + "workflow": { + "steps": [ + { + "configuration": "does-not-exist.json", + "expression": "." + } + ] + } } diff --git a/integration-tests/cli/test/fixtures/invalid-exp-path.json b/integration-tests/cli/test/fixtures/invalid-exp-path.json index 7cff3440c..6ce8c42ac 100644 --- a/integration-tests/cli/test/fixtures/invalid-exp-path.json +++ b/integration-tests/cli/test/fixtures/invalid-exp-path.json @@ -1,7 +1,9 @@ { - "jobs": [ - { - "expression": "does-not-exist.js" - } - ] + "workflow": { + "steps": [ + { + "expression": "does-not-exist.js" + } + ] + } } diff --git a/integration-tests/cli/test/fixtures/invalid-start.json b/integration-tests/cli/test/fixtures/invalid-start.json index 6fc284da5..13f0f9ee1 100644 --- a/integration-tests/cli/test/fixtures/invalid-start.json +++ b/integration-tests/cli/test/fixtures/invalid-start.json @@ -1,9 +1,13 @@ { - "start": "nope", - "jobs": [ - { - "id": "x", - "expression": "fn((state) => state)" - } - ] + "options": { + "start": "nope" + }, + "workflow": { + "steps": [ + { + "id": "x", + "expression": "fn((state) => state)" + } + ] + } } diff --git a/integration-tests/cli/test/fixtures/invalid-syntax.json b/integration-tests/cli/test/fixtures/invalid-syntax.json index 7028961f2..651f73f93 100644 --- a/integration-tests/cli/test/fixtures/invalid-syntax.json +++ b/integration-tests/cli/test/fixtures/invalid-syntax.json @@ -1,7 +1,9 @@ { - "jobs": [ - { - "expression": "invalid.js" - } - ] + "workflow": { + "steps": [ + { + "expression": "invalid.js" + } + ] + } } diff --git a/integration-tests/cli/test/fixtures/multiple-inputs.json 
b/integration-tests/cli/test/fixtures/multiple-inputs.json index 25c28dd9b..59a33a755 100644 --- a/integration-tests/cli/test/fixtures/multiple-inputs.json +++ b/integration-tests/cli/test/fixtures/multiple-inputs.json @@ -1,18 +1,20 @@ { - "jobs": [ - { - "id": "a", - "expression": "x", - "next": { "b": true, "c": true } - }, - { - "id": "b", - "expression": "x", - "next": { "c": true } - }, - { - "id": "c", - "expression": "x" - } - ] + "workflow": { + "steps": [ + { + "id": "a", + "expression": "x", + "next": { "b": true, "c": true } + }, + { + "id": "b", + "expression": "x", + "next": { "c": true } + }, + { + "id": "c", + "expression": "x" + } + ] + } } diff --git a/integration-tests/cli/test/fixtures/plan.json b/integration-tests/cli/test/fixtures/plan.json new file mode 100644 index 000000000..bc21f7090 --- /dev/null +++ b/integration-tests/cli/test/fixtures/plan.json @@ -0,0 +1,19 @@ +{ + "options": { + "start": "b" + }, + "workflow": { + "steps": [ + { + "id": "a", + "adaptor": "common", + "expression": "fn((state) => { return state; });" + }, + { + "id": "b", + "adaptor": "http", + "expression": "get('https://jsonplaceholder.typicode.com/todos/1')" + } + ] + } +} diff --git a/integration-tests/cli/test/fixtures/wf-array.json b/integration-tests/cli/test/fixtures/wf-array.json index 763d15457..76b9decaf 100644 --- a/integration-tests/cli/test/fixtures/wf-array.json +++ b/integration-tests/cli/test/fixtures/wf-array.json @@ -1,21 +1,23 @@ { - "jobs": [ - { - "id": "a", - "adaptor": "common", - "expression": "fn((state) => { if (!state.data.items) { state.data.items = []; } return state; });", - "next": { "b": true } - }, - { - "id": "b", - "adaptor": "common", - "expression": "fn((state) => { state.data.items.push('b'); return state; });", - "next": { "c": true } - }, - { - "id": "c", - "adaptor": "common", - "expression": "fn((state) => { state.data.items.push('c'); return state; });" - } - ] + "workflow": { + "steps": [ + { + "id": "a", + "adaptor": 
"common", + "expression": "fn((state) => { if (!state.data.items) { state.data.items = []; } return state; });", + "next": { "b": true } + }, + { + "id": "b", + "adaptor": "common", + "expression": "fn((state) => { state.data.items.push('b'); return state; });", + "next": { "c": true } + }, + { + "id": "c", + "adaptor": "common", + "expression": "fn((state) => { state.data.items.push('c'); return state; });" + } + ] + } } diff --git a/integration-tests/cli/test/fixtures/wf-conditional.json b/integration-tests/cli/test/fixtures/wf-conditional.json index 203f7ce41..4aa758ab5 100644 --- a/integration-tests/cli/test/fixtures/wf-conditional.json +++ b/integration-tests/cli/test/fixtures/wf-conditional.json @@ -1,29 +1,31 @@ { - "start": "start", - "jobs": [ - { - "id": "start", - "state": { - "data": { - "number": 1 + "options": { "start": "start" }, + "workflow": { + "steps": [ + { + "id": "start", + "state": { + "data": { + "number": 1 + } + }, + "adaptor": "common", + "expression": "fn((state) => state);", + "next": { + "small": { "condition": "state.data.number < 10" }, + "large": { "condition": "state.data.number >= 10" } } }, - "adaptor": "common", - "expression": "fn((state) => state);", - "next": { - "small": { "condition": "state.data.number < 10" }, - "large": { "condition": "state.data.number >= 10" } + { + "id": "small", + "adaptor": "common", + "expression": "fn((state) => { state.data.result = \"small\"; return state; });" + }, + { + "id": "large", + "adaptor": "common", + "expression": "fn((state) => { state.data.result = \"large\"; return state; });" } - }, - { - "id": "small", - "adaptor": "common", - "expression": "fn((state) => { state.data.result = \"small\"; return state; });" - }, - { - "id": "large", - "adaptor": "common", - "expression": "fn((state) => { state.data.result = \"large\"; return state; });" - } - ] + ] + } } diff --git a/integration-tests/cli/test/fixtures/wf-count.json b/integration-tests/cli/test/fixtures/wf-count.json index 
e20e7a604..5d8e50c71 100644 --- a/integration-tests/cli/test/fixtures/wf-count.json +++ b/integration-tests/cli/test/fixtures/wf-count.json @@ -1,14 +1,16 @@ { - "jobs": [ - { - "adaptor": "common", - "expression": "fn((state) => (state.data.count ? state : { data: { count: 21 } }));", - "next": { "b": true } - }, - { - "id": "b", - "adaptor": "common", - "expression": "fn((state) => { state.data.count = state.data.count * 2; return state; });" - } - ] + "workflow": { + "steps": [ + { + "adaptor": "common", + "expression": "fn((state) => (state.data.count ? state : { data: { count: 21 } }));", + "next": { "b": true } + }, + { + "id": "b", + "adaptor": "common", + "expression": "fn((state) => { state.data.count = state.data.count * 2; return state; });" + } + ] + } } diff --git a/integration-tests/cli/test/fixtures/wf-errors.json b/integration-tests/cli/test/fixtures/wf-errors.json index 6464479d5..354feeab7 100644 --- a/integration-tests/cli/test/fixtures/wf-errors.json +++ b/integration-tests/cli/test/fixtures/wf-errors.json @@ -1,24 +1,26 @@ { - "start": "start", - "jobs": [ - { - "id": "start", - "adaptor": "common", - "expression": "fn((state) => { if (state.data.number > 10) { throw new Error('abort') }; return state; });", - "next": { - "increment": { "condition": "!state.errors" }, - "do nothing": { "condition": "state.errors" } + "options": { "start": "start" }, + "workflow": { + "steps": [ + { + "id": "start", + "adaptor": "common", + "expression": "fn((state) => { if (state.data.number > 10) { throw new Error('abort') }; return state; });", + "next": { + "increment": { "condition": "!state.errors" }, + "do nothing": { "condition": "state.errors" } + } + }, + { + "id": "increment", + "adaptor": "common", + "expression": "fn((state) => { state.data.number += 1; return state; });" + }, + { + "id": "do nothing", + "adaptor": "common", + "expression": "fn((state) => state);" } - }, - { - "id": "increment", - "adaptor": "common", - "expression": "fn((state) => 
{ state.data.number += 1; return state; });" - }, - { - "id": "do nothing", - "adaptor": "common", - "expression": "fn((state) => state);" - } - ] + ] + } } diff --git a/integration-tests/cli/test/fixtures/wf-simple.json b/integration-tests/cli/test/fixtures/wf-simple.json index dfd904068..07caaa188 100644 --- a/integration-tests/cli/test/fixtures/wf-simple.json +++ b/integration-tests/cli/test/fixtures/wf-simple.json @@ -1,8 +1,10 @@ { - "jobs": [ - { - "adaptor": "common", - "expression": "simple.js" - } - ] + "workflow": { + "steps": [ + { + "adaptor": "common", + "expression": "simple.js" + } + ] + } } diff --git a/integration-tests/cli/test/fixtures/wf-strict.json b/integration-tests/cli/test/fixtures/wf-strict.json index 7461a276a..370afd61f 100644 --- a/integration-tests/cli/test/fixtures/wf-strict.json +++ b/integration-tests/cli/test/fixtures/wf-strict.json @@ -1,17 +1,19 @@ { - "jobs": [ - { - "id": "a", - "adaptor": "common", - "expression": "fn((state) => ({ x: 22, data: { name: 'bob' }, references: [] }));", - "next": { - "b": true + "workflow": { + "steps": [ + { + "id": "a", + "adaptor": "common", + "expression": "fn((state) => ({ x: 22, data: { name: 'bob' }, references: [] }));", + "next": { + "b": true + } + }, + { + "id": "b", + "adaptor": "common", + "expression": "fn(state => composeNextState(state, { name: 'jane' }));" } - }, - { - "id": "b", - "adaptor": "common", - "expression": "fn(state => composeNextState(state, { name: 'jane' }));" - } - ] + ] + } } diff --git a/integration-tests/worker/CHANGELOG.md b/integration-tests/worker/CHANGELOG.md index 1d9079065..57ec04d55 100644 --- a/integration-tests/worker/CHANGELOG.md +++ b/integration-tests/worker/CHANGELOG.md @@ -1,5 +1,25 @@ # @openfn/integration-tests-worker +## 1.0.35 + +### Patch Changes + +- Updated dependencies [5f24294] +- Updated dependencies [649ca43] +- Updated dependencies [29bff41] +- Updated dependencies [823b471] +- Updated dependencies [9f6c35d] +- Updated dependencies 
[86dd668] +- Updated dependencies [a97eb26] +- Updated dependencies [ea6fc05] +- Updated dependencies [86dd668] +- Updated dependencies [823b471] +- Updated dependencies [29bff41] + - @openfn/engine-multi@1.0.0 + - @openfn/logger@1.0.0 + - @openfn/lightning-mock@2.0.0 + - @openfn/ws-worker@1.0.0 + ## 1.0.34 ### Patch Changes diff --git a/integration-tests/worker/dummy-repo/node_modules/@openfn/test-adaptor_1.0.0/index.js b/integration-tests/worker/dummy-repo/node_modules/@openfn/test-adaptor_1.0.0/index.js new file mode 100644 index 000000000..cedfb278f --- /dev/null +++ b/integration-tests/worker/dummy-repo/node_modules/@openfn/test-adaptor_1.0.0/index.js @@ -0,0 +1,6 @@ +export const fn = (f) => (s) => f(s); + +export const log = (message) => (s) => { + console.log(message); + return s +} \ No newline at end of file diff --git a/integration-tests/worker/dummy-repo/node_modules/@openfn/test-adaptor_1.0.0/package.json b/integration-tests/worker/dummy-repo/node_modules/@openfn/test-adaptor_1.0.0/package.json new file mode 100644 index 000000000..53acc9736 --- /dev/null +++ b/integration-tests/worker/dummy-repo/node_modules/@openfn/test-adaptor_1.0.0/package.json @@ -0,0 +1,7 @@ +{ + "name": "@openfn/test-adaptor", + "version": "1.0.0", + "type": "module", + "main": "index.js", + "private": true +} diff --git a/integration-tests/worker/dummy-repo/package.json b/integration-tests/worker/dummy-repo/package.json index 782ec5ed7..1945fc510 100644 --- a/integration-tests/worker/dummy-repo/package.json +++ b/integration-tests/worker/dummy-repo/package.json @@ -3,6 +3,8 @@ "private": true, "version": "1.0.0", "dependencies": { - "@openfn/stateful-test_1.0.0": "@npm:@openfn/stateful-test@1.0.0" + "@openfn/language-common_latest": "npm:@openfn/language-common@^1.12.0", + "@openfn/stateful-test_1.0.0": "@npm:@openfn/stateful-test@1.0.0", + "@openfn/test-adaptor_1.0.0": "@npm:@openfn/test-adaptor@1.0.0" } } diff --git a/integration-tests/worker/package.json 
b/integration-tests/worker/package.json index a5544372c..b9be215c8 100644 --- a/integration-tests/worker/package.json +++ b/integration-tests/worker/package.json @@ -1,7 +1,7 @@ { "name": "@openfn/integration-tests-worker", "private": true, - "version": "1.0.34", + "version": "1.0.35", "description": "Lightning WOrker integration tests", "author": "Open Function Group ", "license": "ISC", diff --git a/integration-tests/worker/src/init.ts b/integration-tests/worker/src/init.ts index 01926a22b..9e1c768a3 100644 --- a/integration-tests/worker/src/init.ts +++ b/integration-tests/worker/src/init.ts @@ -1,17 +1,22 @@ import path from 'node:path'; import crypto from 'node:crypto'; -import createLightningServer from '@openfn/lightning-mock'; +import createLightningServer, { toBase64 } from '@openfn/lightning-mock'; import createEngine from '@openfn/engine-multi'; import createWorkerServer from '@openfn/ws-worker'; -import createLogger, { createMockLogger } from '@openfn/logger'; +import { createMockLogger } from '@openfn/logger'; export const randomPort = () => Math.round(2000 + Math.random() * 1000); -export const initLightning = (port = 4000) => { +export const initLightning = (port = 4000, privateKey?: string) => { // TODO the lightning mock right now doesn't use the secret // but we may want to add tests against this - return createLightningServer({ port }); + const opts = { port }; + if (privateKey) { + // @ts-ignore + opts.runPrivateKey = toBase64(privateKey); + } + return createLightningServer(opts); }; export const initWorker = async ( diff --git a/integration-tests/worker/test/autoinstall.test.ts b/integration-tests/worker/test/autoinstall.test.ts index c060a2175..d04b702c8 100644 --- a/integration-tests/worker/test/autoinstall.test.ts +++ b/integration-tests/worker/test/autoinstall.test.ts @@ -1,9 +1,6 @@ -// stress test for autoinstall -// this could evolve into stress testing, benchmarking or artillery generally? 
-// Also I may skip this in CI after the issue is fixed - import test from 'ava'; import path from 'node:path'; +import { generateKeys } from '@openfn/lightning-mock'; import { initLightning, initWorker } from '../src/init'; import { createRun, createJob } from '../src/factories'; @@ -33,13 +30,20 @@ const run = async (attempt) => { }; test.before(async () => { + const keys = await generateKeys(); const lightningPort = 4321; - lightning = initLightning(lightningPort); + lightning = initLightning(lightningPort, keys.private); - ({ worker } = await initWorker(lightningPort, { - repoDir: path.resolve('tmp/repo/autoinstall'), - })); + ({ worker } = await initWorker( + lightningPort, + { + repoDir: path.resolve('tmp/repo/autoinstall'), + }, + { + runPublicKey: keys.public, + } + )); }); test.after(async () => { diff --git a/integration-tests/worker/test/benchmark.test.ts b/integration-tests/worker/test/benchmark.test.ts index 9c117e635..83990ccef 100644 --- a/integration-tests/worker/test/benchmark.test.ts +++ b/integration-tests/worker/test/benchmark.test.ts @@ -89,7 +89,7 @@ test.serial.skip('run 100 attempts', async (t) => { } lightning.on('step:complete', (evt) => { - // May want to disable this but it's nice feedback + // May want to disable this but it's nice feedback t.log('Completed ', evt.runId); if (evt.payload.reason !== 'success') { diff --git a/integration-tests/worker/test/integration.test.ts b/integration-tests/worker/test/integration.test.ts index 399968c81..3781a3bdf 100644 --- a/integration-tests/worker/test/integration.test.ts +++ b/integration-tests/worker/test/integration.test.ts @@ -2,6 +2,7 @@ import test from 'ava'; import path from 'node:path'; import crypto from 'node:crypto'; import Koa from 'koa'; +import { generateKeys } from '@openfn/lightning-mock'; import { initLightning, initWorker, randomPort } from '../src/init'; @@ -12,13 +13,21 @@ let engineLogger; let lightningPort; test.before(async () => { + const keys = await generateKeys(); 
lightningPort = randomPort(); - lightning = initLightning(lightningPort); - ({ worker, engine, engineLogger } = await initWorker(lightningPort, { + lightning = initLightning(lightningPort, keys.private); + + const engineArgs = { maxWorkers: 1, - purge: false, repoDir: path.resolve('tmp/repo/integration'), - })); + }; + const workerArgs = { runPublicKey: keys.public }; + + ({ worker, engine, engineLogger } = await initWorker( + lightningPort, + engineArgs, + workerArgs + )); }); test.afterEach(() => { @@ -30,7 +39,15 @@ test.after(async () => { await worker.destroy(); }); -test('should run a simple job with no compilation or adaptor', (t) => { +const createDummyWorker = () => { + const engineArgs = { + repoDir: path.resolve('./dummy-repo'), + maxWorkers: 1, + }; + return initWorker(lightningPort, engineArgs); +}; + +test.serial('should run a simple job with no compilation or adaptor', (t) => { return new Promise(async (done) => { lightning.once('run:complete', (evt) => { // This will fetch the final dataclip from the attempt @@ -53,7 +70,7 @@ test('should run a simple job with no compilation or adaptor', (t) => { }); }); -test('run a job with autoinstall of common', (t) => { +test.serial('run a job with autoinstall of common', (t) => { return new Promise(async (done) => { let autoinstallEvent; @@ -97,7 +114,7 @@ test('run a job with autoinstall of common', (t) => { }); // this depends on prior test! 
-test('run a job which does NOT autoinstall common', (t) => { +test.serial('run a job which does NOT autoinstall common', (t) => { return new Promise(async (done) => { lightning.once('run:complete', () => { try { @@ -134,40 +151,7 @@ test('run a job which does NOT autoinstall common', (t) => { }); }); -test("Don't send job logs to stdout", (t) => { - return new Promise(async (done) => { - const attempt = { - id: crypto.randomUUID(), - jobs: [ - { - adaptor: '@openfn/language-common@latest', - body: 'fn((s) => { console.log("@@@"); return s })', - }, - ], - }; - - lightning.once('run:complete', () => { - const jsonLogs = engineLogger._history; - - // The engine logger shouldn't print out any job logs - const jobLog = jsonLogs.find((l) => l.name === 'JOB'); - t.falsy(jobLog); - const jobLog2 = jsonLogs.find((l) => l.message[0] === '@@@'); - t.falsy(jobLog2); - - // But it SHOULD log engine stuff - const runtimeLog = jsonLogs.find( - (l) => l.name === 'R/T' && l.message[0].match(/completed job/i) - ); - t.truthy(runtimeLog); - done(); - }); - - lightning.enqueueRun(attempt); - }); -}); - -test('run a job with initial state (with data)', (t) => { +test.serial('run a job with initial state (with data)', (t) => { return new Promise(async (done) => { const attempt = { id: crypto.randomUUID(), @@ -184,7 +168,8 @@ test('run a job with initial state (with data)', (t) => { lightning.addDataclip('s1', initialState); - lightning.once('run:complete', () => { + lightning.once('run:complete', (evt) => { + t.log(evt.payload); const result = lightning.getResult(attempt.id); t.deepEqual(result, { ...initialState, @@ -201,7 +186,7 @@ test('run a job with initial state (with data)', (t) => { }); }); -test('run a job with initial state (no top level keys)', (t) => { +test.serial('run a job with initial state (no top level keys)', (t) => { return new Promise(async (done) => { const attempt = { id: crypto.randomUUID(), @@ -248,7 +233,6 @@ test.skip('run a job with credentials', (t) => { 
const app = new Koa(); app.use(async (ctx, next) => { - console.log('GET!'); // TODO check basic credential ctx.body = '{ message: "ok" }'; ctx.response.headers['Content-Type'] = 'application/json'; @@ -304,7 +288,36 @@ test.skip('run a job with credentials', (t) => { }); }); -test('blacklist a non-openfn adaptor', (t) => { +test.serial('run a job with bad credentials', (t) => { + return new Promise(async (done) => { + const attempt = { + id: crypto.randomUUID(), + dataclip_id: 's1', + jobs: [ + { + adaptor: '@openfn/language-common@latest', + body: 'fn((s) => s)', + credential: 'zzz', + }, + ], + }; + + const initialState = { name: 'Professor X' }; + + lightning.addDataclip('s1', initialState); + + lightning.once('run:complete', ({ payload }) => { + t.is(payload.reason, 'exception'); + t.is(payload.error_type, 'CredentialLoadError'); + t.regex(payload.error_message, /Failed to load credential zzz for step/); + done(); + }); + + lightning.enqueueRun(attempt); + }); +}); + +test.serial('blacklist a non-openfn adaptor', (t) => { return new Promise(async (done) => { const attempt = { id: crypto.randomUUID(), @@ -347,13 +360,11 @@ test.skip('a timeout error should still call step-complete', (t) => { }); lightning.once('step:complete', (event) => { - console.log(event); t.is(event.payload.reason, 'kill'); t.is(event.payload.error_type, 'TimeoutError'); }); lightning.once('run:complete', () => { - console.log('DONE!'); done(); }); @@ -361,7 +372,7 @@ test.skip('a timeout error should still call step-complete', (t) => { }); }); -test('an OOM error should still call step-complete', (t) => { +test.serial('an OOM error should still call step-complete', (t) => { return new Promise(async (done) => { const attempt = { id: crypto.randomUUID(), @@ -392,7 +403,7 @@ test('an OOM error should still call step-complete', (t) => { }); }); -// test('run a job with complex behaviours (initial state, branching)', (t) => { +// test.serial('run a job with complex behaviours (initial state, 
branching)', (t) => { // const attempt = { // id: 'a1', // initialState: 's1 @@ -426,58 +437,174 @@ test('an OOM error should still call step-complete', (t) => { // }); // }); // }); +test.serial("Don't send job logs to stdout", (t) => { + return new Promise(async (done) => { + const attempt = { + id: crypto.randomUUID(), + jobs: [ + { + adaptor: '@openfn/language-common@latest', + body: 'fn((s) => { console.log("@@@"); return s })', + }, + ], + }; + + lightning.once('run:complete', () => { + const jsonLogs = engineLogger._history; + // The engine logger shouldn't print out any job logs + const jobLog = jsonLogs.find((l) => l.name === 'JOB'); + t.falsy(jobLog); + const jobLog2 = jsonLogs.find((l) => l.message[0] === '@@@'); + t.falsy(jobLog2); + + // But it SHOULD log engine stuff + const runtimeLog = jsonLogs.find( + (l) => l.name === 'engine' && l.message[0].match(/complete workflow/i) + ); + t.truthy(runtimeLog); + done(); + }); -// TODO this test is a bit different now -// I think it's worth keeping -test('stateful adaptor should create a new client for each attempt', (t) => { + lightning.enqueueRun(attempt); + }); +}); + +test.serial("Don't send adaptor logs to stdout", (t) => { return new Promise(async (done) => { - // We want to create our own special worker here + // We have to create a new worker with a different repo for this one await worker.destroy(); + ({ worker, engineLogger } = await createDummyWorker()); - const attempt1 = { + const message = 've have been expecting you meester bond'; + const attempt = { id: crypto.randomUUID(), jobs: [ { - adaptor: '@openfn/stateful-test@1.0.0', - // manual import shouldn't be needed but its not important enough to fight over - body: `import { fn, threadId, clientId } from '@openfn/stateful-test'; - fn(() => { - return { threadId, clientId } - })`, + adaptor: '@openfn/test-adaptor@1.0.0', + body: `import { log } from '@openfn/test-adaptor'; log("${message}")`, }, ], }; - const attempt2 = { - ...attempt1, - id: 
crypto.randomUUID(), - }; - let results = {}; - lightning.on('run:complete', (evt) => { - const id = evt.runId; - results[id] = lightning.getResult(id); + lightning.once('run:complete', () => { + const jsonLogs = engineLogger._history; + // The engine logger shouldn't print out any adaptor logs + const jobLog = jsonLogs.find((l) => l.name === 'ADA'); + t.falsy(jobLog); + const jobLog2 = jsonLogs.find((l) => l.message[0] === message); + t.falsy(jobLog2); + + // But it SHOULD log engine stuff + const runtimeLog = jsonLogs.find( + (l) => l.name === 'engine' && l.message[0].match(/complete workflow/i) + ); + t.truthy(runtimeLog); + done(); + }); - if (id === attempt2.id) { - const one = results[attempt1.id]; - const two = results[attempt2.id]; + lightning.enqueueRun(attempt); + }); +}); - // The two attempts should run in different threads - t.not(one.threadId, two.threadId); - t.not(one.clientId, two.clientId); +test.serial( + 'stateful adaptor should create a new client for each attempt', + (t) => { + return new Promise(async (done) => { + // We want to create our own special worker here + await worker.destroy(); + ({ worker, engineLogger } = await createDummyWorker()); + + const attempt1 = { + id: crypto.randomUUID(), + jobs: [ + { + adaptor: '@openfn/stateful-test@1.0.0', + // manual import shouldn't be needed but its not important enough to fight over + body: `import { fn, threadId, clientId } from '@openfn/stateful-test'; + fn(() => { + return { threadId, clientId } + })`, + }, + ], + }; + const attempt2 = { + ...attempt1, + id: crypto.randomUUID(), + }; + let results = {}; + + lightning.on('run:complete', (evt) => { + const id = evt.runId; + results[id] = lightning.getResult(id); + + if (id === attempt2.id) { + const one = results[attempt1.id]; + const two = results[attempt2.id]; + + // The two attempts should run in different threads + t.not(one.threadId, two.threadId); + t.not(one.clientId, two.clientId); + + done(); + } + }); - done(); - } + 
lightning.enqueueRun(attempt1); + lightning.enqueueRun(attempt2); }); + } +); + +test.serial('worker should exit if it has an invalid key', (t) => { + return new Promise(async (done) => { + if (!worker.destroyed) { + await worker.destroy(); + } + + // generate a new, invalid, public key + const keys = await generateKeys(); + + ({ worker } = await initWorker( + lightningPort, + { + maxWorkers: 1, + repoDir: path.resolve('tmp/repo/integration'), + }, + { + runPublicKey: keys.public, + } + )); - const engineArgs = { - repoDir: path.resolve('./dummy-repo'), - maxWorkers: 1, - purge: false, + const run = { + id: crypto.randomUUID(), + jobs: [ + { + adaptor: '@openfn/language-http@latest', + body: `fn((s) => s)`, + }, + ], }; - await initWorker(lightningPort, engineArgs); - lightning.enqueueRun(attempt1); - lightning.enqueueRun(attempt2); + // This should NOT run because the worker should + // fail to verify the token and destroy itself + lightning.once('run:start', () => { + t.fail('invalid run was started'); + done(); + }); + lightning.once('run:complete', () => { + t.fail('invalid run was completed'); + done(); + }); + + // TODO this run will, at the moment, be LOST to Lightning + lightning.enqueueRun(run); + + t.false(worker.destroyed); + setTimeout(() => { + // Ensure that the worker is destroyed after a brief interval + t.true(worker.destroyed); + done(); + }, 500); }); }); diff --git a/integration-tests/worker/test/runs.test.ts b/integration-tests/worker/test/runs.test.ts index 4a05ef5fc..6e9d75333 100644 --- a/integration-tests/worker/test/runs.test.ts +++ b/integration-tests/worker/test/runs.test.ts @@ -1,5 +1,6 @@ import test from 'ava'; import path from 'node:path'; +import { generateKeys } from '@openfn/lightning-mock'; import { createRun, @@ -13,13 +14,20 @@ let lightning; let worker; test.before(async () => { + const keys = await generateKeys(); const lightningPort = 4321; - lightning = initLightning(lightningPort,
keys.private); - ({ worker } = await initWorker(lightningPort, { - repoDir: path.resolve('tmp/repo/attempts'), - })); + ({ worker } = await initWorker( + lightningPort, + { + repoDir: path.resolve('tmp/repo/attempts'), + }, + { + runPublicKey: keys.public, + } + )); }); test.afterEach(async () => { diff --git a/packages/cli/CHANGELOG.md b/packages/cli/CHANGELOG.md index 1b3cb9dc5..d63fb5564 100644 --- a/packages/cli/CHANGELOG.md +++ b/packages/cli/CHANGELOG.md @@ -1,5 +1,28 @@ # @openfn/cli +## 1.0.0 + +### Major Changes + +- 86dd668: The 1.0 Release of the CLI updates the language and input of the CLI to match the nomenclature of Lightning. + + See the readme for details of the new terminology. + + - Add support for execution plans + - Deprecate old workflow format (old workflows are supported and will be automatically converted into the new "execution plans") + - Update terminology across the codebase and docs + - Remove strict mode + +- 101f5a1: Autoinstall adaptors by default (pass `--no-autoinstall` to disable) + +### Patch Changes + +- Updated dependencies + - @openfn/logger@1.0.0 + - @openfn/deploy@0.4.2 + - @openfn/runtime@1.0.0 + - @openfn/compiler@0.0.40 + ## 0.4.16 ### Patch Changes diff --git a/packages/cli/README.md b/packages/cli/README.md index ca505dd0b..cd4435058 100644 --- a/packages/cli/README.md +++ b/packages/cli/README.md @@ -15,6 +15,7 @@ The CLI includes: - [Installation](#installation) - [Updating](#updating) +- [Terminology](#terminology) - [Migrating from devtools](#migrating-from-devtools) - [Basic Usage](#basic-usage) - [Advanced Usage](#advanced-usage) @@ -71,27 +72,36 @@ npm uninstall -g @openfn/cli And then re-installing. 
-## Migrating from devtools +## Terminology -If you're coming to the CLI from the old openfn devtools, here are a couple of key points to be aware of: +The CLI (and the wider OpenFn stack) has some very particular terminology: -- The CLI has a shorter, sleeker syntax, so your command should be much shorter -- The CLI will automatically install adaptors for you (with full version control) +- An **Expression** is a string of Javascript (or Javascript-like code) written to be run in the CLI or Lightning. +- A **Job** is an expression plus some metadata required to run it - typically an adaptor and credentials. + The terms Job and Expression are often used interchangeably. +- A **Workflow** is a series of steps to be executed in sequence. Steps are usually Jobs (and so job and step are often used + interchangeably), but can be Triggers. +- An **Execution Plan** is a Workflow plus some options which inform how it should be executed (ie, start node, timeout). + The term "Execution plan" is mostly used internally and not exposed to users, and is usually interchangeable with Workflow. + +Note that an expression is not generally portable (ie, cannot run in other environments) unless it is compiled. +A compiled expression has imports and exports and, so long as packages are available, can run in a simple +JavaScript runtime. ## Basic Usage -You're probably here to run jobs (expressions) or workflows, which the CLI makes easy: +You're probably here to run Workflows (or individual jobs), which the CLI makes easy: ``` openfn path/to/workflow.json -openfn path/to/job.js -ia adaptor-name +openfn path/to/job.js -a adaptor-name ``` If running a single job, you MUST specify which adaptor to use. -Pass the `-i` flag to auto-install any required adaptors (it's safe to do this redundantly, although the run will be a little slower). +If the requested adaptor (or a matching version) is not already installed, it will be installed automatically.
To disable this behaviour, pass the `--no-autoinstall` flag. -When the finished, the CLI will write the resulting state to disk. By default the CLI will create an `output.json` next to the job file. You can pass a path to output by passing `-o path/to/output.json` and state by adding `-s path/to/state.json`. You can use `-S` and `-O` to pass state through stdin and return the output through stdout. +When finished, the CLI will write the resulting state to disk. By default the CLI will create an `output.json` next to the job file. You can pass a path to output by passing `-o path/to/output.json` and state by adding `-s path/to/state.json`. You can use `-S` and `-O` to pass state through stdin and return the output through stdout. The CLI maintains a repo for auto-installed adaptors. Run `openfn repo list` to see where the repo is, and what's in it. Set the `OPENFN_REPO_DIR` env var to specify the repo folder. When autoinstalling, the CLI will check to see if a matching version is found in the repo. `openfn repo clean` will remove all adaptors from the repo. The repo also includes any documentation and metadata built with the CLI. @@ -103,14 +113,16 @@ You can pass `--log info` to get more feedback about what's happening, or `--log ## Advanced Usage -The CLI has a number of commands (the first argument after openfn) +The CLI has a number of commands (the first argument after `openfn`): - execute - run a job -- compile - compile a job to a .js file +- compile - compile a job to a .js file (prints to stdout by default) - docs - show documentation for an adaptor function - repo - manage the repo of installed modules - docgen - generate JSON documentation for an adaptor based on its typescript +For example, `openfn compile job.js -a common` will compile the code at `job.js` with the common adaptor. + If no command is specified, execute will run. To get more information about a command, including usage examples, run `openfn help`, ie, `openfn compile help`. 
@@ -253,38 +265,43 @@ Pass `--log-json` to the CLI to do this. You can also set the OPENFN_LOG_JSON en ## Workflows -As of v0.0.35 the CLI supports running workflows as well as jobs. - -A workflow is in execution plan for running several jobs in a sequence. It is defined as a JSON structure. +A workflow is an execution plan for running several jobs in a sequence. It is defined as a JSON structure. To see an example workflow, run the test command with `openfn test`. -A workflow has a structure like this (better documentation is coming soon): +A workflow has a structure like this: ```json { - "start": "a", // optionally specify the start node (defaults to jobs[0]) - "jobs": [ - { - "id": "a", - "expression": "fn((state) => state)", // code or a path - "adaptor": "@openfn/language-common@1.75", // specifiy the adaptor to use (version optional) - "data": {}, // optionally pre-populate the data object (this will be overriden by keys in in previous state) - "configuration": {}, // Use this to pass credentials - "next": { - // This object defines which jobs to call next - // All edges returning true will run - // If there are no next edges, the workflow will end - "b": true, - "c": { - "condition": "!state.error" // Note that this is an expression, not a function + "workflow": { + "name": "my-workflow", // human readable name used in logging + "steps": [ + { + "name": "a", // human readable name used in logging + "expression": "fn((state) => state)", // code or a path to an expression.js file + "adaptor": "@openfn/language-common@1.7.5", // specify the adaptor to use (version optional) + "data": {}, // optionally pre-populate the data object (this will be overridden by keys in previous state) + "configuration": {}, // Use this to pass credentials + "next": { + // This object defines which jobs to call next + // All edges returning true will run + // If there are no next edges, the workflow will end + "b": true, + "c": { + "condition": "!state.error" // Note that this
is a strict Javascript expression, not a function, and has no adaptor support + } } } - } - ] + ] + }, + "options": { + "start": "a" // optionally specify the start node (defaults to steps[0]) + } } ``` +See `packages/lexicon` for type definitions (the workflow format is covered by the `ExecutionPlan` type). + ## Compilation The CLI will compile your job code into regular Javascript. It does a number of things to make your code robust and portable: @@ -298,8 +315,6 @@ The result of this is a lightweight, modern JS module. It can be executed in any The CLI uses openfn's own runtime to execute jobs in a safe environment. -All jobs which work against `@openfn/core` will work in the new CLI and runtime environment (note: although this is a work in progress and we are actively looking for help to test this!). - If you want to see how the compiler is changing your job, run `openfn compile path/to/job -a ` to return the compiled code to stdout. Add `-o path/to/output.js` to save the result to disk. ## Contributing @@ -355,10 +370,10 @@ export OPENFN_ADAPTORS_REPO=~/repo/openfn/adaptors ### Contributing changes -Open a PR at https://github.com/openfn/kit. Include a changeset and a description of your change. - -See the root readme for more details about changests, +Include a changeset and a description of your change. Run this command and follow the interactive prompt (it's really easy, promise!) ``` - +pnpm changeset ``` + +Commit the changeset files and open a PR at https://github.com/openfn/kit.
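The README changes above state that the deprecated `jobs`-array format is still supported and is automatically converted into the new execution-plan shape. As a rough sketch of that mapping only — `toExecutionPlan` and `legacy` are hypothetical names, and the CLI's real converter handles more fields (credentials, per-step state, edge conditions) — the transformation looks like this:

```javascript
// Hypothetical sketch: map the deprecated workflow format
// ({ start, jobs }) onto the new execution-plan shape
// ({ options, workflow: { steps } }). Illustrative only; not the
// CLI's actual implementation.
const toExecutionPlan = (old) => ({
  options: old.start ? { start: old.start } : {},
  workflow: {
    steps: old.jobs ?? [],
  },
});

// A workflow in the old format, as used by the pre-1.0 CLI
const legacy = {
  start: 'a',
  jobs: [
    { id: 'a', expression: 'fn((state) => state)', next: { b: true } },
    { id: 'b', expression: 'fn((state) => state)' },
  ],
};

const plan = toExecutionPlan(legacy);
console.log(JSON.stringify(plan.options)); // {"start":"a"}
console.log(plan.workflow.steps.length); // 2
```

The fixture diffs earlier in this PR (`wf-count.json`, `wf-conditional.json`, etc.) show exactly this reshaping applied by hand: `jobs` becomes `workflow.steps`, and top-level `start` moves under `options`.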
diff --git a/packages/cli/package.json b/packages/cli/package.json index 235678dd5..1b1f72ad0 100644 --- a/packages/cli/package.json +++ b/packages/cli/package.json @@ -1,6 +1,6 @@ { "name": "@openfn/cli", - "version": "0.4.16", + "version": "1.0.0", "description": "CLI devtools for the openfn toolchain.", "engines": { "node": ">=18", @@ -34,6 +34,7 @@ "license": "ISC", "devDependencies": { "@openfn/language-common": "2.0.0-rc3", + "@openfn/lexicon": "workspace:^", "@types/mock-fs": "^4.13.1", "@types/node": "^18.15.13", "@types/rimraf": "^3.0.2", diff --git a/packages/cli/src/commands.ts b/packages/cli/src/commands.ts index 328f0ce05..ecb7ed6f6 100644 --- a/packages/cli/src/commands.ts +++ b/packages/cli/src/commands.ts @@ -11,7 +11,7 @@ import { clean, install, pwd, list } from './repo/handler'; import createLogger, { CLI, Logger } from './util/logger'; import mapAdaptorsToMonorepo, { - MapAdaptorsToMonorepoOptions, + validateMonoRepo, } from './util/map-adaptors-to-monorepo'; import printVersions from './util/print-versions'; @@ -43,7 +43,8 @@ const handlers = { ['repo-install']: install, ['repo-pwd']: pwd, ['repo-list']: list, - version: async (opts: Opts, logger: Logger) => printVersions(logger, opts), + version: async (opts: Opts, logger: Logger) => + printVersions(logger, opts, true), }; // Top level command parser @@ -56,18 +57,25 @@ const parse = async (options: Opts, log?: Logger) => { await printVersions(logger, options); } - if (options.monorepoPath) { - if (options.monorepoPath === 'ERR') { + const { monorepoPath } = options; + if (monorepoPath) { + // TODO how does this occur? 
+ if (monorepoPath === 'ERR') { logger.error( 'ERROR: --use-adaptors-monorepo was passed, but OPENFN_ADAPTORS_REPO env var is undefined' ); logger.error('Set OPENFN_ADAPTORS_REPO to a path pointing to the repo'); process.exit(9); // invalid argument } - await mapAdaptorsToMonorepo( - options as MapAdaptorsToMonorepoOptions, + + await validateMonoRepo(monorepoPath, logger); + logger.success(`Loading adaptors from monorepo at ${monorepoPath}`); + + options.adaptors = mapAdaptorsToMonorepo( + monorepoPath, + options.adaptors, logger - ); + ) as string[]; } // TODO it would be nice to do this in the repoDir option, but diff --git a/packages/cli/src/compile/command.ts b/packages/cli/src/compile/command.ts index 8b8320b41..5b9957b45 100644 --- a/packages/cli/src/compile/command.ts +++ b/packages/cli/src/compile/command.ts @@ -9,8 +9,7 @@ export type CompileOptions = Pick< | 'command' | 'expandAdaptors' | 'ignoreImports' - | 'jobPath' - | 'job' + | 'expressionPath' | 'logJson' | 'log' | 'outputPath' @@ -18,7 +17,6 @@ export type CompileOptions = Pick< | 'repoDir' | 'path' | 'useAdaptorsMonorepo' - | 'workflow' > & { repoDir?: string; }; diff --git a/packages/cli/src/compile/compile.ts b/packages/cli/src/compile/compile.ts index d08b34eda..a2f8285f6 100644 --- a/packages/cli/src/compile/compile.ts +++ b/packages/cli/src/compile/compile.ts @@ -1,27 +1,28 @@ import compile, { preloadAdaptorExports, Options } from '@openfn/compiler'; -import { getModulePath, ExecutionPlan } from '@openfn/runtime'; +import { getModulePath } from '@openfn/runtime'; +import type { ExecutionPlan, Job } from '@openfn/lexicon'; + import createLogger, { COMPILER, Logger } from '../util/logger'; import abort from '../util/abort'; import type { CompileOptions } from './command'; // Load and compile a job from a file, then return the result // This is designed to be re-used in different CLI steps -export default async (opts: CompileOptions, log: Logger) => { - log.debug('Compiling...'); - let job; - if 
(opts.workflow) { - // Note that the workflow will be loaded into an object by this point - job = compileWorkflow(opts.workflow as ExecutionPlan, opts, log); - } else { - job = await compileJob((opts.job || opts.jobPath) as string, opts, log); +export default async ( + planOrPath: ExecutionPlan | string, + opts: CompileOptions, + log: Logger +) => { + if (typeof planOrPath === 'string') { + const result = await compileJob(planOrPath as string, opts, log); + log.success(`Compiled expression from ${opts.expressionPath}`); + return result; } - if (opts.jobPath) { - log.success(`Compiled from ${opts.jobPath}`); - } else { - log.success('Compilation complete'); - } - return job; + const compiledPlan = compileWorkflow(planOrPath as ExecutionPlan, opts, log); + log.success('Compiled all expressions in workflow'); + + return compiledPlan; }; const compileJob = async ( @@ -29,7 +30,7 @@ opts: CompileOptions, log: Logger, jobName?: string -) => { +): Promise<string> => { try { const compilerOptions: Options = await loadTransformOptions(opts, log); return compile(job, compilerOptions); @@ -40,16 +41,19 @@ e, 'Check the syntax of the job expression:\n\n' + job ); + // This will never actually execute + return ''; } }; // Find every expression in the job and run the compiler on it const compileWorkflow = async ( - workflow: ExecutionPlan, + plan: ExecutionPlan, opts: CompileOptions, log: Logger ) => { - for (const job of workflow.jobs) { + for (const step of plan.workflow.steps) { + const job = step as Job; const jobOpts = { ...opts, }; @@ -65,7 +69,7 @@ ); } } - return workflow; + return plan; }; // TODO this is a bit of a temporary solution diff --git a/packages/cli/src/compile/handler.ts b/packages/cli/src/compile/handler.ts index 2435ccd19..ac19752fb 100644 --- a/packages/cli/src/compile/handler.ts +++ b/packages/cli/src/compile/handler.ts @@ -3,33 +3,23 @@ import type { CompileOptions } from 
'./command'; import type { Logger } from '../util/logger'; import compile from './compile'; -import loadInput from '../util/load-input'; -import expandAdaptors from '../util/expand-adaptors'; +import loadPlan from '../util/load-plan'; import assertPath from '../util/assert-path'; -import mapAdaptorsToMonorepo, { - MapAdaptorsToMonorepoOptions, -} from '../util/map-adaptors-to-monorepo'; const compileHandler = async (options: CompileOptions, logger: Logger) => { assertPath(options.path); - await loadInput(options, logger); - if (options.workflow) { - // expand shorthand adaptors in the workflow jobs - expandAdaptors(options); - await mapAdaptorsToMonorepo( - options as MapAdaptorsToMonorepoOptions, - logger - ); + let result; + if (options.expressionPath) { + result = await compile(options.expressionPath, options, logger); + } else { + const plan = await loadPlan(options, logger); + result = await compile(plan, options, logger); + result = JSON.stringify(result, null, 2); } - let result = await compile(options, logger); - if (options.workflow) { - result = JSON.stringify(result); - } if (options.outputStdout) { - logger.success('Compiled code:'); - logger.success('\n' + result); + logger.success('Result:\n\n' + result); } else { await writeFile(options.outputPath!, result as string); logger.success(`Compiled to ${options.outputPath}`); diff --git a/packages/cli/src/docs/handler.ts b/packages/cli/src/docs/handler.ts index 5be8d0792..a60acd5cc 100644 --- a/packages/cli/src/docs/handler.ts +++ b/packages/cli/src/docs/handler.ts @@ -60,7 +60,7 @@ const docsHandler = async ( // does the adaptor have a version? 
If not, fetch the latest // (docgen won't do this for us) - const { adaptors } = expandAdaptors({ adaptors: [adaptor] }); + const adaptors = expandAdaptors([adaptor]) as string[]; const [adaptorName] = adaptors!; let { name, version } = getNameAndVersion(adaptorName); if (!version) { diff --git a/packages/cli/src/execute/command.ts b/packages/cli/src/execute/command.ts index 6183b82a5..a18cdd40d 100644 --- a/packages/cli/src/execute/command.ts +++ b/packages/cli/src/execute/command.ts @@ -14,7 +14,7 @@ export type ExecuteOptions = Required< | 'expandAdaptors' | 'immutable' | 'ignoreImports' - | 'jobPath' + | 'expressionPath' | 'log' | 'logJson' | 'outputPath' @@ -26,11 +26,9 @@ | 'statePath' | 'stateStdin' | 'sanitize' - | 'strict' | 'timeout' | 'useAdaptorsMonorepo' | 'workflowPath' - | 'workflow' > > & Pick; @@ -54,17 +52,14 @@ o.start, o.statePath, o.stateStdin, - o.strict, // order important - o.strictOutput, o.timeout, o.useAdaptorsMonorepo, ]; const executeCommand: yargs.CommandModule = { command: 'execute [path]', - describe: `Run an openfn job or workflow. Get more help by running openfn help. - \nExecute will run a job/workflow at the path and write the output state to disk (to ./state.json unless otherwise specified) - \nBy default only state.data will be returned fron a job. Include --no-strict to write the entire state object. + describe: `Run an openfn expression or workflow. Get more help by running openfn help. + \nExecute will run an expression/workflow at the path and write the output state to disk (to ./state.json unless otherwise specified) + \nRemember to include the adaptor name with -a. 
Auto install adaptors with the -i flag.`, aliases: ['$0'], handler: ensure('execute', options), @@ -89,7 +84,7 @@ const executeCommand: yargs.CommandModule = { ) .example( 'openfn compile job.js -a http', - 'Compile job.js with the http adaptor and print the code to stdout' + 'Compile the expression at job.js with the http adaptor and print the code to stdout' ), }; diff --git a/packages/cli/src/execute/execute.ts b/packages/cli/src/execute/execute.ts index 487924ca5..9b4b4a576 100644 --- a/packages/cli/src/execute/execute.ts +++ b/packages/cli/src/execute/execute.ts @@ -1,5 +1,7 @@ import run, { getNameAndVersion } from '@openfn/runtime'; -import type { ModuleInfo, ModuleInfoMap, ExecutionPlan } from '@openfn/runtime'; +import type { ExecutionPlan, Job } from '@openfn/lexicon'; +import type { ModuleInfo, ModuleInfoMap } from '@openfn/runtime'; + import createLogger, { RUNTIME, JOB } from '../util/logger'; import { ExecuteOptions } from './command'; @@ -8,21 +10,18 @@ type ExtendedModuleInfo = ModuleInfo & { }; export default async ( - input: string | ExecutionPlan, - state: any, - opts: Omit + plan: ExecutionPlan, + input: any, + opts: ExecuteOptions ): Promise => { try { - const result = await run(input, state, { - strict: opts.strict, - start: opts.start, - timeout: opts.timeout, + const result = await run(plan, input, { immutableState: opts.immutable, logger: createLogger(RUNTIME, opts), jobLogger: createLogger(JOB, opts), linker: { repo: opts.repoDir, - modules: parseAdaptors(opts), + modules: parseAdaptors(plan), }, }); return result; @@ -34,9 +33,7 @@ export default async ( }; // TODO we should throw if the adaptor strings are invalid for any reason -export function parseAdaptors( - opts: Partial> -) { +export function parseAdaptors(plan: ExecutionPlan) { const extractInfo = (specifier: string) => { const [module, path] = specifier.split('='); const { name, version } = getNameAndVersion(module); @@ -54,24 +51,15 @@ export function parseAdaptors( const 
adaptors: ModuleInfoMap = {}; - if (opts.adaptors) { - opts.adaptors.reduce((obj, exp) => { - const { name, ...maybeVersionAndPath } = extractInfo(exp); - obj[name] = { ...maybeVersionAndPath }; - return obj; - }, adaptors); - } - - if (opts.workflow) { - // TODO what if there are different versions of the same adaptor? - // This structure can't handle it - we'd need to build it for every job - Object.values(opts.workflow.jobs).forEach((job) => { - if (job.adaptor) { - const { name, ...maybeVersionAndPath } = extractInfo(job.adaptor); - adaptors[name] = { ...maybeVersionAndPath }; - } - }); - } + // TODO what if there are different versions of the same adaptor? + // This structure can't handle it - we'd need to build it for every job + Object.values(plan.workflow.steps).forEach((step) => { + const job = step as Job; + if (job.adaptor) { + const { name, ...maybeVersionAndPath } = extractInfo(job.adaptor); + adaptors[name] = maybeVersionAndPath; + } + }); return adaptors; } diff --git a/packages/cli/src/execute/get-autoinstall-targets.ts b/packages/cli/src/execute/get-autoinstall-targets.ts index eead48820..677f41f50 100644 --- a/packages/cli/src/execute/get-autoinstall-targets.ts +++ b/packages/cli/src/execute/get-autoinstall-targets.ts @@ -1,23 +1,15 @@ -import type { ExecuteOptions } from './command'; +import { ExecutionPlan, Job } from '@openfn/lexicon'; -const getAutoinstallTargets = ( - options: Partial< - Pick - > -) => { - if (options.workflow) { - const adaptors = {} as Record; - Object.values(options.workflow.jobs).forEach((job) => { - if (job.adaptor) { - adaptors[job.adaptor] = true; - } - }); - return Object.keys(adaptors); - } - if (options.adaptors) { - return options.adaptors?.filter((a) => !/=/.test(a)); - } - return []; +const getAutoinstallTargets = (plan: ExecutionPlan) => { + const adaptors = {} as Record; + Object.values(plan.workflow.steps).forEach((step) => { + const job = step as Job; + // Do not autoinstall adaptors with a path + if 
(job.adaptor && !/=/.test(job.adaptor)) { + adaptors[job.adaptor] = true; + } + }); + return Object.keys(adaptors); }; export default getAutoinstallTargets; diff --git a/packages/cli/src/execute/handler.ts b/packages/cli/src/execute/handler.ts index aefb6894e..060a06c22 100644 --- a/packages/cli/src/execute/handler.ts +++ b/packages/cli/src/execute/handler.ts @@ -1,3 +1,5 @@ +import type { ExecutionPlan } from '@openfn/lexicon'; + import type { ExecuteOptions } from './command'; import execute from './execute'; import serializeOutput from './serialize-output'; @@ -5,16 +7,11 @@ import getAutoinstallTargets from './get-autoinstall-targets'; import { install } from '../repo/handler'; import compile from '../compile/compile'; -import { CompileOptions } from '../compile/command'; import { Logger, printDuration } from '../util/logger'; import loadState from '../util/load-state'; import validateAdaptors from '../util/validate-adaptors'; -import loadInput from '../util/load-input'; -import expandAdaptors from '../util/expand-adaptors'; -import mapAdaptorsToMonorepo, { - MapAdaptorsToMonorepoOptions, -} from '../util/map-adaptors-to-monorepo'; +import loadPlan from '../util/load-plan'; import assertPath from '../util/assert-path'; const executeHandler = async (options: ExecuteOptions, logger: Logger) => { @@ -22,23 +19,13 @@ const executeHandler = async (options: ExecuteOptions, logger: Logger) => { assertPath(options.path); await validateAdaptors(options, logger); - let input = await loadInput(options, logger); - - if (options.workflow) { - // expand shorthand adaptors in the workflow jobs - expandAdaptors(options); - await mapAdaptorsToMonorepo( - options as MapAdaptorsToMonorepoOptions, - logger - ); - } - + let plan = await loadPlan(options, logger); const { repoDir, monorepoPath, autoinstall } = options; if (autoinstall) { if (monorepoPath) { logger.warn('Skipping auto-install as monorepo is being used'); } else { - const autoInstallTargets = 
getAutoinstallTargets(options); + const autoInstallTargets = getAutoinstallTargets(plan); if (autoInstallTargets.length) { logger.info('Auto-installing language adaptors'); await install({ packages: autoInstallTargets, repoDir }, logger); @@ -49,13 +36,13 @@ const executeHandler = async (options: ExecuteOptions, logger: Logger) => { const state = await loadState(options, logger); if (options.compile) { - input = await compile(options as CompileOptions, logger); + plan = (await compile(plan, options, logger)) as ExecutionPlan; } else { logger.info('Skipping compilation as noCompile is set'); } try { - const result = await execute(input!, state, options); + const result = await execute(plan, state, options); await serializeOutput(options, result, logger); const duration = printDuration(new Date().getTime() - start); if (result?.errors) { diff --git a/packages/cli/src/execute/serialize-output.ts b/packages/cli/src/execute/serialize-output.ts index 040b43b5e..79f338f70 100644 --- a/packages/cli/src/execute/serialize-output.ts +++ b/packages/cli/src/execute/serialize-output.ts @@ -3,21 +3,14 @@ import { Logger } from '../util/logger'; import { Opts } from '../options'; const serializeOutput = async ( - options: Pick, + options: Pick, result: any, logger: Logger ) => { let output = result; if (output && (output.configuration || output.data)) { - if (options.strict) { - output = { data: output.data }; - if (result.errors) { - output.errors = result.errors; - } - } else { - const { configuration, ...rest } = result; - output = rest; - } + const { configuration, ...rest } = result; + output = rest; } if (output === undefined) { diff --git a/packages/cli/src/options.ts b/packages/cli/src/options.ts index a29be4c7f..95b42da93 100644 --- a/packages/cli/src/options.ts +++ b/packages/cli/src/options.ts @@ -1,13 +1,13 @@ import path from 'node:path'; - import yargs from 'yargs'; -import type { ExecutionPlan } from '@openfn/runtime'; + import type { CommandList } from 
'./commands'; -import { CLIExecutionPlan } from './types'; import { DEFAULT_REPO_DIR } from './constants'; -import doExpandAdaptors from './util/expand-adaptors'; -import ensureLogOpts from './util/ensure-log-opts'; -import { LogLevel } from './util'; +import { + expandAdaptors as doExpandAdaptors, + ensureLogOpts, + LogLevel, +} from './util'; // Central type definition for the main options // This represents the types coming out of yargs, @@ -28,8 +28,7 @@ export type Opts = { force?: boolean; immutable?: boolean; ignoreImports?: boolean | string[]; - jobPath?: string; - job?: string; + expressionPath?: string; log?: Record; logJson?: boolean; monorepoPath?: string; @@ -37,6 +36,7 @@ export type Opts = { outputPath?: string; outputStdout?: boolean; packages?: string[]; + planPath?: string; projectPath?: string; repoDir?: string; skipAdaptorValidation?: boolean; @@ -44,13 +44,13 @@ export type Opts = { start?: string; // workflow start node statePath?: string; stateStdin?: string; - strict?: boolean; // Strict state handling (only forward state.data). 
Defaults to true sanitize: 'none' | 'remove' | 'summarize' | 'obfuscate'; timeout?: number; // ms useAdaptorsMonorepo?: boolean; - workflow?: CLIExecutionPlan | ExecutionPlan; - workflowPath?: string; projectId?: string; + + // deprecated + workflowPath?: string; }; // Definition of what Yargs returns (before ensure is called) @@ -97,8 +97,10 @@ export const adaptors: CLIOption = { opts.adaptors = []; } + // TODO this might be redundant now as load-plan should handle it + // maybe commands other than execute need it if (opts.expandAdaptors) { - doExpandAdaptors(opts); + opts.adaptors = doExpandAdaptors(opts.adaptors) as string[]; } // delete the aliases as they have not been expanded @@ -112,8 +114,8 @@ export const autoinstall: CLIOption = { yargs: { alias: ['i'], boolean: true, - description: 'Auto-install the language adaptor', - default: false, + description: 'Auto-install the language adaptor(s)', + default: true, }, }; @@ -218,15 +220,13 @@ export const projectId: CLIOption = { hidden: true, }, ensure: (opts) => { - const projectId = opts.projectId; - //check that this is a uuid - return projectId; - }, + const projectId = opts.projectId; + //check that this is a uuid + return projectId; + }, }; - - -// Input path covers jobPath and workflowPath +// Input path covers expressionPath and workflowPath export const inputPath: CLIOption = { name: 'input-path', yargs: { @@ -235,12 +235,12 @@ export const inputPath: CLIOption = { ensure: (opts) => { const { path: basePath } = opts; if (basePath?.endsWith('.json')) { - opts.workflowPath = basePath; + opts.planPath = basePath; } else if (basePath?.endsWith('.js')) { - opts.jobPath = basePath; + opts.expressionPath = basePath; } else { const base = getBaseDir(opts); - setDefaultValue(opts, 'jobPath', path.join(base, 'job.js')); + setDefaultValue(opts, 'expressionPath', path.join(base, 'job.js')); } }, }; @@ -327,38 +327,6 @@ export const start: CLIOption = { }, }; -// Preserve this but hide it -export const 
strictOutput: CLIOption = { - name: 'no-strict-output', - yargs: { - deprecated: true, - hidden: true, - boolean: true, - }, - ensure: (opts: { strictOutput?: boolean; strict?: boolean }) => { - if (!opts.hasOwnProperty('strict')) { - // override strict not set - opts.strict = opts.strictOutput; - } - delete opts.strictOutput; - }, -}; - -export const strict: CLIOption = { - name: 'strict', - yargs: { - default: false, - boolean: true, - description: - 'Enables strict state handling, meaning only state.data is returned from a job.', - }, - ensure: (opts) => { - if (!opts.hasOwnProperty('strictOutput')) { - setDefaultValue(opts, 'strict', false); - } - }, -}; - export const skipAdaptorValidation: CLIOption = { name: 'skip-adaptor-validation', yargs: { diff --git a/packages/cli/src/test/handler.ts b/packages/cli/src/test/handler.ts index 8c2edde9e..52cb2cc32 100644 --- a/packages/cli/src/test/handler.ts +++ b/packages/cli/src/test/handler.ts @@ -1,3 +1,5 @@ +import type { ExecutionPlan } from '@openfn/lexicon'; + import { TestOptions } from './command'; import { createNullLogger, Logger } from '../util/logger'; import loadState from '../util/load-state'; @@ -6,44 +8,48 @@ import execute from '../execute/execute'; import { ExecuteOptions } from '../execute/command'; const testHandler = async (options: TestOptions, logger: Logger) => { - logger.log('Running test job...'); + logger.log('Running test workflow...'); const opts: Partial = { ...options }; // Preconfigure some options opts.compile = true; opts.adaptors = []; - opts.workflow = { - start: 'start', - jobs: [ - { - id: 'start', - state: { data: { defaultAnswer: 42 } }, - expression: - "const fn = () => (state) => { console.log('Starting computer...'); return state; }; fn()", - next: { - calculate: '!state.error', + const plan = { + options: { + start: 'start', + }, + workflow: { + steps: [ + { + id: 'start', + state: { data: { defaultAnswer: 42 } }, + expression: + "const fn = () => (state) => { 
console.log('Starting computer...'); return state; }; fn()", + next: { + calculate: '!state.error', + }, + }, + { + id: 'calculate', + expression: + "const fn = () => (state) => { console.log('Calculating to life, the universe, and everything..'); return state }; fn()", + next: { + result: true, + }, }, - }, - { - id: 'calculate', - expression: - "const fn = () => (state) => { console.log('Calculating to life, the universe, and everything..'); return state }; fn()", - next: { - result: true, + { + id: 'result', + expression: + 'const fn = () => (state) => ({ data: { answer: state.data.answer || state.data.defaultAnswer } }); fn()', }, - }, - { - id: 'result', - expression: - 'const fn = () => (state) => ({ data: { answer: state.data.answer || state.data.defaultAnswer } }); fn()', - }, - ], - }; + ], + }, + } as ExecutionPlan; logger.break(); - logger.info('Workflow object:'); - logger.info(JSON.stringify(opts.workflow, null, 2)); + logger.info('Execution plan:'); + logger.info(JSON.stringify(plan, null, 2)); logger.break(); if (!opts.stateStdin) { @@ -54,8 +60,8 @@ const testHandler = async (options: TestOptions, logger: Logger) => { } const state = await loadState(opts, createNullLogger()); - const code = await compile(opts, logger); - const result = await execute(code!, state, opts as ExecuteOptions); + const compiledPlan = (await compile(plan, opts, logger)) as ExecutionPlan; + const result = await execute(compiledPlan, state, opts as ExecuteOptions); logger.success(`Result: ${result.data.answer}`); return result; }; diff --git a/packages/cli/src/types.ts b/packages/cli/src/types.ts index 3c6b781e8..ed27ef8bc 100644 --- a/packages/cli/src/types.ts +++ b/packages/cli/src/types.ts @@ -2,6 +2,12 @@ // Ie config can be a string export type JobNodeID = string; +export type OldCLIWorkflow = { + id?: string; // UUID for this plan + start?: JobNodeID; + jobs: CLIJobNode[]; +}; + export type CLIExecutionPlan = { id?: string; // UUID for this plan start?: JobNodeID; diff 
--git a/packages/cli/src/util/expand-adaptors.ts b/packages/cli/src/util/expand-adaptors.ts index d60f1a0ab..45b952e9d 100644 --- a/packages/cli/src/util/expand-adaptors.ts +++ b/packages/cli/src/util/expand-adaptors.ts @@ -1,6 +1,6 @@ -import { Opts } from '../options'; +import { ExecutionPlan, Job } from '@openfn/lexicon'; -const expand = (name: any) => { +const expand = (name: string) => { if (typeof name === 'string') { const [left] = name.split('='); // don't expand adaptors which look like a path (or @openfn/language-) @@ -12,20 +12,24 @@ return name; }; -export default (opts: Partial>) => { - const { adaptors, workflow } = opts; +type ArrayOrPlan<T> = T extends string[] ? string[] : ExecutionPlan; - if (adaptors) { - opts.adaptors = adaptors?.map(expand); +// TODO typings here aren't good, I can't get this to work! +// At least this looks nice externally +export default <T extends string[] | ExecutionPlan>( + input: T +): ArrayOrPlan<T> => { + if (Array.isArray(input)) { + return input?.map(expand) as any; } - if (workflow) { - Object.values(workflow.jobs).forEach((job) => { - if (job.adaptor) { - job.adaptor = expand(job.adaptor); - } - }); - } + const plan = input as ExecutionPlan; + Object.values(plan.workflow.steps).forEach((step) => { + const job = step as Job; + if (job.adaptor) { + job.adaptor = expand(job.adaptor); + } + }); - return opts; + return plan as any; }; diff --git a/packages/cli/src/util/index.d.ts b/packages/cli/src/util/index.d.ts deleted file mode 100644 index 1ff09efd4..000000000 --- a/packages/cli/src/util/index.d.ts +++ /dev/null @@ -1 +0,0 @@ -export * from './logger'; diff --git a/packages/cli/src/util/index.ts b/packages/cli/src/util/index.ts new file mode 100644 index 000000000..640967359 --- /dev/null +++ b/packages/cli/src/util/index.ts @@ -0,0 +1,6 @@ +import expandAdaptors from './expand-adaptors'; +import ensureLogOpts from './ensure-log-opts'; + +export * from './logger'; + +export { expandAdaptors, ensureLogOpts }; 
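To make the dual-input `expandAdaptors` change above concrete, here is a minimal sketch of the shorthand-expansion rule. The `@openfn/language-` prefix is an assumption inferred from the adaptor names used elsewhere in this patch; the real logic (partially elided in the diff) lives in `packages/cli/src/util/expand-adaptors.ts`:

```typescript
// Hedged sketch: expand shorthand adaptor names.
// Assumption: plain names get the '@openfn/language-' prefix; names that
// look like a path, or are already fully qualified, pass through untouched.
const expand = (name: string): string => {
  // A '=' maps an adaptor to a local path; only inspect the left-hand side
  const [left] = name.split('=');
  if (left.includes('/') || left.startsWith('@openfn/language-')) {
    return name; // looks like a path or an already-expanded name
  }
  return `@openfn/language-${name}`;
};

console.log(expand('http'));                  // '@openfn/language-http'
console.log(expand('@openfn/language-http')); // unchanged
console.log(expand('./my-local-adaptor'));    // unchanged (looks like a path)
```

The exported function in the patch applies this per-string rule either over a `string[]` of adaptor specifiers or over every step's `adaptor` field in an `ExecutionPlan`, which is what the `ArrayOrPlan` conditional type is trying to express.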
diff --git a/packages/cli/src/util/load-input.ts b/packages/cli/src/util/load-input.ts deleted file mode 100644 index da1e58f1b..000000000 --- a/packages/cli/src/util/load-input.ts +++ /dev/null @@ -1,144 +0,0 @@ -import path from 'node:path'; -import fs from 'node:fs/promises'; -import { isPath } from '@openfn/compiler'; -import type { Logger } from '@openfn/logger'; -import type { Opts } from '../options'; -import { CLIExecutionPlan } from '../types'; -import { ExecutionPlan } from '@openfn/runtime'; -import abort from './abort'; - -type LoadWorkflowOpts = Required< - Pick ->; - -export default async ( - opts: Pick, - log: Logger -) => { - const { job, workflow, jobPath, workflowPath } = opts; - if (workflow || workflowPath) { - return loadWorkflow(opts as LoadWorkflowOpts, log); - } - - if (job) { - return job; - } - if (jobPath) { - try { - log.debug(`Loading job from ${jobPath}`); - opts.job = await fs.readFile(jobPath, 'utf8'); - return opts.job; - } catch (e: any) { - abort( - log, - 'Job not found', - undefined, - `Failed to load the job from ${jobPath}` - ); - } - } -}; - -const loadWorkflow = async (opts: LoadWorkflowOpts, log: Logger) => { - const { workflowPath, workflow } = opts; - - const readWorkflow = async () => { - try { - const text = await fs.readFile(workflowPath, 'utf8'); - return text; - } catch (e) { - abort( - log, - 'Workflow not found', - undefined, - `Failed to load a workflow from ${workflowPath}` - ); - } - }; - - const parseWorkflow = (contents: string) => { - try { - return JSON.parse(contents); - } catch (e: any) { - abort( - log, - 'Invalid JSON in workflow', - e, - `Check the syntax of the JSON at ${workflowPath}` - ); - } - }; - - const fetchWorkflowFile = async ( - jobId: string, - rootDir: string = '', - filePath: string - ) => { - try { - // Special handling for ~ feels like a necessary evil - const fullPath = filePath.startsWith('~') - ? 
filePath - : path.resolve(rootDir, filePath); - const result = await fs.readFile(fullPath, 'utf8'); - return result; - } catch (e) { - abort( - log, - `File not found for job ${jobId}: ${filePath}`, - undefined, - `This workflow references a file which cannot be found at ${filePath}\n\nPaths inside the workflow are relative to the workflow.json` - ); - } - }; - - log.debug(`Loading workflow from ${workflowPath}`); - try { - let wf: CLIExecutionPlan; - let rootDir = opts.baseDir; - if (workflowPath) { - let workflowRaw = await readWorkflow(); - wf = parseWorkflow(workflowRaw!); - if (!rootDir) { - // TODO this may not be neccessary, but keeping just in case - rootDir = path.dirname(workflowPath); - } - } else { - wf = workflow as CLIExecutionPlan; - } - - // TODO auto generate ids? - - // identify any expressions/configs that are paths, and load them in - // All paths are relative to the workflow itself - let idx = 0; - for (const job of wf.jobs) { - idx += 1; - const expressionStr = - typeof job.expression === 'string' && job.expression?.trim(); - const configurationStr = - typeof job.configuration === 'string' && job.configuration?.trim(); - if (expressionStr && isPath(expressionStr)) { - job.expression = await fetchWorkflowFile( - job.id || `${idx}`, - rootDir, - expressionStr - ); - } - if (configurationStr && isPath(configurationStr)) { - const configString = await fetchWorkflowFile( - job.id || `${idx}`, - rootDir, - configurationStr - ); - job.configuration = JSON.parse(configString!); - } - } - - opts.workflow = wf as ExecutionPlan; - log.debug('Workflow loaded!'); - return opts.workflow; - } catch (e) { - log.error(`Error loading workflow from ${workflowPath}`); - throw e; - } -}; diff --git a/packages/cli/src/util/load-plan.ts b/packages/cli/src/util/load-plan.ts new file mode 100644 index 000000000..490fadede --- /dev/null +++ b/packages/cli/src/util/load-plan.ts @@ -0,0 +1,252 @@ +import fs from 'node:fs/promises'; +import path from 'node:path'; +import 
{ isPath } from '@openfn/compiler'; + +import abort from './abort'; +import expandAdaptors from './expand-adaptors'; +import mapAdaptorsToMonorepo from './map-adaptors-to-monorepo'; +import type { ExecutionPlan, Job, WorkflowOptions } from '@openfn/lexicon'; +import type { Opts } from '../options'; +import type { Logger } from './logger'; +import type { OldCLIWorkflow } from '../types'; + +const loadPlan = async ( + options: Pick< + Opts, + | 'expressionPath' + | 'planPath' + | 'workflowPath' + | 'adaptors' + | 'baseDir' + | 'expandAdaptors' + >, + logger: Logger +): Promise => { + const { workflowPath, planPath, expressionPath } = options; + + if (expressionPath) { + return loadExpression(options, logger); + } + + const jsonPath = planPath || workflowPath; + + if (!options.baseDir) { + options.baseDir = path.dirname(jsonPath!); + } + + const json = await loadJson(jsonPath!, logger); + const defaultName = path.parse(jsonPath!).name; + if (json.workflow) { + return loadXPlan(json, options, logger, defaultName); + } else { + return loadOldWorkflow(json, options, logger, defaultName); + } +}; + +export default loadPlan; + +const loadJson = async (workflowPath: string, logger: Logger): Promise => { + let text: string; + + try { + text = await fs.readFile(workflowPath, 'utf8'); + } catch (e) { + return abort( + logger, + 'Workflow not found', + undefined, + `Failed to load a workflow from ${workflowPath}` + ); + } + + let json: object; + try { + json = JSON.parse(text); + } catch (e: any) { + return abort( + logger, + 'Invalid JSON in workflow', + e, + `Check the syntax of the JSON at ${workflowPath}` + ); + } + + return json; +}; + +const maybeAssign = (a: any, b: any, keys: Array) => { + keys.forEach((key) => { + if (a.hasOwnProperty(key)) { + b[key] = a[key]; + } + }); +}; + +const loadExpression = async ( + options: Pick, + logger: Logger +): Promise => { + const expressionPath = options.expressionPath!; + + logger.debug(`Loading expression from ${expressionPath}`); 
+ try { + const expression = await fs.readFile(expressionPath, 'utf8'); + const name = path.parse(expressionPath).name; + + const step: Job = { expression }; + + // The adaptor should have been expanded nicely already, so we don't need to intervene here + if (options.adaptors) { + const [adaptor] = options.adaptors; + if (adaptor) { + step.adaptor = adaptor; + } + } + + const wfOptions: WorkflowOptions = {}; + // TODO support state props to remove? + maybeAssign(options, wfOptions, ['timeout']); + + const plan: ExecutionPlan = { + workflow: { + name, + steps: [step], + }, + options: wfOptions, + }; + // call loadXPlan now so that any options can be written + return loadXPlan(plan, options, logger); + } catch (e) { + abort( + logger, + 'Expression not found', + undefined, + `Failed to load the expression from ${expressionPath}` + ); + + // This will never execute + return {} as ExecutionPlan; + } +}; + +const loadOldWorkflow = async ( + workflow: OldCLIWorkflow, + options: Pick, + logger: Logger, + defaultName: string = '' +) => { + const plan: ExecutionPlan = { + workflow: { + steps: workflow.jobs, + }, + options: { + start: workflow.start, + }, + }; + + if (workflow.id) { + plan.id = workflow.id; + } + + // call loadXPlan now so that any options can be written + const final = await loadXPlan(plan, options, logger, defaultName); + + logger.warn('Converted workflow into new format:'); + logger.warn(final); + + return final; +}; + +const fetchFile = async ( + jobId: string, + rootDir: string = '', + filePath: string, + log: Logger +) => { + try { + // Special handling for ~ feels like a necessary evil + const fullPath = filePath.startsWith('~') + ? 
filePath : path.resolve(rootDir, filePath); + const result = await fs.readFile(fullPath, 'utf8'); + return result; + } catch (e) { + abort( + log, + `File not found for job ${jobId}: ${filePath}`, + undefined, + `This workflow references a file which cannot be found at ${filePath}\n\nPaths inside the workflow are relative to the workflow.json` + ); + + // should never get here + return '.'; + } +}; + +// TODO this is currently untested in load-plan +// (but covered a bit in execute tests) +const importExpressions = async ( + plan: ExecutionPlan, + rootDir: string, + log: Logger +) => { + let idx = 0; + for (const step of plan.workflow.steps) { + const job = step as Job; + if (!job.expression) { + continue; + } + idx += 1; + const expressionStr = + typeof job.expression === 'string' && job.expression?.trim(); + const configurationStr = + typeof job.configuration === 'string' && job.configuration?.trim(); + if (expressionStr && isPath(expressionStr)) { + job.expression = await fetchFile( + job.id || `${idx}`, + rootDir, + expressionStr, + log + ); + } + if (configurationStr && isPath(configurationStr)) { + const configString = await fetchFile( + job.id || `${idx}`, + rootDir, + configurationStr, + log + ); + job.configuration = JSON.parse(configString!); + } + } +}; + +const loadXPlan = async ( + plan: ExecutionPlan, + options: Pick, + logger: Logger, + defaultName: string = '' +) => { + if (!plan.options) { + plan.options = {}; + } + + if (!plan.workflow.name && defaultName) { + plan.workflow.name = defaultName; + } + // Note that baseDir should be set up in the default function + await importExpressions(plan, options.baseDir!, logger); + // expand shorthand adaptors in the workflow jobs + if (options.expandAdaptors) { + expandAdaptors(plan); + } + await mapAdaptorsToMonorepo(options.monorepoPath, plan, logger); + + // Assign options from the CLI into the Xplan + // TODO support state props to remove + maybeAssign(options, plan.options, ['timeout', 'start']); + + 
logger.info(`Loaded workflow ${plan.workflow.name ?? ''}`); + + return plan; +}; diff --git a/packages/cli/src/util/map-adaptors-to-monorepo.ts b/packages/cli/src/util/map-adaptors-to-monorepo.ts index e4e33fce2..4e25d9876 100644 --- a/packages/cli/src/util/map-adaptors-to-monorepo.ts +++ b/packages/cli/src/util/map-adaptors-to-monorepo.ts @@ -3,6 +3,8 @@ import path from 'node:path'; import assert from 'node:assert'; import { Logger } from '@openfn/logger'; import { getNameAndVersion } from '@openfn/runtime'; +import type { ExecutionPlan, Job } from '@openfn/lexicon'; + import type { Opts } from '../options'; export const validateMonoRepo = async (repoPath: string, log: Logger) => { @@ -32,39 +34,38 @@ export const updatePath = (adaptor: string, repoPath: string, log: Logger) => { } const shortName = name.replace('@openfn/language-', ''); const abspath = path.resolve(repoPath, 'packages', shortName); + + log.info(`Mapped adaptor ${name} to monorepo: ${abspath}`); return `${name}=${abspath}`; }; export type MapAdaptorsToMonorepoOptions = Pick< Opts, - 'monorepoPath' | 'adaptors' | 'workflow' + 'monorepoPath' | 'adaptors' >; -// This will mutate options (adaptors, workflow) to support the monorepo -const mapAdaptorsToMonorepo = async ( - options: MapAdaptorsToMonorepoOptions, +const mapAdaptorsToMonorepo = ( + monorepoPath: string = '', + input: string[] | ExecutionPlan = [], log: Logger -) => { - const { adaptors, monorepoPath, workflow } = options; +): string[] | ExecutionPlan => { if (monorepoPath) { - await validateMonoRepo(monorepoPath, log); - log.success(`Loading adaptors from monorepo at ${monorepoPath}`); - if (adaptors) { - options.adaptors = adaptors.map((a) => { - const p = updatePath(a, monorepoPath, log); - log.info(`Mapped adaptor ${a} to monorepo: ${p.split('=')[1]}`); - return p; - }); - } - if (workflow) { - Object.values(workflow.jobs).forEach((job) => { - if (job.adaptor) { - job.adaptor = updatePath(job.adaptor, monorepoPath, log); - } - }); + 
if (Array.isArray(input)) { + const adaptors = input as string[]; + return adaptors.map((a) => updatePath(a, monorepoPath, log)); } + + const plan = input as ExecutionPlan; + Object.values(plan.workflow.steps).forEach((step) => { + const job = step as Job; + if (job.adaptor) { + job.adaptor = updatePath(job.adaptor, monorepoPath, log); + } + }); + + return plan; } - return options; + return input; }; export default mapAdaptorsToMonorepo; diff --git a/packages/cli/src/util/print-versions.ts b/packages/cli/src/util/print-versions.ts index 38e035ba1..7532a7ff8 100644 --- a/packages/cli/src/util/print-versions.ts +++ b/packages/cli/src/util/print-versions.ts @@ -26,7 +26,8 @@ const loadVersionFromPath = (adaptorPath: string) => { const printVersions = async ( logger: Logger, - options: Partial> = {} + options: Partial> = {}, + includeComponents = false ) => { const { adaptors, logJson } = options; let adaptor = ''; @@ -41,6 +42,9 @@ const printVersions = async ( const [namePart, pathPart] = adaptor.split('='); adaptorVersion = loadVersionFromPath(pathPart); adaptorName = getNameAndVersion(namePart).name; + } else if (options.monorepoPath) { + adaptorName = getNameAndVersion(adaptor).name; + adaptorVersion = 'monorepo'; } else { const { name, version } = getNameAndVersion(adaptor); adaptorName = name; @@ -73,23 +77,28 @@ const printVersions = async ( versions: { 'node.js': process.version.substring(1), cli: version, - runtime: runtimeVersion, - compiler: compilerVersion, }, }; + if (includeComponents) { + output.versions.runtime = runtimeVersion; + output.versions.compiler = compilerVersion; + } if (adaptorName) { output.versions[adaptorName] = adaptorVersion; } } else { - const adaptorVersionString = adaptorName - ? 
`\n${prefix(adaptorName)}${adaptorVersion}` - : ''; - output = `Versions: ${prefix(NODE)}${process.version.substring(1)} -${prefix(CLI)}${version} -${prefix(RUNTIME)}${runtimeVersion} -${prefix(COMPILER)}${compilerVersion}${adaptorVersionString}`; +${prefix(CLI)}${version}`; + + if (includeComponents) { + output += `\n${prefix(RUNTIME)}${runtimeVersion} +${prefix(COMPILER)}${compilerVersion}`; + } + + if (adaptorName) { + output += `\n${prefix(adaptorName)}${adaptorVersion}`; + } } logger.always(output); }; diff --git a/packages/cli/src/util/validate-adaptors.ts b/packages/cli/src/util/validate-adaptors.ts index e6b2666bc..d5126ec58 100644 --- a/packages/cli/src/util/validate-adaptors.ts +++ b/packages/cli/src/util/validate-adaptors.ts @@ -9,16 +9,18 @@ const validateAdaptors = async ( | 'autoinstall' | 'repoDir' | 'workflowPath' + | 'planPath' >, logger: Logger ) => { if (options.skipAdaptorValidation) { return; } + const isPlan = options.planPath || options.workflowPath; const hasDeclaredAdaptors = options.adaptors && options.adaptors.length > 0; - if (options.workflowPath && hasDeclaredAdaptors) { + if (isPlan && hasDeclaredAdaptors) { logger.error('ERROR: adaptor and workflow provided'); logger.error( 'This is probably not what you meant to do. A workflow should declare an adaptor for each job.' @@ -29,7 +31,7 @@ const validateAdaptors = async ( // If no adaptor is specified, pass a warning // (The runtime is happy to run without) // This can be overriden from options - if (!options.workflowPath && !hasDeclaredAdaptors) { + if (!isPlan && !hasDeclaredAdaptors) { logger.warn('WARNING: No adaptor provided!'); logger.warn( 'This job will probably fail. 
Pass an adaptor with the -a flag, eg:' diff --git a/packages/cli/test/commands.test.ts b/packages/cli/test/commands.test.ts index 31706ce5b..a0d339945 100644 --- a/packages/cli/test/commands.test.ts +++ b/packages/cli/test/commands.test.ts @@ -15,16 +15,16 @@ test.afterEach(() => { logger._reset(); }); -const JOB_EXPORT_42 = 'export default [() => ({ data: { count: 42 } })];'; -const JOB_TIMES_2 = +const EXPR_EXPORT_42 = 'export default [() => ({ data: { count: 42 } })];'; +const EXPR_TIMES_2 = 'export default [(state) => { state.data.count = state.data.count * 2; return state; }];'; -const JOB_MOCK_ADAPTOR = +const EXPR_MOCK_ADAPTOR = 'import { byTwo } from "times-two"; export default [byTwo];'; -const JOB_EXPORT_STATE = +const EXPR_EXPORT_STATE = "export default [() => ({ configuration: {}, data: {}, foo: 'bar' })];"; type RunOptions = { - jobPath?: string; + expressionPath?: string; statePath?: string; outputPath?: string; state?: any; @@ -43,7 +43,7 @@ async function run(command: string, job: string, options: RunOptions = {}) { // A good reason to move all these into integration tests tbh! 
command = command.replace(/^openfn /, ''); - const jobPath = options.jobPath || 'job.js'; + const expressionPath = options.expressionPath || 'job.js'; const statePath = options.statePath || 'state.json'; const outputPath = options.outputPath || 'output.json'; const state = @@ -58,7 +58,7 @@ async function run(command: string, job: string, options: RunOptions = {}) { // Mock the file system in-memory if (!options.disableMock) { mock({ - [jobPath]: job, + [expressionPath]: job, [statePath]: state, [outputPath]: '{}', [pnpm]: mock.load(pnpm, {}), @@ -74,7 +74,7 @@ async function run(command: string, job: string, options: RunOptions = {}) { const opts = cmd.parse(command) as Opts; // Override some options after the command has been parsed - opts.path = jobPath; + opts.path = expressionPath; opts.repoDir = options.repoDir; opts.log = { default: 'none' }; @@ -93,6 +93,65 @@ async function run(command: string, job: string, options: RunOptions = {}) { } } +test.serial('run an execution plan', async (t) => { + const plan = { + workflow: { + steps: [ + { + id: 'job1', + state: { data: { x: 0 } }, + expression: 'export default [s => { s.data.x += 1; return s; } ]', + next: { job2: true }, + }, + { + id: 'job2', + expression: 'export default [s => { s.data.x += 1; return s; } ]', + }, + ], + }, + }; + + const options = { + outputPath: 'output.json', + expressionPath: 'wf.json', // just to fool the test + }; + + const result = await run('openfn wf.json', JSON.stringify(plan), options); + t.assert(result.data.x === 2); +}); + +test.serial('run an execution plan with start', async (t) => { + const state = JSON.stringify({ data: { x: 0 } }); + const plan = { + workflow: { + steps: [ + { + id: 'a', + expression: 'export default [s => { s.data.x += 1; return s; } ]', + next: { b: true }, + }, + { + id: 'b', + expression: 'export default [s => { s.data.x += 1; return s; } ]', + }, + ], + }, + }; + + const options = { + outputPath: 'output.json', + expressionPath: 'wf.json', // just 
to fool the test + }; + + const result = await run( + `openfn wf.json -S ${state} --start b`, + JSON.stringify(plan), + options + ); + + t.assert(result.data.x === 1); +}); + test.serial('print version information with version', async (t) => { await run('version', ''); @@ -119,7 +178,7 @@ test.serial('run test job with custom state', async (t) => { }); test.serial('run a job with defaults: openfn job.js', async (t) => { - const result = await run('openfn job.js', JOB_EXPORT_42); + const result = await run('openfn job.js', EXPR_EXPORT_42); t.assert(result.data.count === 42); }); @@ -147,7 +206,7 @@ test.serial('run a workflow', async (t) => { const options = { outputPath: 'output.json', - jobPath: 'wf.json', // just to fool the test + expressionPath: 'wf.json', // just to fool the test }; const result = await run('openfn wf.json', JSON.stringify(workflow), options); @@ -168,7 +227,7 @@ test.serial('run a workflow with config as an object', async (t) => { const options = { outputPath: 'output.json', - jobPath: 'wf.json', // just to fool the test + expressionPath: 'wf.json', // just to fool the test }; const result = await run('openfn wf.json', JSON.stringify(workflow), options); t.deepEqual(result, { @@ -190,7 +249,7 @@ test.serial('run a workflow with config as a path', async (t) => { const options = { outputPath: 'output.json', - jobPath: 'wf.json', // just to fool the test + expressionPath: 'wf.json', // just to fool the test mockfs: { '/config.json': JSON.stringify({ y: 0 }), }, @@ -208,7 +267,7 @@ test.serial.skip( async (t) => { const options = { // set up the file system - jobPath: + expressionPath: '~/openfn/jobs/the-question/what-is-the-answer-to-life-the-universe-and-everything.js', outputPath: '~/openfn/jobs/the-question/output.json', statePath: '~/openfn/jobs/the-question/state.json', @@ -216,7 +275,7 @@ test.serial.skip( const result = await run( 'openfn ~/openfn/jobs/the-question', - JOB_EXPORT_42, + EXPR_EXPORT_42, options ); t.assert(result === 42); 
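The execution-plan tests above rely on the runtime walking `workflow.steps` via `next` edges from a given start node. As an illustrative sketch only (the `Step` type and `runPlan` helper here are hypothetical, not the real `@openfn/runtime` API, and the real runtime supports conditional edges and branching), the traversal behaves roughly like this:

```typescript
// Minimal sketch of linear plan traversal; names are illustrative.
type Step = {
  id: string;
  expression: (state: any) => any;
  next?: Record<string, boolean>;
};

const runPlan = (steps: Step[], start: string, initialState: any) => {
  const byId = new Map(steps.map((s) => [s.id, s]));
  let state = initialState;
  let current: Step | undefined = byId.get(start);
  while (current) {
    state = current.expression(state);
    // Follow the first enabled next edge (real plans may branch conditionally)
    const nextId = Object.keys(current.next ?? {})[0];
    current = nextId ? byId.get(nextId) : undefined;
  }
  return state;
};

const steps: Step[] = [
  { id: 'a', expression: (s) => ({ data: { x: s.data.x + 1 } }), next: { b: true } },
  { id: 'b', expression: (s) => ({ data: { x: s.data.x + 1 } }) },
];

// Starting from 'a' runs both steps; starting from 'b' (as with --start b) runs one.
const fromA = runPlan(steps, 'a', { data: { x: 0 } });
const fromB = runPlan(steps, 'b', { data: { x: 0 } });
```

This mirrors why the first test expects `x === 2` and the `--start b` test expects `x === 1`.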
@@ -237,7 +296,7 @@ test.serial( }; const result = await run( 'openfn job.js --output-path=/tmp/my-output.json', - JOB_EXPORT_42, + EXPR_EXPORT_42, options ); t.is(result.data.count, 42); @@ -256,7 +315,7 @@ test.serial( }; const result = await run( 'openfn job.js -o /tmp/my-output.json', - JOB_EXPORT_42, + EXPR_EXPORT_42, options ); t.is(result.data.count, 42); @@ -268,59 +327,15 @@ test.serial( ); test.serial( - 'output to file with strict state: openfn job.js --output-path=/tmp/my-output.json --strict', + 'output to file removing configuration: openfn job.js --output-path=/tmp/my-output.json', async (t) => { const options = { outputPath: '/tmp/my-output.json', }; const result = await run( - 'openfn job.js --output-path=/tmp/my-output.json --strict', - JOB_EXPORT_STATE, - options - ); - t.deepEqual(result, { data: {} }); - - const expectedFileContents = JSON.stringify({ data: {} }, null, 2); - const output = await fs.readFile('/tmp/my-output.json', 'utf8'); - t.is(output, expectedFileContents); - } -); - -test.serial( - 'output to file with non-strict state: openfn job.js --output-path=/tmp/my-output.json --no-strict-output', - async (t) => { - const options = { - outputPath: '/tmp/my-output.json', - }; - - const result = await run( - 'openfn job.js --output-path=/tmp/my-output.json --no-strict-output', - JOB_EXPORT_STATE, - options - ); - t.deepEqual(result, { data: {}, foo: 'bar' }); - - const expectedFileContents = JSON.stringify( - { data: {}, foo: 'bar' }, - null, - 2 - ); - const output = await fs.readFile('/tmp/my-output.json', 'utf8'); - t.assert(output === expectedFileContents); - } -); - -test.serial( - 'output to file with non-strict state: openfn job.js --output-path=/tmp/my-output.json --no-strict', - async (t) => { - const options = { - outputPath: '/tmp/my-output.json', - }; - - const result = await run( - 'openfn job.js --output-path=/tmp/my-output.json --no-strict', - JOB_EXPORT_STATE, + 'openfn job.js --output-path=/tmp/my-output.json', + 
EXPR_EXPORT_STATE, options ); t.deepEqual(result, { data: {}, foo: 'bar' }); @@ -344,7 +359,7 @@ test.serial( }; const result = await run( 'openfn job.js --state-path=/tmp/my-state.json', - JOB_TIMES_2, + EXPR_TIMES_2, options ); t.assert(result.data.count === 66); @@ -360,7 +375,7 @@ test.serial( }; const result = await run( 'openfn job.js -s /tmp/my-state.json', - JOB_TIMES_2, + EXPR_TIMES_2, options ); t.assert(result.data.count === 66); @@ -373,7 +388,7 @@ test.serial( const state = JSON.stringify({ data: { count: 11 } }); const result = await run( `openfn job.js --state-stdin=${state}`, - JOB_TIMES_2 + EXPR_TIMES_2 ); t.assert(result.data.count === 22); } @@ -383,7 +398,7 @@ test.serial( 'read state from stdin with alias: openfn job.js -S ', async (t) => { const state = JSON.stringify({ data: { count: 44 } }); - const result = await run(`openfn job.js -S ${state}`, JOB_TIMES_2); + const result = await run(`openfn job.js -S ${state}`, EXPR_TIMES_2); t.assert(result.data.count === 88); } ); @@ -394,7 +409,7 @@ test.serial( const state = JSON.stringify({ data: { count: 49.5 } }); const result = await run( `openfn --no-expand-adaptors -S ${state} --adaptor times-two=/modules/times-two`, - JOB_MOCK_ADAPTOR + EXPR_MOCK_ADAPTOR ); t.assert(result.data.count === 99); } @@ -406,7 +421,7 @@ test.serial( const state = JSON.stringify({ data: { count: 49.5 } }); const result = await run( `openfn --no-expand-adaptors -S ${state} --adaptors times-two=/modules/times-two`, - JOB_MOCK_ADAPTOR + EXPR_MOCK_ADAPTOR ); t.assert(result.data.count === 99); } @@ -418,7 +433,7 @@ test.serial( const state = JSON.stringify({ data: { count: 49.5 } }); const result = await run( `openfn --no-expand-adaptors -S ${state} -a times-two=/modules/times-two`, - JOB_MOCK_ADAPTOR + EXPR_MOCK_ADAPTOR ); t.assert(result.data.count === 99); } @@ -430,7 +445,7 @@ test.serial( const state = JSON.stringify({ data: { count: 11 } }); const job = 'export default [byTwo]'; const result = await run( - `openfn 
--no-expand-adaptors -S ${state} -a times-two`, + `openfn --no-expand-adaptors -S ${state} -a times-two --no-autoinstall`, job, { repoDir: '/repo', @@ -479,7 +494,7 @@ test.serial( const options = { outputPath: 'output.json', - jobPath: 'wf.json', + expressionPath: 'wf.json', repoDir: '/repo', }; @@ -497,9 +512,13 @@ test.serial( async (t) => { const job = 'fn((state) => { /* function isn\'t actually called by the mock adaptor */ throw new Error("fake adaptor") });'; - const result = await run('openfn -a @openfn/language-postgres', job, { - repoDir: '/repo', - }); + const result = await run( + 'openfn -a @openfn/language-postgres --no-autoinstall', + job, + { + repoDir: '/repo', + } + ); t.assert(result === 'execute called!'); } ); @@ -548,7 +567,7 @@ test.serial( }); const result = await run('workflow.json -m', workflow, { - jobPath: 'workflow.json', + expressionPath: 'workflow.json', }); t.true(result.data.done); delete process.env.OPENFN_ADAPTORS_REPO; @@ -576,7 +595,7 @@ test.serial('compile a job: openfn compile job.js to file', async (t) => { test.serial('compile a workflow: openfn compile wf.json to file', async (t) => { const options = { outputPath: 'out.json', - jobPath: 'wf.json', // just to fool the test + expressionPath: 'wf.json', // just to fool the test }; const wf = JSON.stringify({ @@ -588,7 +607,7 @@ test.serial('compile a workflow: openfn compile wf.json to file', async (t) => { const output = await fs.readFile('out.json', 'utf8'); const result = JSON.parse(output); t.truthy(result); - t.is(result.jobs[0].expression, 'export default [x()];'); + t.is(result.workflow.steps[0].expression, 'export default [x()];'); }); test.serial('docs should print documentation with full names', async (t) => { diff --git a/packages/cli/test/compile/compile.test.ts b/packages/cli/test/compile/compile.test.ts index 55b867860..46f6520d1 100644 --- a/packages/cli/test/compile/compile.test.ts +++ b/packages/cli/test/compile/compile.test.ts @@ -8,13 +8,14 @@ import 
compile, { resolveSpecifierPath, } from '../../src/compile/compile'; import { CompileOptions } from '../../src/compile/command'; -import { ExecutionPlan } from '@openfn/runtime'; +import { mockFs, resetMockFs } from '../util'; +import { ExecutionPlan, Job } from '@openfn/lexicon'; const mockLog = createMockLogger(); -test.afterEach(() => { - mock.restore(); -}); +test.after(resetMockFs); + +const expressionPath = '/job.js'; type TransformOptionsWithImports = { ['add-imports']: { @@ -26,67 +27,64 @@ type TransformOptionsWithImports = { }; }; +// TODO this isn't really used and is a bit of a quirky thing +// The compiler itself probably doesn't do any path parsing? +// Just compile a source string and return the result test('compile from source string', async (t) => { const job = 'x();'; - const opts = { - job, - } as CompileOptions; + const opts = {} as CompileOptions; - const result = await compile(opts, mockLog); + const result = await compile(job, opts, mockLog); const expected = 'export default [x()];'; t.is(result, expected); }); test.serial('compile from path', async (t) => { - const pnpm = path.resolve('../../node_modules/.pnpm'); - mock({ - [pnpm]: mock.load(pnpm, {}), - '/tmp/job.js': 'x();', + const job = 'x();'; + mockFs({ + [expressionPath]: job, }); - const jobPath = '/tmp/job.js'; - const opts = { - jobPath, + expressionPath, } as CompileOptions; - const result = await compile(opts, mockLog); + const result = await compile(expressionPath, opts, mockLog); const expected = 'export default [x()];'; t.is(result, expected); }); -test('compile from workflow', async (t) => { - const workflow = { - start: 'a', - jobs: [ - { id: 'a', expression: 'x()' }, - { id: 'b', expression: 'x()' }, - ], - }; +test('compile from execution plan', async (t) => { + const plan = { + workflow: { + steps: [ + { id: 'a', expression: 'x()' }, + { id: 'b', expression: 'x()' }, + ], + }, + options: {}, + } as ExecutionPlan; - const opts = { - workflow, - } as CompileOptions; + const 
opts = {} as CompileOptions; - const result = (await compile(opts, mockLog)) as ExecutionPlan; + const result = (await compile(plan, opts, mockLog)) as ExecutionPlan; const expected = 'export default [x()];'; - t.is(result.jobs[0].expression, expected); - t.is(result.jobs[1].expression, expected); + const [a, b] = result.workflow.steps; + t.is((a as Job).expression, expected); + t.is((b as Job).expression, expected); }); test('throw an AbortError if a job is uncompilable', async (t) => { const job = 'a b'; - const opts = { - job, - } as CompileOptions; + const opts = {} as CompileOptions; const logger = createMockLogger(); - await t.throwsAsync(() => compile(opts, logger), { + await t.throwsAsync(() => compile(job, opts, logger), { message: 'Failed to compile job', }); @@ -95,18 +93,18 @@ test('throw an AbortError if a job is uncompilable', async (t) => { t.assert(logger._find('error', /critical error: aborting command/i)); }); -test('throw an AbortError if a workflow contains an uncompilable job', async (t) => { - const workflow = { - start: 'a', - jobs: [{ id: 'a', expression: 'x b' }], +test('throw an AbortError if an xplan contains an uncompilable job', async (t) => { + const plan: ExecutionPlan = { + workflow: { + steps: [{ id: 'a', expression: 'x b' }], + }, + options: {}, }; - const opts = { - workflow, - } as CompileOptions; + const opts = {} as CompileOptions; const logger = createMockLogger(); - await t.throwsAsync(() => compile(opts, logger), { + await t.throwsAsync(() => compile(plan, opts, logger), { message: 'Failed to compile job a', }); diff --git a/packages/cli/test/compile/options.test.ts b/packages/cli/test/compile/options.test.ts index ed8a48390..e3d896f8d 100644 --- a/packages/cli/test/compile/options.test.ts +++ b/packages/cli/test/compile/options.test.ts @@ -13,7 +13,7 @@ test('correct default options', (t) => { t.deepEqual(options.adaptors, []); t.is(options.command, 'compile'); t.is(options.expandAdaptors, true); - t.is(options.jobPath, 
'job.js'); + t.is(options.expressionPath, 'job.js'); t.falsy(options.logJson); // TODO this is undefined right now t.is(options.outputStdout, true); t.is(options.path, 'job.js'); @@ -52,7 +52,7 @@ test("don't expand adaptors if --no-expand-adaptors is set", (t) => { test('default job path', (t) => { const options = parse('compile /tmp/my-job/ --immutable'); t.is(options.path, '/tmp/my-job/'); - t.is(options.jobPath, '/tmp/my-job/job.js'); + t.is(options.expressionPath, '/tmp/my-job/job.js'); }); test('enable json logging', (t) => { diff --git a/packages/cli/test/docgen/handler.test.ts b/packages/cli/test/docgen/handler.test.ts index 52cee0471..07e38c133 100644 --- a/packages/cli/test/docgen/handler.test.ts +++ b/packages/cli/test/docgen/handler.test.ts @@ -53,7 +53,7 @@ const options = { }; test.serial('generate mock docs', async (t) => { - const path = await docsHandler(options, logger, mockGen); + const path = (await docsHandler(options, logger, mockGen)) as string; t.is(path, `${DOCS_PATH}/${specifier}.json`); const docs = await loadJSON(path); diff --git a/packages/cli/test/execute/execute.test.ts b/packages/cli/test/execute/execute.test.ts index 59513bec7..a3e648b6b 100644 --- a/packages/cli/test/execute/execute.test.ts +++ b/packages/cli/test/execute/execute.test.ts @@ -1,12 +1,11 @@ // bunch of unit tests on the execute function itself // so far this is only done in commands.test.ts, which has the cli overhead // I don't want any io or adaptor tests here, really just looking for the actual execute flow -import mock from 'mock-fs'; -import path from 'node:path'; import { createMockLogger } from '@openfn/logger'; import test from 'ava'; import { ExecuteOptions } from '../../src/execute/command'; import handler from '../../src/execute/handler'; +import { mockFs, resetMockFs } from '../util'; // Why is this logging everywhere? 
const logger = createMockLogger(undefined, { level: 'none' }); @@ -33,234 +32,280 @@ const defaultOptions = { const fn = `const fn = (fn) => (s) => fn(s); `; -test.before(() => { - const pnpm = path.resolve('../../node_modules/.pnpm'); - mock({ - '/repo/': mock.load(path.resolve('test/__repo__/'), {}), - [pnpm]: mock.load(pnpm, {}), - '/exp.js': `${fn}fn(() => ({ data: 42 }));`, - '/config.json': JSON.stringify({ id: 'x' }), - '/workflow.json': JSON.stringify({ - jobs: [ - { - expression: `${fn}fn(() => ({ data: { count: 42 } }));`, - }, - ], - }), - }); -}); - -test.after(() => mock.restore()); +test.after(resetMockFs); -test('run a job', async (t) => { +test.serial('run a simple job', async (t) => { const job = `${fn}fn(() => ({ data: 42 }));`; + + mockFs({ + '/job.js': job, + }); + const options = { ...defaultOptions, - job, + expressionPath: '/job.js', }; + const result = await handler(options, logger); t.is(result.data, 42); }); -test('run a job with initial state', async (t) => { +test.serial('run a job with initial state', async (t) => { const job = `${fn}fn((state) => state);`; + mockFs({ + '/job.js': job, + }); + const options = { ...defaultOptions, - job, + expressionPath: '/job.js', stateStdin: JSON.stringify({ data: { count: 10 } }), }; - const result = await handler(options, logger); - t.is(result.data.count, 10); -}); -test('run a workflow from a path', async (t) => { - const options = { - ...defaultOptions, - workflowPath: '/workflow.json', - }; const result = await handler(options, logger); - t.is(result.data.count, 42); + t.is(result.data.count, 10); }); -test('run a workflow', async (t) => { +test.serial('run a workflow', async (t) => { const workflow = { - start: 'a', - jobs: [ - { - id: 'a', - expression: `${fn}fn(() => ({ data: { count: 42 } }));`, - next: { b: true }, - }, - { - id: 'b', - expression: `${fn}fn((state) => { state.data.count = state.data.count * 2; return state; });`, - }, - ], + options: { + start: 'a', + }, + workflow: { + 
steps: [ + { + id: 'a', + expression: `${fn}fn(() => ({ data: { count: 42 } }));`, + next: { b: true }, + }, + { + id: 'b', + expression: `${fn}fn((state) => { state.data.count = state.data.count * 2; return state; });`, + }, + ], + }, }; + mockFs({ + '/workflow.json': JSON.stringify(workflow), + }); + const options = { ...defaultOptions, - workflow, + workflowPath: '/workflow.json', }; const result = await handler(options, logger); t.is(result.data.count, 84); }); -test('run a workflow with state', async (t) => { +test.serial('run a workflow with state', async (t) => { const workflow = { - start: 'a', - jobs: [ - { - id: 'a', - state: { data: { count: 1 } }, - expression: `${fn}fn((state) => { state.data.count += 1; return state;});`, - next: { b: true }, - }, - { - id: 'b', - state: { data: { diff: 2 } }, - expression: `${fn}fn((state) => { state.data.count += state.data.diff; return state; });`, - }, - ], + workflow: { + steps: [ + { + id: 'a', + state: { data: { count: 1 } }, + expression: `${fn}fn((state) => { state.data.count += 1; return state;});`, + next: { b: true }, + }, + { + id: 'b', + state: { data: { diff: 2 } }, + expression: `${fn}fn((state) => { state.data.count += state.data.diff; return state; });`, + }, + ], + }, }; + + mockFs({ + '/workflow.json': JSON.stringify(workflow), + }); + const options = { ...defaultOptions, - workflow, + workflowPath: '/workflow.json', }; const result = await handler(options, logger); t.is(result.data.count, 4); }); -test('run a workflow with initial state', async (t) => { +test.serial('run a workflow with initial state from stdin', async (t) => { const workflow = { - start: 'a', - jobs: [ - { - id: 'a', - expression: `${fn}fn((state) => { state.data.count += 1; return state;});`, - next: { b: true }, - }, - { - id: 'b', - expression: `${fn}fn((state) => { state.data.count += 1; return state; });`, - }, - ], + workflow: { + steps: [ + { + id: 'a', + expression: `${fn}fn((state) => { state.data.count += 1; return 
state;});`, + next: { b: true }, + }, + { + id: 'b', + expression: `${fn}fn((state) => { state.data.count += 1; return state; });`, + }, + ], + }, }; + mockFs({ + '/workflow.json': JSON.stringify(workflow), + }); + const options = { ...defaultOptions, - workflow, + workflowPath: '/workflow.json', stateStdin: JSON.stringify({ data: { count: 10 } }), }; const result = await handler(options, logger); t.is(result.data.count, 12); }); -test('run a workflow with an expression as a path', async (t) => { +test.serial('run a workflow with an expression as a path', async (t) => { const workflow = { - jobs: [ - { - expression: '/exp.js', - }, - ], + workflow: { + steps: [ + { + expression: '/exp.js', + }, + ], + }, }; + mockFs({ + '/workflow.json': JSON.stringify(workflow), + '/exp.js': `${fn}fn(() => ({ data: 42 }));`, + }); + const options = { ...defaultOptions, - workflow, + workflowPath: '/workflow.json', }; const result = await handler(options, logger); t.is(result.data, 42); }); -test('run a workflow with config as a path', async (t) => { +test.serial('run a workflow with config as a path', async (t) => { const workflow = { - jobs: [ - { - configuration: '/config.json', - expression: `${fn}fn((state) => { state.cfg = state.configuration; return state; })`, - }, - ], + workflow: { + steps: [ + { + configuration: '/config.json', + expression: `${fn}fn((state) => { state.cfg = state.configuration; return state; })`, + }, + ], + }, }; + mockFs({ + '/workflow.json': JSON.stringify(workflow), + '/config.json': JSON.stringify({ id: 'x' }), + }); + const options = { ...defaultOptions, - workflow, + workflowPath: '/workflow.json', }; const result = await handler(options, logger); t.is(result.cfg.id, 'x'); }); -test('run a workflow from a start node', async (t) => { +test.serial('run a workflow from a start node', async (t) => { const workflow = { - jobs: [ - { - id: 'a', - expression: `${fn}fn((state) => ({ data: { result: 'a' }}))`, - }, - { - id: 'b', - expression: 
`${fn}fn((state) => ({ data: { result: 'b' }}))`, - }, - ], + workflow: { + steps: [ + { + id: 'a', + expression: `${fn}fn((state) => ({ data: { result: 'a' }}))`, + }, + { + id: 'b', + expression: `${fn}fn((state) => ({ data: { result: 'b' }}))`, + }, + ], + }, }; + mockFs({ + '/workflow.json': JSON.stringify(workflow), + }); + const options = { ...defaultOptions, - workflow, + workflowPath: '/workflow.json', start: 'b', }; const result = await handler(options, logger); t.is(result.data.result, 'b'); }); -test('run a workflow with an adaptor (longform)', async (t) => { +test.serial('run a workflow with an adaptor (longform)', async (t) => { const workflow = { - jobs: [ - { - adaptor: '@openfn/language-common', - expression: `fn((state) => state);`, - }, - ], + workflow: { + steps: [ + { + adaptor: '@openfn/language-common', + expression: `fn((state) => state);`, + }, + ], + }, }; + mockFs({ + '/workflow.json': JSON.stringify(workflow), + }); + const options = { ...defaultOptions, - workflow, + workflowPath: '/workflow.json', stateStdin: JSON.stringify({ data: { count: 10 } }), }; const result = await handler(options, logger); t.is(result.data.count, 10); }); -test('run a workflow with an adaptor (shortform)', async (t) => { +test.serial('run a workflow with an adaptor (shortform)', async (t) => { const workflow = { - jobs: [ - { - adaptor: 'common', - expression: `fn((state) => state);`, - }, - ], + workflow: { + steps: [ + { + adaptor: 'common', + expression: `fn((state) => state);`, + }, + ], + }, }; + mockFs({ + '/workflow.json': JSON.stringify(workflow), + }); + const options = { ...defaultOptions, - workflow, + workflowPath: '/workflow.json', stateStdin: JSON.stringify({ data: { count: 10 } }), + expandAdaptors: true, }; const result = await handler(options, logger); t.is(result.data.count, 10); }); -test('run a job without compilation', async (t) => { +test.serial('run a job without compilation', async (t) => { const job = `export default [() => ({ data: { 
count: 42 } })]`; + mockFs({ + '/job.js': job, + }); + const options = { ...defaultOptions, compile: false, - job, + expressionPath: '/job.js', }; + const result = await handler(options, logger); t.is(result.data.count, 42); }); -test('run a job which does not return state', async (t) => { +test.serial('run a job which does not return state', async (t) => { const job = `${fn}fn(() => {});`; + mockFs({ + '/job.js': job, + }); + const options = { ...defaultOptions, - job, + expressionPath: '/job.js', }; const result = await handler(options, logger); t.falsy(result); diff --git a/packages/cli/test/execute/get-autoinstall-targets.test.ts b/packages/cli/test/execute/get-autoinstall-targets.test.ts index 9dc275a6d..33a29786b 100644 --- a/packages/cli/test/execute/get-autoinstall-targets.test.ts +++ b/packages/cli/test/execute/get-autoinstall-targets.test.ts @@ -1,162 +1,82 @@ import test from 'ava'; import getAutoinstallTargets from '../../src/execute/get-autoinstall-targets'; +import { ExecutionPlan, Job } from '@openfn/lexicon'; -test('return empty if an empty array is passed', (t) => { - const result = getAutoinstallTargets({ - adaptors: [], - }); - t.truthy(result); - t.is(result.length, 0); -}); - -test('return 2 valid targets', (t) => { - const result = getAutoinstallTargets({ - adaptors: ['a', 'b'], - }); - t.truthy(result); - t.is(result.length, 2); - t.deepEqual(result, ['a', 'b']); -}); - -test('return empty if a path is passed', (t) => { - const result = getAutoinstallTargets({ - adaptors: ['a=a/b/c'], - }); - t.truthy(result); - t.is(result.length, 0); -}); - -test('return 1 valid target', (t) => { - const result = getAutoinstallTargets({ - adaptors: ['a=/some/path', 'b@1.2.3'], - }); - t.truthy(result); - t.is(result.length, 1); - t.deepEqual(result, ['b@1.2.3']); -}); - -test('return language common', (t) => { - const result = getAutoinstallTargets({ - adaptors: ['@openfn/language-common'], - }); - t.truthy(result); - t.is(result.length, 1); - 
t.deepEqual(result, ['@openfn/language-common']); -}); - -test('return language common with specifier', (t) => { - const result = getAutoinstallTargets({ - adaptors: ['@openfn/language-common@1.0.0'], - }); - t.truthy(result); - t.is(result.length, 1); - t.deepEqual(result, ['@openfn/language-common@1.0.0']); -}); - -test('reject language common with path', (t) => { - const result = getAutoinstallTargets({ - adaptors: ['@openfn/language-common=/a/b/c'], - }); - t.truthy(result); - t.is(result.length, 0); -}); - -test('reject language common with specifier and path', (t) => { - const result = getAutoinstallTargets({ - adaptors: ['@openfn/language-common@1.0.0=/tmp/repo/common'], - }); - t.truthy(result); - t.is(result.length, 0); -}); - -test('empty workflow', (t) => { - const result = getAutoinstallTargets({ +const getPlan = (steps: Job[]) => + ({ workflow: { - start: 'a', - jobs: {}, + steps, }, - }); + options: {}, + } as ExecutionPlan); + +test('empty plan', (t) => { + const plan = getPlan([]); + const result = getAutoinstallTargets(plan); t.truthy(result); t.is(result.length, 0); }); -test('workflow with zero adaptors', (t) => { - const result = getAutoinstallTargets({ - workflow: { - start: 'a', - jobs: { - a: { - expression: 'fn()', - }, - }, +test('plan with zero adaptors', (t) => { + const plan = getPlan([ + { + expression: 'fn()', }, - }); + ]); + const result = getAutoinstallTargets(plan); t.truthy(result); t.is(result.length, 0); }); -test('workflow with multiple adaptors', (t) => { - const result = getAutoinstallTargets({ - workflow: { - start: 'a', - jobs: { - a: { - adaptor: '@openfn/language-common', - expression: 'fn()', - }, - b: { - adaptor: '@openfn/language-http', - expression: 'fn()', - }, - }, +test('plan with multiple adaptors', (t) => { + const plan = getPlan([ + { + adaptor: '@openfn/language-common', + expression: 'fn()', + }, + { + adaptor: '@openfn/language-http', + expression: 'fn()', }, - }); + ]); + const result = 
getAutoinstallTargets(plan); t.is(result.length, 2); t.deepEqual(result, ['@openfn/language-common', '@openfn/language-http']); }); -test('workflow with duplicate adaptors', (t) => { - const result = getAutoinstallTargets({ - workflow: { - start: 'a', - jobs: { - a: { - adaptor: '@openfn/language-common', - expression: 'fn()', - }, - b: { - adaptor: '@openfn/language-common', - expression: 'fn()', - }, - }, +test('plan with duplicate adaptors', (t) => { + const plan = getPlan([ + { + adaptor: '@openfn/language-common', + expression: 'fn()', + }, + { + adaptor: '@openfn/language-common', + expression: 'fn()', }, - }); + ]); + const result = getAutoinstallTargets(plan); t.is(result.length, 1); t.deepEqual(result, ['@openfn/language-common']); }); -test('workflow with one adaptor but different versions', (t) => { - const result = getAutoinstallTargets({ - adaptors: [], - workflow: { - start: 'a', - jobs: { - a: { - adaptor: '@openfn/language-common@1.0.0', - expression: 'fn()', - }, - b: { - adaptor: '@openfn/language-common@2.0.0', - expression: 'fn()', - }, - c: { - adaptor: '@openfn/language-common@3.0.0', - expression: 'fn()', - }, - }, +test('plan with one adaptor but different versions', (t) => { + const plan = getPlan([ + { + adaptor: '@openfn/language-common@1.0.0', + expression: 'fn()', }, - }); + { + adaptor: '@openfn/language-common@2.0.0', + expression: 'fn()', + }, + { + adaptor: '@openfn/language-common@3.0.0', + expression: 'fn()', + }, + ]); + const result = getAutoinstallTargets(plan); t.is(result.length, 3); t.deepEqual(result, [ '@openfn/language-common@1.0.0', @@ -164,3 +84,15 @@ test('workflow with one adaptor but different versions', (t) => { '@openfn/language-common@3.0.0', ]); }); + +test('do not return adaptors with a path', (t) => { + const plan = getPlan([ + { + expression: 'fn()', + adaptor: 'common=a/b/c', + }, + ]); + const result = getAutoinstallTargets(plan); + t.truthy(result); + t.is(result.length, 0); +}); diff --git 
a/packages/cli/test/execute/options.test.ts b/packages/cli/test/execute/options.test.ts index 968523921..4b9d0e7cd 100644 --- a/packages/cli/test/execute/options.test.ts +++ b/packages/cli/test/execute/options.test.ts @@ -11,18 +11,17 @@ test('correct default options', (t) => { const options = parse('execute job.js'); t.deepEqual(options.adaptors, []); - t.is(options.autoinstall, false); + t.is(options.autoinstall, true); t.is(options.command, 'execute'); t.is(options.compile, true); t.is(options.expandAdaptors, true); t.is(options.immutable, false); - t.is(options.jobPath, 'job.js'); + t.is(options.expressionPath, 'job.js'); t.falsy(options.logJson); // TODO this is undefined right now t.is(options.outputPath, 'output.json'); t.is(options.outputStdout, false); t.is(options.path, 'job.js'); t.is(options.skipAdaptorValidation, false); - t.is(options.strict, false); t.is(options.timeout, 300000); t.falsy(options.useAdaptorsMonorepo); }); @@ -79,7 +78,7 @@ test('enable immutability', (t) => { test('default job path', (t) => { const options = parse('execute /tmp/my-job/ --immutable'); t.is(options.path, '/tmp/my-job/'); - t.is(options.jobPath, '/tmp/my-job/job.js'); + t.is(options.expressionPath, '/tmp/my-job/job.js'); }); test('enable json logging', (t) => { @@ -87,16 +86,6 @@ test('enable json logging', (t) => { t.true(options.logJson); }); -test('disable strict output', (t) => { - const options = parse('execute job.js --no-strict'); - t.false(options.strict); -}); - -test('disable strict output (legacy)', (t) => { - const options = parse('execute job.js --no-strict-output'); - t.false(options.strict); -}); - test('set an output path (short)', (t) => { const options = parse('execute job.js -o /tmp/out.json'); t.is(options.outputPath, '/tmp/out.json'); diff --git a/packages/cli/test/execute/parse-adaptors.test.ts b/packages/cli/test/execute/parse-adaptors.test.ts index 46f2444dc..cdbdf6753 100644 --- a/packages/cli/test/execute/parse-adaptors.test.ts +++ 
b/packages/cli/test/execute/parse-adaptors.test.ts @@ -1,48 +1,57 @@ import test from 'ava'; +import { ExecutionPlan, Job } from '@openfn/lexicon'; import { parseAdaptors } from '../../src/execute/execute'; -test('parse a simple specifier', (t) => { - const adaptors = ['a']; - const result = parseAdaptors({ adaptors }); - t.assert(Object.keys(result).length === 1); - t.truthy(result.a); - t.falsy(Object.keys(result.a).length); +const createPlan = (adaptor: string): ExecutionPlan => ({ + workflow: { + steps: [ + { + adaptor, + expression: '.', + }, + ], + }, + options: {}, }); -test('parse multiple specifiers', (t) => { - const adaptors = ['a', 'b']; - const result = parseAdaptors({ adaptors }); - t.assert(Object.keys(result).length === 2); - t.truthy(result.a); - t.truthy(result.b); +test('parse a simple specifier with no path or version', (t) => { + const adaptor = 'a'; + const plan = createPlan(adaptor); + const result = parseAdaptors(plan); + + t.deepEqual(result, { a: {} }); }); test('parse a specifier with a path', (t) => { - const adaptors = ['a=x']; - const result = parseAdaptors({ adaptors }); - t.assert(Object.keys(result).length === 1); - t.deepEqual(result.a, { path: 'x' }); + const adaptor = 'a=x'; + const plan = createPlan(adaptor); + const result = parseAdaptors(plan); + + t.deepEqual(result, { a: { path: 'x' } }); }); test('parse a specifier with a version', (t) => { - const adaptors = ['a@1']; - const result = parseAdaptors({ adaptors }); - t.assert(Object.keys(result).length === 1); - t.deepEqual(result.a, { version: '1' }); + const adaptor = 'a@1'; + const plan = createPlan(adaptor); + const result = parseAdaptors(plan); + + t.deepEqual(result, { a: { version: '1' } }); }); test('parse a specifier with a path and version', (t) => { - const adaptors = ['a@1=x']; - const result = parseAdaptors({ adaptors }); - t.assert(Object.keys(result).length === 1); - t.deepEqual(result.a, { path: 'x', version: '1' }); + const adaptor = 'a@1=x'; + const plan = 
createPlan(adaptor); + const result = parseAdaptors(plan); + + t.deepEqual(result, { a: { path: 'x', version: '1' } }); }); test('parse @openfn/language-common@1.0.0=~/repo/modules/common', (t) => { - const adaptors = ['@openfn/language-common@1.0.0=~/repo/modules/common']; - const result = parseAdaptors({ adaptors }); - t.assert(Object.keys(result).length === 1); + const adaptor = '@openfn/language-common@1.0.0=~/repo/modules/common'; + const plan = createPlan(adaptor); + const result = parseAdaptors(plan); + t.deepEqual(result, { '@openfn/language-common': { path: '~/repo/modules/common', @@ -51,25 +60,29 @@ test('parse @openfn/language-common@1.0.0=~/repo/modules/common', (t) => { }); }); -test('parse workflow', (t) => { - const workflow = { - start: 'a', - jobs: { - a: { - adaptor: '@openfn/language-common', - expression: 'fn()', - }, - b: { - adaptor: '@openfn/language-http@1.0.0', - expression: 'fn()', - }, - c: { - adaptor: '@openfn/language-salesforce=a/b/c', - expression: 'fn()', - }, +test('parse plan with several steps', (t) => { + const plan = { + options: { + start: 'a', + }, + workflow: { + steps: [ + { + adaptor: '@openfn/language-common', + expression: 'fn()', + }, + { + adaptor: '@openfn/language-http@1.0.0', + expression: 'fn()', + }, + { + adaptor: '@openfn/language-salesforce=a/b/c', + expression: 'fn()', + }, + ], }, }; - const result = parseAdaptors({ workflow }); + const result = parseAdaptors(plan); t.assert(Object.keys(result).length === 3); t.deepEqual(result, { '@openfn/language-common': {}, diff --git a/packages/cli/test/integration.test.ts b/packages/cli/test/integration.test.ts index b4499cd1c..c20b68cf5 100644 --- a/packages/cli/test/integration.test.ts +++ b/packages/cli/test/integration.test.ts @@ -4,7 +4,7 @@ import { exec } from 'node:child_process'; test('openfn help', async (t) => { await new Promise((resolve) => { exec('pnpm openfn help', (error, stdout, stderr) => { - t.regex(stdout, /Run an openfn job/); + t.regex(stdout, 
/Run an openfn expression/); t.falsy(error); t.falsy(stderr); resolve(); diff --git a/packages/cli/test/options/ensure/inputPath.test.ts b/packages/cli/test/options/ensure/inputPath.test.ts index 8c7690c5b..e62a827b7 100644 --- a/packages/cli/test/options/ensure/inputPath.test.ts +++ b/packages/cli/test/options/ensure/inputPath.test.ts @@ -1,37 +1,37 @@ import test from 'ava'; import { inputPath, Opts } from '../../../src/options'; -test('sets jobPath using path', (t) => { +test('sets expressionPath using path', (t) => { const opts = { path: 'jam.js', } as Opts; inputPath.ensure!(opts); - t.is(opts.jobPath, 'jam.js'); + t.is(opts.expressionPath, 'jam.js'); }); -test('sets jobPath to path/job.js if path is a folder', (t) => { +test('sets expressionPath to path/job.js if path is a folder', (t) => { const opts = { path: '/jam', } as Opts; inputPath.ensure!(opts); - t.is(opts.jobPath, '/jam/job.js'); + t.is(opts.expressionPath, '/jam/job.js'); }); -test('sets jobPath to path/job.js if path is a folder (trailing slash)', (t) => { +test('sets expressionPath to path/job.js if path is a folder (trailing slash)', (t) => { const opts = { path: '/jam/', } as Opts; inputPath.ensure!(opts); - t.is(opts.jobPath, '/jam/job.js'); + t.is(opts.expressionPath, '/jam/job.js'); }); -test('set workflowPath if path ends in json', (t) => { +test.skip('set workflowPath if path ends in json', (t) => { const opts = { path: 'workflow.json', } as Opts; diff --git a/packages/cli/test/options/ensure/strict.test.ts b/packages/cli/test/options/ensure/strict.test.ts deleted file mode 100644 index 7bdb5783e..000000000 --- a/packages/cli/test/options/ensure/strict.test.ts +++ /dev/null @@ -1,51 +0,0 @@ -import test from 'ava'; -import { strict, strictOutput, Opts } from '../../../src/options'; - -// Tests on legacy behaviour -test('strictOutput: true should set strict', (t) => { - const opts = { - strictOutput: true, - } as Opts; - strictOutput.ensure!(opts); - t.true(opts.strict); - // @ts-ignore - 
t.falsy(opts.strictOutput); -}); - -test('strictOutput: false should set strict', (t) => { - const opts = { - strictOutput: false, - } as Opts; - strictOutput.ensure!(opts); - t.false(opts.strict); - // @ts-ignore - t.falsy(opts.strictOutput); -}); - -test('strict should default to false', (t) => { - const opts = {} as Opts; - strict.ensure!(opts); - t.false(opts.strict); -}); - -test('strict can be set to true', (t) => { - const opts = { - strict: true, - } as Opts; - strict.ensure!(opts); - t.true(opts.strict); -}); - -test('strict overrides strictOutput', (t) => { - const opts = { - strictOutput: false, - strict: true, - } as Opts; - - // Note that the order of these two is important - strict.ensure!(opts); - strictOutput.ensure!(opts); - - t.true(opts.strict); - t.falsy(opts.strictOutput); -}); diff --git a/packages/cli/test/options/execute.test.ts b/packages/cli/test/options/execute.test.ts index 720e15906..e9950e22f 100644 --- a/packages/cli/test/options/execute.test.ts +++ b/packages/cli/test/options/execute.test.ts @@ -12,9 +12,9 @@ const cmd = yargs().command(execute); const parse = (command: string) => cmd.parse(command) as yargs.Arguments; -test("execute: jobPath'.'", (t) => { +test("execute: expressionPath'.'", (t) => { const options = parse('execute job.js'); - t.assert(options.jobPath === 'job.js'); + t.assert(options.expressionPath === 'job.js'); }); test('execute: default outputPath to ./output.json', (t) => { diff --git a/packages/cli/test/util.ts b/packages/cli/test/util.ts new file mode 100644 index 000000000..550720736 --- /dev/null +++ b/packages/cli/test/util.ts @@ -0,0 +1,18 @@ +/* + * test utils + */ +import mock from 'mock-fs'; +import path from 'node:path'; + +export const mockFs = (files: Record) => { + const pnpm = path.resolve('../../node_modules/.pnpm'); + mock({ + [pnpm]: mock.load(pnpm, {}), + '/repo/': mock.load(path.resolve('test/__repo__/'), {}), + ...files, + }); +}; + +export const resetMockFs = () => { + mock.restore(); +}; 
diff --git a/packages/cli/test/util/expand-adaptors.test.ts b/packages/cli/test/util/expand-adaptors.test.ts index fa0c19da7..23f1a006d 100644 --- a/packages/cli/test/util/expand-adaptors.test.ts +++ b/packages/cli/test/util/expand-adaptors.test.ts @@ -2,86 +2,91 @@ import test from 'ava'; import expandAdaptors from '../../src/util/expand-adaptors'; test('expands common', (t) => { - const { adaptors } = expandAdaptors({ adaptors: ['common'] }); - t.is(adaptors![0], '@openfn/language-common'); + const adaptors = expandAdaptors(['common']) as string[]; + t.is(adaptors[0], '@openfn/language-common'); }); test('expands common with version', (t) => { - const { adaptors } = expandAdaptors({ adaptors: ['common@1.0.0'] }); - t.is(adaptors![0], '@openfn/language-common@1.0.0'); + const adaptors = expandAdaptors(['common@1.0.0']) as string[]; + t.is(adaptors[0], '@openfn/language-common@1.0.0'); }); test('expands common with path', (t) => { - const { adaptors } = expandAdaptors({ adaptors: ['common=a/b/c'] }); - t.is(adaptors![0], '@openfn/language-common=a/b/c'); + const adaptors = expandAdaptors(['common=a/b/c']) as string[]; + t.is(adaptors[0], '@openfn/language-common=a/b/c'); }); test('expands http and dhis2', (t) => { - const { adaptors } = expandAdaptors({ adaptors: ['common', 'dhis2'] }); - const [a, b] = adaptors!; + const adaptors = expandAdaptors(['common', 'dhis2']) as string[]; + const [a, b] = adaptors; t.is(a, '@openfn/language-common'); t.is(b, '@openfn/language-dhis2'); }); test('expands nonsense', (t) => { - const { adaptors } = expandAdaptors({ adaptors: ['gn@25~A8fa1'] }); - t.is(adaptors![0], '@openfn/language-gn@25~A8fa1'); + const adaptors = expandAdaptors(['gn@25~A8fa1']) as string[]; + t.is(adaptors[0], '@openfn/language-gn@25~A8fa1'); }); test('does not expand a full adaptor name', (t) => { - const { adaptors } = expandAdaptors({ - adaptors: ['@openfn/language-common'], - }); - t.is(adaptors![0], '@openfn/language-common'); + const adaptors = 
expandAdaptors(['@openfn/language-common']) as string[]; + t.is(adaptors[0], '@openfn/language-common'); }); test('does not expand a full adaptor name with a path', (t) => { - const { adaptors } = expandAdaptors({ - adaptors: ['@openfn/language-common=a/b/c'], - }); - t.is(adaptors![0], '@openfn/language-common=a/b/c'); + const adaptors = expandAdaptors([ + '@openfn/language-common=a/b/c', + ]) as string[]; + t.is(adaptors[0], '@openfn/language-common=a/b/c'); }); test('does not expand a simple path', (t) => { - const { adaptors } = expandAdaptors({ adaptors: ['a/b'] }); - t.is(adaptors![0], 'a/b'); + const adaptors = expandAdaptors(['a/b']) as string[]; + t.is(adaptors[0], 'a/b'); }); test('does not expand an absolute path', (t) => { - const { adaptors } = expandAdaptors({ adaptors: ['/a/b/c'] }); - t.is(adaptors![0], '/a/b/c'); + const adaptors = expandAdaptors(['/a/b/c']) as string[]; + t.is(adaptors[0], '/a/b/c'); }); test('does not expand a js file', (t) => { - const { adaptors } = expandAdaptors({ adaptors: ['my-adaptor.js'] }); - t.is(adaptors![0], 'my-adaptor.js'); + const adaptors = expandAdaptors(['my-adaptor.js']) as string[]; + t.is(adaptors[0], 'my-adaptor.js'); }); -test('expands adaptors in a workflow', (t) => { - const workflow = { - start: 'a', - jobs: { - a: { - adaptor: 'common', - expression: 'fn()', - }, - b: { - adaptor: 'http@1.0.0', - expression: 'fn()', - }, - c: { - adaptor: 'salesforce=a/b/c', - expression: 'fn()', - }, - d: { - adaptor: 'a/b/c/my-adaptor.js', - expression: 'fn()', - }, +test('expands adaptors in an execution plan', (t) => { + const plan = { + workflow: { + steps: [ + { + id: 'a', + adaptor: 'common', + expression: 'fn()', + }, + { + id: 'b', + adaptor: 'http@1.0.0', + expression: 'fn()', + }, + { + id: 'c', + adaptor: 'salesforce=a/b/c', + expression: 'fn()', + }, + { + id: 'd', + adaptor: 'a/b/c/my-adaptor.js', + expression: 'fn()', + }, + ], }, + options: {}, }; - const newOpts = expandAdaptors({ workflow }); - 
t.is(newOpts.workflow!.jobs.a.adaptor, '@openfn/language-common'); - t.is(newOpts.workflow!.jobs.b.adaptor, '@openfn/language-http@1.0.0'); - t.is(newOpts.workflow!.jobs.c.adaptor, '@openfn/language-salesforce=a/b/c'); - t.is(newOpts.workflow!.jobs.d.adaptor, 'a/b/c/my-adaptor.js'); + expandAdaptors(plan); + const [a, b, c, d] = plan.workflow.steps; + t.is(a.adaptor, '@openfn/language-common'); + t.is(b.adaptor, '@openfn/language-http@1.0.0'); + t.is(c.adaptor, '@openfn/language-salesforce=a/b/c'); + t.is(d.adaptor, 'a/b/c/my-adaptor.js'); }); diff --git a/packages/cli/test/util/load-input.test.ts b/packages/cli/test/util/load-input.test.ts deleted file mode 100644 index 4ee819802..000000000 --- a/packages/cli/test/util/load-input.test.ts +++ /dev/null @@ -1,322 +0,0 @@ -import test from 'ava'; -import mock from 'mock-fs'; -import { createMockLogger } from '@openfn/logger'; -import loadInput from '../../src/util/load-input'; -import { ExecutionPlan } from '@openfn/runtime'; - -const logger = createMockLogger(undefined, { level: 'debug' }); - -test.beforeEach(() => { - mock({ - 'test/job.js': 'x', - 'test/wf.json': JSON.stringify({ - start: 'a', - jobs: [{ id: 'a', expression: 'x()' }], - }), - 'test/wf-err.json': '!!!', - }); -}); - -test.afterEach(() => { - logger._reset(); - mock.restore(); -}); - -test.serial('do nothing if no path provided', async (t) => { - const opts = {}; - - const result = await loadInput(opts, logger); - t.falsy(result); - t.assert(Object.keys(opts).length === 0); -}); - -test.serial('return the workflow if already set ', async (t) => { - const opts = { - workflow: { start: 'x', jobs: [] }, - job: 'j', - jobPath: 'test/job.js', - }; - - const result = (await loadInput(opts, logger)) as ExecutionPlan; - t.truthy(result); - t.is(result.start, 'x'); -}); - -test.serial( - 'return the job if already set (and workflow is not)', - async (t) => { - const opts = { - job: 'j', - jobPath: 'test/job.js', - }; - - const result = await loadInput(opts, 
logger); - t.is(result, 'j'); - } -); - -test.serial('load a job from a path and return the result', async (t) => { - const opts = { - jobPath: 'test/job.js', - }; - - const result = await loadInput(opts, logger); - t.is(result, 'x'); -}); - -test.serial('load a job from a path and mutate opts', async (t) => { - const opts = { - jobPath: 'test/job.js', - job: '', - }; - - await loadInput(opts, logger); - t.is(opts.job, 'x'); -}); - -test.serial('abort if the job cannot be found', async (t) => { - const opts = { - jobPath: 'test/blah.js', - }; - - const logger = createMockLogger(); - await t.throwsAsync(() => loadInput(opts, logger)); - - t.assert(logger._find('error', /job not found/i)); - t.assert( - logger._find('always', /Failed to load the job from test\/blah.js/i) - ); - t.assert(logger._find('error', /critical error: aborting command/i)); -}); - -test.serial( - 'load a workflow from a path and return the result as JSON', - async (t) => { - const opts = { - workflowPath: 'test/wf.json', - }; - - const result = await loadInput(opts, logger); - t.is(result.start, 'a'); - } -); - -test.serial('abort if the workflow cannot be found', async (t) => { - const opts = { - workflowPath: 'test/blah.json', - }; - - const logger = createMockLogger(); - await t.throwsAsync(() => loadInput(opts, logger)); - - t.assert(logger._find('error', /workflow not found/i)); - t.assert( - logger._find('always', /Failed to load a workflow from test\/blah.json/i) - ); - t.assert(logger._find('error', /critical error: aborting command/i)); -}); - -test.serial('abort if the workflow contains invalid json', async (t) => { - const opts = { - workflowPath: 'test/wf-err.json', - }; - - const logger = createMockLogger(); - await t.throwsAsync(() => loadInput(opts, logger)); - - t.assert(logger._find('error', /invalid json in workflow/i)); - t.assert( - logger._find('always', /check the syntax of the json at test\/wf-err.json/i) - ); - t.assert(logger._find('error', /critical error: aborting 
command/i)); -}); - -test.serial('load a workflow from a path and mutate opts', async (t) => { - const opts = { - workflowPath: 'test/wf.json', - workflow: undefined, - }; - - await loadInput(opts, logger); - t.is((opts.workflow as any).start, 'a'); -}); - -test.serial('prefer workflow to job if both are somehow set', async (t) => { - const opts = { - jobPath: 'test/job.js', - workflowPath: 'test/wf.json', - }; - - const result = await loadInput(opts, logger); - t.is(result.start, 'a'); -}); - -test.serial('resolve workflow expression paths (filename)', async (t) => { - mock({ - '/test/job.js': 'x', - '/test/wf.json': JSON.stringify({ - jobs: [{ expression: 'job.js' }], - }), - }); - - const opts = { - workflowPath: '/test/wf.json', - }; - - const result = (await loadInput(opts, logger)) as ExecutionPlan; - t.is(result.jobs[0].expression, 'x'); -}); - -test.serial( - 'resolve workflow expression paths (relative same dir)', - async (t) => { - mock({ - '/test/job.js': 'x', - '/test/wf.json': JSON.stringify({ - jobs: [{ expression: './job.js' }], - }), - }); - - const opts = { - workflowPath: '/test/wf.json', - }; - - const result = (await loadInput(opts, logger)) as ExecutionPlan; - t.is(result.jobs[0].expression, 'x'); - } -); - -test.serial( - 'resolve workflow expression paths (relative different dir)', - async (t) => { - mock({ - '/jobs/job.js': 'x', - '/test/wf.json': JSON.stringify({ - jobs: [{ expression: '../jobs/job.js' }], - }), - }); - - const opts = { - workflowPath: '/test/wf.json', - }; - - const result = (await loadInput(opts, logger)) as ExecutionPlan; - t.is(result.jobs[0].expression, 'x'); - } -); - -test.serial('resolve workflow expression paths (absolute)', async (t) => { - mock({ - '/job.js': 'x', - '/test/wf.json': JSON.stringify({ - start: 'a', - jobs: [{ expression: '/job.js' }], - }), - }); - - const opts = { - workflowPath: '/test/wf.json', - }; - - const result = (await loadInput(opts, logger)) as ExecutionPlan; - 
t.is(result.jobs[0].expression, 'x'); -}); - -test.serial('resolve workflow expression paths (home)', async (t) => { - mock({ - '~/job.js': 'x', - '/test/wf.json': JSON.stringify({ - jobs: [{ expression: '~/job.js' }], - }), - }); - - const opts = { - workflowPath: '/test/wf.json', - }; - - const result = (await loadInput(opts, logger)) as ExecutionPlan; - t.is(result.jobs[0].expression, 'x'); -}); - -test.serial('Load a workflow path with trailing spaces', async (t) => { - const opts = { - workflow: { jobs: [{ expression: 'test/job.js ' }] }, - }; - - const result = (await loadInput(opts, logger)) as ExecutionPlan; - t.is(result.jobs[0].expression, 'x'); -}); - -// Less thorough testing on config because it goes through the same code -test.serial('resolve workflow config paths (home)', async (t) => { - const cfg = { id: 'x' }; - const cfgString = JSON.stringify(cfg); - mock({ - '~/config.json': cfgString, - '/config.json': cfgString, - '/test/config.json': cfgString, - '/test/wf.json': JSON.stringify({ - jobs: [ - { configuration: '/config.json' }, - { configuration: '~/config.json' }, - { configuration: 'config.json ' }, // trailing spaces! 
- { configuration: './config.json ' }, - ], - }), - }); - - const opts = { - workflowPath: '/test/wf.json', - }; - - const result = (await loadInput(opts, logger)) as ExecutionPlan; - t.is(result.jobs.length, 4); - for (const job of result.jobs) { - t.deepEqual(job.configuration, cfg); - } -}); - -test.serial( - 'abort if a workflow expression path cannot be found', - async (t) => { - const opts = { - workflow: { start: 'x', jobs: [{ id: 'a', expression: 'err.js' }] }, - }; - - const logger = createMockLogger(); - await t.throwsAsync(() => loadInput(opts, logger)); - - t.assert(logger._find('error', /file not found for job a: err.js/i)); - t.assert( - logger._find( - 'always', - /This workflow references a file which cannot be found/i - ) - ); - t.assert(logger._find('error', /critical error: aborting command/i)); - } -); - -test.serial( - 'abort if a workflow expression path cannot be found for an anonymous job', - async (t) => { - const opts = { - workflow: { - start: 'x', - jobs: [{ expression: 'jam()' }, { expression: 'err.js' }], - }, - }; - - const logger = createMockLogger(); - await t.throwsAsync(() => loadInput(opts, logger)); - - t.assert(logger._find('error', /file not found for job 2: err.js/i)); - t.assert( - logger._find( - 'always', - /This workflow references a file which cannot be found/i - ) - ); - t.assert(logger._find('error', /critical error: aborting command/i)); - } -); diff --git a/packages/cli/test/util/load-plan.test.ts b/packages/cli/test/util/load-plan.test.ts new file mode 100644 index 000000000..caadaad71 --- /dev/null +++ b/packages/cli/test/util/load-plan.test.ts @@ -0,0 +1,273 @@ +import test from 'ava'; +import mock from 'mock-fs'; +import { createMockLogger } from '@openfn/logger'; +import type { Job } from '@openfn/lexicon'; + +import loadPlan from '../../src/util/load-plan'; +import { Opts } from '../../src/options'; + +const logger = createMockLogger(undefined, { level: 'debug' }); + +const sampleXPlan = { + options: { start: 
'a' }, + workflow: { + name: 'wf', + steps: [{ id: 'a', expression: 'x()' }], + }, +}; + +const createPlan = (steps: Job[] = []) => ({ + workflow: { + steps, + }, + options: { + start: steps[0]?.id ?? 'a', + }, +}); + +test.beforeEach(() => { + mock({ + 'test/job.js': 'x', + 'test/wf-old.json': JSON.stringify({ + start: 'a', + jobs: [{ id: 'a', expression: 'x()' }], + }), + 'test/wf.json': JSON.stringify(sampleXPlan), + 'test/wf-err.json': '!!!', + }); +}); + +test.afterEach(() => { + logger._reset(); + mock.restore(); +}); + +test.serial('expression: load a plan from an expression.js', async (t) => { + const opts = { + expressionPath: 'test/job.js', + plan: {}, + }; + + const plan = await loadPlan(opts as Opts, logger); + + t.truthy(plan); + t.deepEqual(plan.options, {}); + t.is(plan.workflow.steps.length, 1); + t.is(plan.workflow.name, 'job'); + t.deepEqual(plan.workflow.steps[0], { + expression: 'x', + }); +}); + +test.serial('expression: set an adaptor on the plan', async (t) => { + const opts = { + expressionPath: 'test/job.js', + // Note that adaptor expansion should have happened before loadPlan is called + adaptors: ['@openfn/language-common'], + } as Partial; + + const plan = await loadPlan(opts as Opts, logger); + + const step = plan.workflow.steps[0] as Job; + + t.is(step.adaptor, '@openfn/language-common'); +}); + +test.serial('expression: do not expand adaptors', async (t) => { + const opts = { + expressionPath: 'test/job.js', + expandAdaptors: false, + // Note that adaptor expansion should have happened before loadPlan is called + adaptors: ['common'], + } as Partial; + + const plan = await loadPlan(opts as Opts, logger); + + const step = plan.workflow.steps[0] as Job; + + t.is(step.adaptor, 'common'); +}); + +test.serial('expression: set a timeout on the plan', async (t) => { + const opts = { + expressionPath: 'test/job.js', + expandAdaptors: true, + timeout: 111, + } as Partial; + + const plan = await loadPlan(opts as Opts, logger); + + 
t.is(plan.options.timeout, 111); +}); + +test.serial('expression: set a start on the plan', async (t) => { + const opts = { + expressionPath: 'test/job.js', + start: 'x', + } as Partial; + + const plan = await loadPlan(opts as Opts, logger); + + t.is(plan.options.start, 'x'); +}); + +test.serial('xplan: load a plan from workflow path', async (t) => { + const opts = { + workflowPath: 'test/wf.json', + expandAdaptors: true, + plan: {}, + }; + + const plan = await loadPlan(opts as Opts, logger); + + t.truthy(plan); + t.deepEqual(plan, sampleXPlan); +}); + +test.serial('xplan: expand adaptors', async (t) => { + const opts = { + workflowPath: 'test/wf.json', + expandAdaptors: true, + plan: {}, + }; + + const plan = createPlan([ + { + id: 'a', + expression: '.', + adaptor: 'common@1.0.0', + }, + ]); + + mock({ + 'test/wf.json': JSON.stringify(plan), + }); + + const result = await loadPlan(opts as Opts, logger); + t.truthy(result); + + const step = result.workflow.steps[0] as Job; + t.is(step.adaptor, '@openfn/language-common@1.0.0'); +}); + +test.serial('xplan: do not expand adaptors', async (t) => { + const opts = { + workflowPath: 'test/wf.json', + expandAdaptors: false, + plan: {}, + }; + + const plan = createPlan([ + { + id: 'a', + expression: '.', + adaptor: 'common@1.0.0', + }, + ]); + + mock({ + 'test/wf.json': JSON.stringify(plan), + }); + + const result = await loadPlan(opts as Opts, logger); + t.truthy(result); + + const step = result.workflow.steps[0] as Job; + t.is(step.adaptor, 'common@1.0.0'); +}); + +test.serial('xplan: set timeout from CLI', async (t) => { + const opts = { + workflowPath: 'test/wf.json', + timeout: 666, + plan: {}, + }; + + const plan = createPlan([ + { + id: 'a', + expression: '.', + }, + ]); + // The incoming option should overwrite this one + // @ts-ignore + plan.options.timeout = 1; + + mock({ + 'test/wf.json': JSON.stringify(plan), + }); + + const { options } = await loadPlan(opts as Opts, logger); + t.is(options.timeout, 666); +}); 
+ +test.serial('xplan: set start from CLI', async (t) => { + const opts = { + workflowPath: 'test/wf.json', + start: 'b', + plan: {}, + }; + + const plan = createPlan([ + { + id: 'a', + expression: '.', + }, + ]); + // The incoming option should overwrite this one + // @ts-ignore + plan.options.start = 'a'; + + mock({ + 'test/wf.json': JSON.stringify(plan), + }); + + const { options } = await loadPlan(opts as Opts, logger); + t.is(options.start, 'b'); +}); + +test.serial('xplan: map to monorepo', async (t) => { + const opts = { + workflowPath: 'test/wf.json', + expandAdaptors: true, + plan: {}, + monorepoPath: '/repo/', + } as Partial; + + const plan = createPlan([ + { + id: 'a', + expression: '.', + adaptor: 'common', + }, + ]); + + mock({ + 'test/wf.json': JSON.stringify(plan), + }); + + const result = await loadPlan(opts as Opts, logger); + t.truthy(result); + + const step = result.workflow.steps[0] as Job; + t.is(step.adaptor, '@openfn/language-common=/repo/packages/common'); +}); + +test.serial('old-workflow: load a plan from workflow path', async (t) => { + const opts = { + workflowPath: 'test/wf-old.json', + plan: {}, + }; + + const plan = await loadPlan(opts as Opts, logger); + + t.deepEqual(plan.options, { + start: 'a', + }); + t.is(plan.workflow.steps.length, 1); + t.is(plan.workflow.name, 'wf-old'); + t.deepEqual(plan.workflow.steps[0], { + id: 'a', + expression: 'x()', + }); +}); diff --git a/packages/cli/test/util/map-adaptors-to-monorepo.test.ts b/packages/cli/test/util/map-adaptors-to-monorepo.test.ts index a5970ad01..3c6dd9a7d 100644 --- a/packages/cli/test/util/map-adaptors-to-monorepo.test.ts +++ b/packages/cli/test/util/map-adaptors-to-monorepo.test.ts @@ -7,6 +7,7 @@ import mapAdaptorsToMonorepo, { validateMonoRepo, updatePath, } from '../../src/util/map-adaptors-to-monorepo'; +import { ExecutionPlan } from '@openfn/lexicon'; const REPO_PATH = 'a/b/c'; const ABS_REPO_PATH = path.resolve(REPO_PATH); @@ -72,13 +73,8 @@ 
test.serial('mapAdaptorsToMonorepo: map adaptors', async (t) => { [`${REPO_PATH}/package.json`]: '{ "name": "adaptors" }', }); - const options = { - monorepoPath: REPO_PATH, - adaptors: ['common'], - }; - - const newOptions = await mapAdaptorsToMonorepo(options, logger); - t.deepEqual(newOptions.adaptors, [`common=${ABS_REPO_PATH}/packages/common`]); + const result = await mapAdaptorsToMonorepo(REPO_PATH, ['common'], logger); + t.deepEqual(result, [`common=${ABS_REPO_PATH}/packages/common`]); }); test.serial('mapAdaptorsToMonorepo: map workflow', async (t) => { @@ -86,23 +82,23 @@ test.serial('mapAdaptorsToMonorepo: map workflow', async (t) => { [`${REPO_PATH}/package.json`]: '{ "name": "adaptors" }', }); - const options = { - monorepoPath: REPO_PATH, + const plan: ExecutionPlan = { workflow: { - id: 'x', - jobs: [ + steps: [ { + expression: '.', adaptor: 'common', }, ], }, + options: {}, }; - const newOptions = await mapAdaptorsToMonorepo(options, logger); - t.deepEqual(newOptions.workflow, { - id: 'x', - jobs: [ + await mapAdaptorsToMonorepo(REPO_PATH, plan, logger); + t.deepEqual(plan.workflow, { + steps: [ { + expression: '.', adaptor: `common=${ABS_REPO_PATH}/packages/common`, }, ], diff --git a/packages/cli/test/util/print-versions.test.ts b/packages/cli/test/util/print-versions.test.ts index 9594eb317..05c387653 100644 --- a/packages/cli/test/util/print-versions.test.ts +++ b/packages/cli/test/util/print-versions.test.ts @@ -6,7 +6,7 @@ import printVersions from '../../src/util/print-versions'; const root = path.resolve('package.json'); -test('print versions for node, cli, runtime and compiler', async (t) => { +test('print versions for node and cli', async (t) => { const logger = createMockLogger('', { level: 'info' }); await printVersions(logger); @@ -17,12 +17,13 @@ test('print versions for node, cli, runtime and compiler', async (t) => { // very crude testing but it's ok to test the intent here t.regex(message, /Versions:/); t.regex(message, /cli/); - 
t.regex(message, /runtime/); - t.regex(message, /compiler/); + t.regex(message, /node/); t.notRegex(message, /adaptor/); + t.notRegex(message, /compiler/); + t.notRegex(message, /runtime/); }); -test('print versions for node, cli, runtime, compiler and adaptor', async (t) => { +test('print versions for node, cli and adaptor', async (t) => { const logger = createMockLogger('', { level: 'info' }); await printVersions(logger, { adaptors: ['http'] }); @@ -31,27 +32,39 @@ test('print versions for node, cli, runtime, compiler and adaptor', async (t) => t.regex(message, /Versions:/); t.regex(message, /cli/); + t.regex(message, /node/); + t.regex(message, /http .+ latest/); +}); + +test('print versions for node, cli, components and adaptor', async (t) => { + const logger = createMockLogger('', { level: 'info' }); + await printVersions(logger, { adaptors: ['http'] }, true); + + const last = logger._parse(logger._last); + const message = last.message as string; + + t.regex(message, /Versions:/); + t.regex(message, /cli/); + t.regex(message, /node/); t.regex(message, /runtime/); - t.regex(message, /compiler/); + t.regex(message, /node/); t.regex(message, /http .+ latest/); }); -test('print versions for node, cli, runtime, compiler and adaptor with version', async (t) => { +test('print versions for node, cli and adaptor with version', async (t) => { const logger = createMockLogger('', { level: 'info' }); await printVersions(logger, { adaptors: ['http@1234'] }); const last = logger._parse(logger._last); const message = last.message as string; - // very crude testing but it's ok to test the intent here t.regex(message, /Versions:/); t.regex(message, /cli/); - t.regex(message, /runtime/); - t.regex(message, /compiler/); + t.regex(message, /node/); t.regex(message, /http .+ 1234/); }); -test('print versions for node, cli, runtime, compiler and long-form adaptor', async (t) => { +test('print versions for node, cli and long-form adaptor', async (t) => { const logger = 
createMockLogger('', { level: 'info' }); await printVersions(logger, { adaptors: ['@openfn/language-http'] }); @@ -61,7 +74,7 @@ test('print versions for node, cli, runtime, compiler and long-form adaptor', as t.regex(message, /@openfn\/language-http .+ latest/); }); -test('print versions for node, cli, runtime, compiler and long-form adaptor with version', async (t) => { +test('print versions for node, cli and long-form adaptor with version', async (t) => { const logger = createMockLogger('', { level: 'info' }); await printVersions(logger, { adaptors: ['@openfn/language-http@1234'] }); @@ -71,6 +84,24 @@ test('print versions for node, cli, runtime, compiler and long-form adaptor with t.regex(message, /@openfn\/language-http .+ 1234/); }); +test('print version of adaptor with monorepo', async (t) => { + mock({ + '/repo/http/package.json': '{ "version": "1.0.0" }', + [root]: mock.load(root, {}), + }); + + const logger = createMockLogger('', { level: 'info' }); + await printVersions(logger, { + adaptors: ['@openfn/language-http@1.0.0'], + monorepoPath: '.', + }); + + const last = logger._parse(logger._last); + const message = last.message as string; + + t.regex(message, /@openfn\/language-http(.+)monorepo/); +}); + test('print version of adaptor with path', async (t) => { mock({ '/repo/http/package.json': '{ "version": "1.0.0" }', @@ -88,6 +119,24 @@ test('print version of adaptor with path', async (t) => { t.regex(message, /@openfn\/language-http(.+)1\.0\.0/); }); +test('print version of adaptor with path even if monorepo is set', async (t) => { + mock({ + '/repo/http/package.json': '{ "version": "1.0.0" }', + [root]: mock.load(root, {}), + }); + + const logger = createMockLogger('', { level: 'info' }); + await printVersions(logger, { + adaptors: ['@openfn/language-http=/repo/http'], + monorepoPath: '.', + }); + + const last = logger._parse(logger._last); + const message = last.message as string; + + t.regex(message, /@openfn\/language-http(.+)1\.0\.0/); +}); + 
test('print version of adaptor with path and @', async (t) => { mock({ '/repo/node_modules/@openfn/http/package.json': '{ "version": "1.0.0" }', @@ -115,7 +164,5 @@ test('json output', async (t) => { const [{ versions }] = last.message; t.truthy(versions['node.js']); t.truthy(versions['cli']); - t.truthy(versions['runtime']); - t.truthy(versions['compiler']); t.truthy(versions['http']); }); diff --git a/packages/compiler/CHANGELOG.md b/packages/compiler/CHANGELOG.md index 48300926a..c50e21489 100644 --- a/packages/compiler/CHANGELOG.md +++ b/packages/compiler/CHANGELOG.md @@ -1,5 +1,14 @@ # @openfn/compiler +## 0.0.40 + +### Patch Changes + +- Updated dependencies [649ca43] +- Updated dependencies [9f6c35d] +- Updated dependencies [86dd668] + - @openfn/logger@1.0.0 + ## 0.0.39 ### Patch Changes diff --git a/packages/compiler/package.json b/packages/compiler/package.json index 1b44313c6..45d5718a0 100644 --- a/packages/compiler/package.json +++ b/packages/compiler/package.json @@ -1,6 +1,6 @@ { "name": "@openfn/compiler", - "version": "0.0.39", + "version": "0.0.40", "description": "Compiler and language tooling for openfn jobs.", "author": "Open Function Group ", "license": "ISC", diff --git a/packages/compiler/src/compile.ts b/packages/compiler/src/compile.ts index 9e37b192d..9e66d17d3 100644 --- a/packages/compiler/src/compile.ts +++ b/packages/compiler/src/compile.ts @@ -21,10 +21,10 @@ export default function compile(pathOrSource: string, options: Options = {}) { let source = pathOrSource; if (isPath(pathOrSource)) { - logger.debug('Starting compilation from file at', pathOrSource); + //logger.debug('Starting compilation from file at', pathOrSource); source = loadFile(pathOrSource); } else { - logger.debug('Starting compilation from string'); + //logger.debug('Starting compilation from string'); } const ast = parse(source); diff --git a/packages/deploy/CHANGELOG.md b/packages/deploy/CHANGELOG.md index 46ce5ec5d..66e70c31d 100644 --- 
a/packages/deploy/CHANGELOG.md +++ b/packages/deploy/CHANGELOG.md @@ -1,5 +1,15 @@ # @openfn/deploy +## 0.4.2 + +### Patch Changes + +- 86dd668: Log the deploy result at success level instead of always +- Updated dependencies [649ca43] +- Updated dependencies [9f6c35d] +- Updated dependencies [86dd668] + - @openfn/logger@1.0.0 + ## 0.4.1 ### Patch Changes diff --git a/packages/deploy/package.json b/packages/deploy/package.json index 3eb36e86a..aef2ded1c 100644 --- a/packages/deploy/package.json +++ b/packages/deploy/package.json @@ -1,6 +1,6 @@ { "name": "@openfn/deploy", - "version": "0.4.1", + "version": "0.4.2", "description": "Deploy projects to Lightning instances", "type": "module", "exports": { diff --git a/packages/deploy/src/index.ts b/packages/deploy/src/index.ts index 1695f2bd8..ed77619f1 100644 --- a/packages/deploy/src/index.ts +++ b/packages/deploy/src/index.ts @@ -164,7 +164,7 @@ export async function deploy(config: DeployConfig, logger: Logger) { await writeState(config, deployedState); - logger.always('Deployed.'); + logger.success('Deployed'); return true; } diff --git a/packages/engine-multi/CHANGELOG.md b/packages/engine-multi/CHANGELOG.md index e58b97b27..ef3d59bb6 100644 --- a/packages/engine-multi/CHANGELOG.md +++ b/packages/engine-multi/CHANGELOG.md @@ -1,5 +1,25 @@ # engine-multi +## 1.0.0 + +### Major Changes + +- 86dd668: The 1.0 release updates the language and input of the Engine to match the nomenclature of Lightning.
+ +### Patch Changes + +- 5f24294: Don't log adaptor logs to stdout +- 823b471: Update handling of logs so that JSON messages are stringified +- ea6fc05: Add a CredentialLoadError +- Updated dependencies [649ca43] +- Updated dependencies [86dd668] +- Updated dependencies [9f6c35d] +- Updated dependencies [86dd668] + - @openfn/logger@1.0.0 + - @openfn/runtime@1.0.0 + - @openfn/compiler@0.0.40 + - @openfn/lexicon@1.0.0 + ## 0.4.1 ### Patch Changes diff --git a/packages/engine-multi/ava b/packages/engine-multi/ava new file mode 100644 index 000000000..e69de29bb diff --git a/packages/engine-multi/package.json b/packages/engine-multi/package.json index 4985a9f89..0b4c520cc 100644 --- a/packages/engine-multi/package.json +++ b/packages/engine-multi/package.json @@ -1,6 +1,6 @@ { "name": "@openfn/engine-multi", - "version": "0.4.1", + "version": "1.0.0", "description": "Multi-process runtime engine", "main": "dist/index.js", "type": "module", @@ -16,6 +16,7 @@ "dependencies": { "@openfn/compiler": "workspace:*", "@openfn/language-common": "2.0.0-rc3", + "@openfn/lexicon": "workspace:^", "@openfn/logger": "workspace:*", "@openfn/runtime": "workspace:*", "fast-safe-stringify": "^2.1.1" diff --git a/packages/engine-multi/src/api/autoinstall.ts b/packages/engine-multi/src/api/autoinstall.ts index 769c25b36..a20113630 100644 --- a/packages/engine-multi/src/api/autoinstall.ts +++ b/packages/engine-multi/src/api/autoinstall.ts @@ -1,17 +1,16 @@ import { - ExecutionPlan, ensureRepo, getAliasedName, getNameAndVersion, loadRepoPkg, } from '@openfn/runtime'; import { install as runtimeInstall } from '@openfn/runtime'; +import type { ExecutionPlan, Job } from '@openfn/lexicon'; +import type { Logger } from '@openfn/logger'; import { AUTOINSTALL_COMPLETE, AUTOINSTALL_ERROR } from '../events'; import { AutoinstallError } from '../errors'; - -import type { Logger } from '@openfn/logger'; -import type { ExecutionContext } from '../types'; +import ExecutionContext from 
'../classes/ExecutionContext'; // none of these options should be on the plan actually export type AutoinstallOptions = { @@ -140,6 +139,7 @@ const autoinstall = async (context: ExecutionContext): Promise => { // Write the adaptor version to the context // This is a reasonably accurate, but not totally bulletproof, report + // @ts-ignore context.versions[name] = v; paths[name] = { @@ -206,9 +206,9 @@ const isInstalled = async ( export const identifyAdaptors = (plan: ExecutionPlan): Set => { const adaptors = new Set(); - plan.jobs - .filter((job) => job.adaptor) - .forEach((job) => adaptors.add(job.adaptor!)); + plan.workflow.steps + .filter((job) => (job as Job).adaptor) + .forEach((job) => adaptors.add((job as Job).adaptor!)); return adaptors; }; diff --git a/packages/engine-multi/src/api/compile.ts b/packages/engine-multi/src/api/compile.ts index 92830d893..c47660adf 100644 --- a/packages/engine-multi/src/api/compile.ts +++ b/packages/engine-multi/src/api/compile.ts @@ -1,12 +1,10 @@ -// This function will compile a workflow -// Later we'll add an in-memory cache to prevent the same job -// being compiled twice - -import type { Logger } from '@openfn/logger'; import compile, { preloadAdaptorExports, Options } from '@openfn/compiler'; import { getModulePath } from '@openfn/runtime'; -import { ExecutionContext } from '../types'; +import type { Job } from '@openfn/lexicon'; +import type { Logger } from '@openfn/logger'; + import { CompileError } from '../errors'; +import type ExecutionContext from '../classes/ExecutionContext'; // TODO this compiler is going to change anyway to run just in time // the runtime will have an onCompile hook @@ -15,8 +13,9 @@ export default async (context: ExecutionContext) => { const { logger, state, options } = context; const { repoDir, noCompile } = options; - if (!noCompile && state.plan?.jobs?.length) { - for (const job of state.plan.jobs) { + if (!noCompile && state.plan?.workflow.steps?.length) { + for (const step of 
state.plan.workflow.steps) { + const job = step as Job; if (job.expression) { try { job.expression = await compileJob( diff --git a/packages/engine-multi/src/api/execute.ts b/packages/engine-multi/src/api/execute.ts index c35085581..eb52f7dea 100644 --- a/packages/engine-multi/src/api/execute.ts +++ b/packages/engine-multi/src/api/execute.ts @@ -1,7 +1,7 @@ import { timestamp } from '@openfn/logger'; import * as workerEvents from '../worker/events'; -import type { ExecutionContext } from '../types'; +import type ExecutionContext from '../classes/ExecutionContext'; import autoinstall from './autoinstall'; import compile from './compile'; import { @@ -40,7 +40,8 @@ const execute = async (context: ExecutionContext) => { // TODO catch and "throw" nice clean credentials issues await preloadCredentials( state.plan as any, - options.resolvers?.credential + options.resolvers?.credential, + logger ); } @@ -115,11 +116,9 @@ const execute = async (context: ExecutionContext) => { error(context, { workflowId: state.plan.id, error: evt.error }); }, }; - - // TODO in the new world order, what sorts of errors are being caught here? 
return callWorker( 'run', - [state.plan, runOptions], + [state.plan, state.input || {}, runOptions || {}], events, workerOptions ).catch((e: any) => { diff --git a/packages/engine-multi/src/api/lifecycle.ts b/packages/engine-multi/src/api/lifecycle.ts index 68dcae76a..f7c71101b 100644 --- a/packages/engine-multi/src/api/lifecycle.ts +++ b/packages/engine-multi/src/api/lifecycle.ts @@ -1,7 +1,11 @@ // here's where things get a bit complex event wise import * as externalEvents from '../events'; import * as internalEvents from '../worker/events'; -import { ExecutionContext } from '../types'; +import type ExecutionContext from '../classes/ExecutionContext'; + +// Log events from the inner thread will be logged to stdout +// EXCEPT the keys listed here +const logsToExcludeFromStdout = /(job)|(ada)/i; export const workflowStart = ( context: ExecutionContext, @@ -120,7 +124,7 @@ export const log = ( ) => { const { threadId } = event; - if (event.log.name !== 'JOB') { + if (!logsToExcludeFromStdout.test(event.log.name!)) { // Forward the log event to the engine's logger // Note that we may have to parse the serialized log string const proxy = { diff --git a/packages/engine-multi/src/api/preload-credentials.ts b/packages/engine-multi/src/api/preload-credentials.ts index fb9545ff7..d74a6cfdb 100644 --- a/packages/engine-multi/src/api/preload-credentials.ts +++ b/packages/engine-multi/src/api/preload-credentials.ts @@ -1,16 +1,34 @@ -import { CompiledExecutionPlan } from '@openfn/runtime'; +import { ExecutionPlan, Job } from '@openfn/lexicon'; +import type { Logger } from '@openfn/logger'; +import { CredentialErrorObj, CredentialLoadError } from '../errors'; export default async ( - plan: CompiledExecutionPlan, - loader: (id: string) => Promise + plan: ExecutionPlan, + loader: (id: string) => Promise, + logger?: Logger ) => { const loaders: Promise[] = []; - Object.values(plan.jobs).forEach((job) => { + const errors: CredentialErrorObj[] = []; + 
Object.values(plan.workflow.steps).forEach((step) => { + const job = step as Job; if (typeof job.configuration === 'string') { + const config = job.configuration as string; loaders.push( new Promise(async (resolve) => { - job.configuration = await loader(job.configuration as string); + logger?.debug(`Loading credential ${config} for step ${job.id}`); + try { + job.configuration = await loader(config as string); + logger?.debug(`Credential ${config} loaded OK (${config})`); + } catch (e: any) { + logger?.debug(`Error loading credential ${config}`); + errors.push({ + id: config, + step: step.id!, + error: e?.message || e?.toString() || e, + }); + } resolve(); }) ); @@ -18,5 +36,8 @@ export default async ( }); await Promise.all(loaders); + if (errors.length) { + throw new CredentialLoadError(errors); + } return plan; }; diff --git a/packages/engine-multi/src/classes/ExecutionContext.ts b/packages/engine-multi/src/classes/ExecutionContext.ts index cf340407e..0e7c70480 100644 --- a/packages/engine-multi/src/classes/ExecutionContext.ts +++ b/packages/engine-multi/src/classes/ExecutionContext.ts @@ -1,13 +1,15 @@ import { EventEmitter } from 'node:events'; +import type { Logger } from '@openfn/logger'; +import loadVersions from '../util/load-versions'; import type { WorkflowState, CallWorker, ExecutionContextConstructor, ExecutionContextOptions, + Versions, } from '../types'; -import type { Logger } from '@openfn/logger'; -import loadVersions from '../util/load-versions'; +import type { ExternalEvents, EventMap } from '../events'; /** * The ExecutionContext class wraps an event emitter with some useful context @@ -22,7 +24,7 @@ export default class ExecutionContext extends EventEmitter { logger: Logger; callWorker: CallWorker; options: ExecutionContextOptions; - versions = {}; + versions: Versions; constructor({ state, @@ -40,8 +42,11 @@ export default class ExecutionContext extends EventEmitter { // override emit to add the workflowId to all events // @ts-ignore - 
emit(event: string, payload: any) { - payload.workflowId = this.state.id; + emit( + event: T, + payload: Omit + ): boolean { + (payload as EventMap[T]).workflowId = this.state.id; return super.emit(event, payload); } } diff --git a/packages/engine-multi/src/engine.ts b/packages/engine-multi/src/engine.ts index d5162cb2f..ad868a40e 100644 --- a/packages/engine-multi/src/engine.ts +++ b/packages/engine-multi/src/engine.ts @@ -1,7 +1,9 @@ import { EventEmitter } from 'node:events'; import path from 'node:path'; import { fileURLToPath } from 'node:url'; -import type { ExecutionPlan } from '@openfn/runtime'; +import type { ExecutionPlan, State } from '@openfn/lexicon'; +import type { Logger } from '@openfn/logger'; + import { JOB_COMPLETE, JOB_START, @@ -15,10 +17,14 @@ import execute from './api/execute'; import validateWorker from './api/validate-worker'; import ExecutionContext from './classes/ExecutionContext'; -import type { SanitizePolicies } from '@openfn/logger'; import type { LazyResolvers } from './api'; -import type { EngineAPI, EventHandler, WorkflowState } from './types'; -import type { Logger } from '@openfn/logger'; +import type { + EngineAPI, + EventHandler, + ExecuteOptions, + RuntimeEngine, + WorkflowState, +} from './types'; import type { AutoinstallOptions } from './api/autoinstall'; const DEFAULT_RUN_TIMEOUT = 1000 * 60 * 10; // ms @@ -70,23 +76,24 @@ export type EngineOptions = { repoDir: string; resolvers?: LazyResolvers; runtimelogger?: Logger; - runTimeoutMs?: number; + runTimeoutMs?: number; // default timeout statePropsToRemove?: string[]; whitelist?: RegExp[]; }; -export type ExecuteOptions = { - memoryLimitMb?: number; - resolvers?: LazyResolvers; - runTimeoutMs?: number; - sanitize?: SanitizePolicies; +export type InternalEngine = RuntimeEngine & { + // TODO Not a very good type definition, but it calms the tests down + [other: string]: any; }; // This creates the internal API // tbh this is actually the engine, right, this is where stuff 
happens // the api file is more about the public api I think // TODO options MUST have a logger -const createEngine = async (options: EngineOptions, workerPath?: string) => { +const createEngine = async ( + options: EngineOptions, + workerPath?: string +): Promise => { const states: Record = {}; const contexts: Record = {}; const deferredListeners: Record[]> = {}; @@ -130,9 +137,9 @@ const createEngine = async (options: EngineOptions, workerPath?: string) => { // create, register and return a state object // should it also load the initial data clip? // when does that happen? No, that's inside execute - const registerWorkflow = (plan: ExecutionPlan) => { + const registerWorkflow = (plan: ExecutionPlan, input: State) => { // TODO throw if already registered? - const state = createState(plan); + const state = createState(plan, input); states[state.id] = state; return state; }; @@ -144,13 +151,17 @@ const createEngine = async (options: EngineOptions, workerPath?: string) => { // TODO too much logic in this execute function, needs farming out // I don't mind having a wrapper here but it must be super thin // TODO maybe engine options is too broad? - const executeWrapper = (plan: ExecutionPlan, opts: ExecuteOptions = {}) => { + const executeWrapper = ( + plan: ExecutionPlan, + input: State, + opts: ExecuteOptions = {} + ) => { options.logger!.debug('executing plan ', plan?.id ?? 
''); const workflowId = plan.id!; // TODO throw if plan is invalid // Wait, don't throw because the server will die // Maybe return null instead - const state = registerWorkflow(plan); + const state = registerWorkflow(plan, input); const context = new ExecutionContext({ state, diff --git a/packages/engine-multi/src/errors.ts b/packages/engine-multi/src/errors.ts index c6020e84f..5468aa390 100644 --- a/packages/engine-multi/src/errors.ts +++ b/packages/engine-multi/src/errors.ts @@ -100,4 +100,20 @@ export class ExitError extends EngineError { } } -// CredentialsError (exception) +export type CredentialErrorObj = { id: string; step: string; error: string }; + +// Error lazy-loading a credential +export class CredentialLoadError extends EngineError { + severity = 'exception'; + type = 'CredentialLoadError'; + name = 'CredentialLoadError'; + message; + + original: any; // this is the original error + constructor(errors: CredentialErrorObj[]) { + super(); + this.message = errors + .map((e) => `Failed to load credential ${e.id} for step ${e.step}`) + .join('\n'); + } +} diff --git a/packages/engine-multi/src/test/util.ts b/packages/engine-multi/src/test/util.ts index 0777af17a..494c24e27 100644 --- a/packages/engine-multi/src/test/util.ts +++ b/packages/engine-multi/src/test/util.ts @@ -1,15 +1,26 @@ -export const createPlan = (job = {}) => ({ - id: 'wf-1', - jobs: [ - { - id: 'j1', - adaptor: 'common', // not used - credential: {}, // not used - data: {}, // Used if no expression - expression: '(s) => ({ data: { answer: s.data?.input || 42 } })', - _delay: 1, // only used in the mock - - ...job, +import { ExecutionPlan } from '@openfn/lexicon'; + +export const createPlan = (job = {}) => + ({ + id: 'wf-1', + workflow: { + steps: [ + { + id: 'j1', + adaptor: 'common', // not used + configuration: {}, // not used + expression: '(s) => ({ data: { answer: s.data?.input || 42 } })', + + // TODO is this actually used? Should I get rid? 
Underscore + // @ts-ignore + data: {}, // Used if no expression + + // @ts-ignore + _delay: 1, // only used in the mock + + ...job, + }, + ], }, - ], -}); + options: {}, + } as ExecutionPlan); diff --git a/packages/engine-multi/src/test/worker-functions.ts b/packages/engine-multi/src/test/worker-functions.ts index 0c516e07e..f562edbcb 100644 --- a/packages/engine-multi/src/test/worker-functions.ts +++ b/packages/engine-multi/src/test/worker-functions.ts @@ -2,6 +2,7 @@ import path from 'node:path'; import { register, publish, threadId } from '../worker/thread/runtime'; import { increment } from './counter.js'; +import { ExecutionPlan, Job } from '@openfn/lexicon'; const tasks = { test: async (result = 42) => { @@ -25,13 +26,13 @@ const tasks = { processId: async () => process.pid, // very simple interpretation of a run function // Most tests should use the mock-worker instead - run: async (plan: any, _adaptorPaths: any) => { + run: async (plan: ExecutionPlan, _input: any, _adaptorPaths: any) => { const workflowId = plan.id; publish('worker:workflow-start', { workflowId, }); try { - const [job] = plan.jobs; + const [job] = plan.workflow.steps as Job[]; const result = eval(job.expression); publish('worker:workflow-complete', { workflowId, diff --git a/packages/engine-multi/src/types.ts b/packages/engine-multi/src/types.ts index 819b3473f..bc69b7445 100644 --- a/packages/engine-multi/src/types.ts +++ b/packages/engine-multi/src/types.ts @@ -1,10 +1,10 @@ import type { Logger, SanitizePolicies } from '@openfn/logger'; -import type { ExecutionPlan } from '@openfn/runtime'; +import type { ExecutionPlan, State } from '@openfn/lexicon'; import type { EventEmitter } from 'node:events'; -import type { ExternalEvents, EventMap } from './events'; import type { EngineOptions } from './engine'; import type { ExecOpts } from './worker/pool'; +import { LazyResolvers } from './api'; export type Resolver = (id: string) => Promise; @@ -23,9 +23,11 @@ export type WorkflowState = {
startTime?: number; duration?: number; error?: string; - result?: any; // State + result?: State; + + // Ok this changes quite a bit huh plan: ExecutionPlan; // this doesn't include options - options: any; // TODO this is wf specific options, like logging policy + input: State; }; export type CallWorker = ( @@ -42,23 +44,16 @@ export type ExecutionContextConstructor = { options: ExecutionContextOptions; }; -export type ExecutionContextOptions = EngineOptions & { +export type ExecuteOptions = { + memoryLimitMb?: number; + resolvers?: LazyResolvers; + runTimeoutMs?: number; sanitize?: SanitizePolicies; }; -export interface ExecutionContext extends EventEmitter { - constructor(args: ExecutionContextConstructor): ExecutionContext; - options: EngineOptions; - state: WorkflowState; - logger: Logger; - callWorker: CallWorker; - versions: Versions; - - emit( - event: T, - payload: Omit - ): boolean; -} +export type ExecutionContextOptions = EngineOptions & { + sanitize?: SanitizePolicies; +}; export interface EngineAPI extends EventEmitter { callWorker: CallWorker; @@ -66,7 +61,7 @@ export interface EngineAPI extends EventEmitter { } export interface RuntimeEngine { - version: string; + version?: string; options: EngineOptions; @@ -75,14 +70,13 @@ export interface RuntimeEngine { execute( plan: ExecutionPlan, + input: State, options?: Partial ): Pick; destroy(): void; on: (evt: string, fn: (...args: any[]) => void) => void; - - // TODO my want some maintenance APIs, like getStatus. 
idk } export type Versions = { diff --git a/packages/engine-multi/src/util/create-state.ts b/packages/engine-multi/src/util/create-state.ts index 7e1c538aa..3175c92cb 100644 --- a/packages/engine-multi/src/util/create-state.ts +++ b/packages/engine-multi/src/util/create-state.ts @@ -1,22 +1,15 @@ -import { ExecutionPlan } from '@openfn/runtime'; +import { ExecutionPlan, State } from '@openfn/lexicon'; import { WorkflowState } from '../types'; -export default (plan: ExecutionPlan, options = {}): WorkflowState => ({ +export default (plan: ExecutionPlan, input: State): WorkflowState => ({ id: plan.id!, status: 'pending', plan, + input, threadId: undefined, startTime: undefined, duration: undefined, error: undefined, result: undefined, - - // this is wf-specific options - // but they should be on context, rather than state - options, - // options: { - // ...options, - // repoDir, - // }, }); diff --git a/packages/engine-multi/src/worker/events.ts b/packages/engine-multi/src/worker/events.ts index 698df06eb..eabd8876a 100644 --- a/packages/engine-multi/src/worker/events.ts +++ b/packages/engine-multi/src/worker/events.ts @@ -3,7 +3,6 @@ */ import { JSONLog } from '@openfn/logger'; -import { Versions } from '../types'; // events used by the internal thread runtime @@ -45,7 +44,6 @@ export interface WorkflowCompleteEvent extends InternalEvent { export interface JobStartEvent extends InternalEvent { jobId: string; - versions: Versions; } export interface JobCompleteEvent extends InternalEvent { diff --git a/packages/engine-multi/src/worker/pool.ts b/packages/engine-multi/src/worker/pool.ts index 74b699259..5e94f05b7 100644 --- a/packages/engine-multi/src/worker/pool.ts +++ b/packages/engine-multi/src/worker/pool.ts @@ -120,13 +120,17 @@ function createPool(script: string, options: PoolOptions = {}, logger: Logger) { } }; - const exec = (task: string, args: any[] = [], opts: ExecOpts = {}) => { + const exec = ( + task: string, + args: any[] = [], + opts: ExecOpts = {} + ): 
Promise => { // TODO Throw if destroyed if (destroyed) { throw new Error('Worker destroyed'); } - const promise = new Promise(async (resolve, reject) => { + const promise = new Promise(async (resolve, reject) => { // TODO what should we do if a process in the pool dies, perhaps due to OOM? const onExit = async (code: number) => { if (code !== HANDLED_EXIT_CODE) { @@ -194,7 +198,6 @@ function createPool(script: string, options: PoolOptions = {}, logger: Logger) { } try { - logger.debug(`pool: Running task "${task}" in worker ${worker.pid}`); worker.send({ type: ENGINE_RUN_TASK, task, diff --git a/packages/engine-multi/src/worker/thread/helpers.ts b/packages/engine-multi/src/worker/thread/helpers.ts index cb8a2d417..fb3e4d9ee 100644 --- a/packages/engine-multi/src/worker/thread/helpers.ts +++ b/packages/engine-multi/src/worker/thread/helpers.ts @@ -4,14 +4,13 @@ import process from 'node:process'; import stringify from 'fast-safe-stringify'; import createLogger, { SanitizePolicies } from '@openfn/logger'; +import type { JSONLog } from '@openfn/logger'; import * as workerEvents from '../events'; import { HANDLED_EXIT_CODE } from '../../events'; import { ExecutionError, ExitError } from '../../errors'; - import { publish } from './runtime'; import serializeError from '../../util/serialize-error'; -import { JSONLog } from '@openfn/logger'; export const createLoggers = ( workflowId: string, @@ -67,7 +66,7 @@ export const createLoggers = ( // Execute wrapper function export const execute = async ( workflowId: string, - executeFn: () => Promise + executeFn: () => Promise | undefined ) => { const handleError = (err: any) => { publish(workerEvents.ERROR, { diff --git a/packages/engine-multi/src/worker/thread/mock-run.ts b/packages/engine-multi/src/worker/thread/mock-run.ts index c6b29b0d8..194ee5478 100644 --- a/packages/engine-multi/src/worker/thread/mock-run.ts +++ b/packages/engine-multi/src/worker/thread/mock-run.ts @@ -10,6 +10,7 @@ import { register, publish } from 
'./runtime'; import { execute, createLoggers } from './helpers'; import * as workerEvents from '../events'; +import { State } from '@openfn/lexicon'; type MockJob = { id?: string; @@ -25,13 +26,19 @@ type MockJob = { type MockExecutionPlan = { id: string; - jobs: MockJob[]; + workflow: { + steps: MockJob[]; + }; }; // This is a fake runtime handler which will return a fixed value, throw, and // optionally delay -function mockRun(plan: MockExecutionPlan) { - const [job] = plan.jobs; +function mockRun(plan: MockExecutionPlan, input: State, _options = {}) { + if (!input) { + throw new Error('no input passed to state'); + } + + const [job] = plan.workflow.steps; const { jobLogger } = createLoggers(plan.id!, 'none', publish); const workflowId = plan.id; return new Promise((resolve) => { @@ -79,6 +86,6 @@ function mockRun(plan: MockExecutionPlan) { } register({ - run: async (plan: MockExecutionPlan, _options?: any) => - execute(plan.id, () => mockRun(plan)), + run: async (plan: MockExecutionPlan, input: State, _options?: any) => + execute(plan.id, () => mockRun(plan, input)), }); diff --git a/packages/engine-multi/src/worker/thread/run.ts b/packages/engine-multi/src/worker/thread/run.ts index b6af70c87..9dd3585d4 100644 --- a/packages/engine-multi/src/worker/thread/run.ts +++ b/packages/engine-multi/src/worker/thread/run.ts @@ -1,9 +1,9 @@ // This is the run command that will be executed inside the worker thread // Most of the heavy lifting is actually handled by execute import run from '@openfn/runtime'; -import type { ExecutionPlan } from '@openfn/runtime'; -import type { SanitizePolicies } from '@openfn/logger'; import type { NotifyEvents } from '@openfn/runtime'; +import type { ExecutionPlan, State } from '@openfn/lexicon'; +import type { SanitizePolicies } from '@openfn/logger'; import { register, publish } from './runtime'; import { execute, createLoggers } from './helpers'; @@ -15,7 +15,6 @@ type RunOptions = { whitelist?: RegExp[]; sanitize: SanitizePolicies; 
statePropsToRemove?: string[]; - // TODO timeout }; const eventMap = { @@ -26,7 +25,7 @@ const eventMap = { }; register({ - run: (plan: ExecutionPlan, runOptions: RunOptions) => { + run: (plan: ExecutionPlan, input: State, runOptions: RunOptions) => { const { adaptorPaths, whitelist, sanitize, statePropsToRemove } = runOptions; const { logger, jobLogger, adaptorLogger } = createLoggers( @@ -73,6 +72,6 @@ register({ }, }; - return execute(plan.id!, () => run(plan, undefined, options)); + return execute(plan.id!, () => run(plan, input, options)); }, }); diff --git a/packages/engine-multi/test/api.test.ts b/packages/engine-multi/test/api.test.ts index f6fd843ef..a797e76dc 100644 --- a/packages/engine-multi/test/api.test.ts +++ b/packages/engine-multi/test/api.test.ts @@ -1,9 +1,10 @@ import test from 'ava'; -import createAPI from '../src/api'; import { createMockLogger } from '@openfn/logger'; +import type { ExecutionPlan } from '@openfn/lexicon'; +import createAPI from '../src/api'; import pkg from '../package.json' assert { type: 'json' }; -import { RuntimeEngine } from '../src/types'; +import type { RuntimeEngine } from '../src/types'; // thes are tests on the public api functions generally // so these are very high level tests and don't allow mock workers or anything @@ -97,17 +98,21 @@ test.serial( }, }); - const plan = { + const plan: ExecutionPlan = { id: 'a', - jobs: [ - { - expression: 'export default [s => s]', - // with no adaptor it shouldn't try to autoinstall - }, - ], + workflow: { + steps: [ + { + expression: 'export default [s => s]', + // with no adaptor it shouldn't try to autoinstall + }, + ], + }, + options: {}, }; - const listener = api.execute(plan); + const state = { x: 1 }; + const listener = api.execute(plan, state); listener.on('workflow-complete', () => { t.pass('workflow completed'); done(); @@ -126,18 +131,22 @@ test.serial('should listen to workflow-complete', async (t) => { }, }); - const plan = { + const plan: ExecutionPlan = { id: 
'a', - jobs: [ - { - expression: 'export default [s => s]', - // with no adaptor it shouldn't try to autoinstall - }, - ], + workflow: { + steps: [ + { + expression: 'export default [s => s]', + // with no adaptor it shouldn't try to autoinstall + }, + ], + }, + options: {}, }; + const state = { x: 1 }; + api.execute(plan, state); - api.execute(plan); - api.listen(plan.id, { + api.listen(plan.id!, { 'workflow-complete': () => { t.pass('workflow completed'); done(); diff --git a/packages/engine-multi/test/api/autoinstall.test.ts b/packages/engine-multi/test/api/autoinstall.test.ts index 75e3464d9..defd732bd 100644 --- a/packages/engine-multi/test/api/autoinstall.test.ts +++ b/packages/engine-multi/test/api/autoinstall.test.ts @@ -1,7 +1,11 @@ import test from 'ava'; import { createMockLogger } from '@openfn/logger'; +import type { ExecutionPlan, Job } from '@openfn/lexicon'; -import autoinstall, { identifyAdaptors } from '../../src/api/autoinstall'; +import autoinstall, { + AutoinstallOptions, + identifyAdaptors, +} from '../../src/api/autoinstall'; import { AUTOINSTALL_COMPLETE, AUTOINSTALL_ERROR } from '../../src/events'; import ExecutionContext from '../../src/classes/ExecutionContext'; import whitelist from '../../src/whitelist'; @@ -16,7 +20,7 @@ const mockIsInstalled = (pkg: PackageJson) => async (specifier: string) => { return pkg.dependencies.hasOwnProperty(alias); }; -const mockHandleInstall = async (specifier: string): Promise => +const mockHandleInstall = async (_specifier: string): Promise => new Promise((r) => r()).then(); const logger = createMockLogger(); @@ -27,18 +31,23 @@ const wait = (duration = 10) => }); const createContext = ( - autoinstallOpts?, - jobs?: any[], + autoinstallOpts?: AutoinstallOptions, + jobs?: Partial[], customWhitelist?: RegExp[] ) => new ExecutionContext({ state: { id: 'x', status: 'pending', - options: {}, plan: { - jobs: jobs || [{ adaptor: '@openfn/language-common@1.0.0' }], + workflow: { + steps: jobs || [ + { adaptor: 
'@openfn/language-common@1.0.0', expression: '.' }, + ], + }, + options: {}, }, + input: {}, }, logger, // @ts-ignore @@ -47,6 +56,8 @@ const createContext = ( logger, whitelist: customWhitelist || whitelist, repoDir: 'tmp/repo', + + // @ts-ignore autoinstall: autoinstallOpts || { handleInstall: mockHandleInstall, handleIsInstalled: mockIsInstalled, @@ -104,18 +115,24 @@ test('mock install: should return async', async (t) => { }); test('identifyAdaptors: pick out adaptors and remove duplicates', (t) => { - const plan = { - jobs: [ - { - adaptor: 'common@1.0.0', - }, - { - adaptor: 'common@1.0.0', - }, - { - adaptor: 'common@1.0.1', - }, - ], + const plan: ExecutionPlan = { + workflow: { + steps: [ + { + adaptor: 'common@1.0.0', + expression: '.', + }, + { + adaptor: 'common@1.0.0', + expression: '.', + }, + { + adaptor: 'common@1.0.1', + expression: '.', + }, + ], + }, + options: {}, }; const adaptors = identifyAdaptors(plan); t.true(adaptors.size === 2); @@ -160,9 +177,9 @@ test.serial( async (t) => { let callCount = 0; - const installed = {}; + const installed: Record<string, boolean> = {}; - const mockInstall = (name) => + const mockInstall = (name: string) => new Promise((resolve) => { installed[name] = true; callCount++; @@ -172,7 +189,7 @@ test.serial( const options = { skipRepoValidation: true, handleInstall: mockInstall, - handleIsInstalled: async (name) => name in installed, + handleIsInstalled: async (name: string) => name in installed, }; const context = createContext(options); @@ -184,11 +201,11 @@ test.serial( ); test.serial('autoinstall: install in sequence', async (t) => { - const installed = {}; + const installed: Record<string, boolean> = {}; - const states = {}; + const states: Record<string, number> = {}; - const mockInstall = (name) => + const mockInstall = (name: string) => new Promise((resolve) => { // Each time install is called, // record the time the call was made @@ -205,7 +222,7 @@ test.serial('autoinstall: install in sequence', async (t) => { skipRepoValidation: true, handleInstall:
mockInstall, handleIsInstalled: false, - }; + } as any; const c1 = createContext(options, [{ adaptor: '@openfn/language-common@1' }]); const c2 = createContext(options, [{ adaptor: '@openfn/language-common@2' }]); @@ -354,7 +371,7 @@ test.serial('autoinstall: support custom whitelist', async (t) => { }); test.serial('autoinstall: emit an event on completion', async (t) => { - let event; + let event: any; const jobs = [ { adaptor: '@openfn/language-common@1.0.0', @@ -366,7 +383,7 @@ test.serial('autoinstall: emit an event on completion', async (t) => { skipRepoValidation: true, handleInstall: async () => new Promise((done) => setTimeout(done, 50)), handleIsInstalled: async () => false, - }; + } as any; const context = createContext(autoinstallOpts, jobs); context.on(AUTOINSTALL_COMPLETE, (evt) => { @@ -416,14 +433,14 @@ test.serial('autoinstall: throw on error twice if pending', async (t) => { const autoinstallOpts = { handleInstall: mockInstall, handleIsInstalled: mockIsInstalled, - }; + } as any; const context = createContext(autoinstallOpts); autoinstall(context).catch(assertCatches); autoinstall(context).catch(assertCatches); - function assertCatches(e) { + function assertCatches(e: any) { t.is(e.name, 'AutoinstallError'); errCount += 1; if (errCount === 2) { @@ -436,7 +453,7 @@ test.serial('autoinstall: throw on error twice if pending', async (t) => { }); test.serial('autoinstall: emit on error', async (t) => { - let evt; + let evt: any; const mockIsInstalled = async () => false; const mockInstall = async () => { throw new Error('err'); @@ -478,7 +495,7 @@ test.serial('autoinstall: throw twice in a row', async (t) => { const autoinstallOpts = { handleInstall: mockInstall, handleIsInstalled: mockIsInstalled, - }; + } as any; const context = createContext(autoinstallOpts); await t.throwsAsync(() => autoinstall(context), { @@ -503,6 +520,7 @@ test('write versions to context', async (t) => { await autoinstall(context); + // @ts-ignore 
t.is(context.versions['@openfn/language-common'], '1.0.0'); }); @@ -515,5 +533,6 @@ test("write versions to context even if we don't install", async (t) => { await autoinstall(context); + // @ts-ignore t.is(context.versions['@openfn/language-common'], '1.0.0'); }); diff --git a/packages/engine-multi/test/api/call-worker.test.ts b/packages/engine-multi/test/api/call-worker.test.ts index 314314527..c5608e05e 100644 --- a/packages/engine-multi/test/api/call-worker.test.ts +++ b/packages/engine-multi/test/api/call-worker.test.ts @@ -40,7 +40,7 @@ test.serial('callWorker should return a custom result', async (t) => { }); test.serial('callWorker should trigger an event callback', async (t) => { - const onCallback = ({ result }) => { + const onCallback = ({ result }: any) => { t.is(result, 11); }; @@ -69,7 +69,7 @@ test.serial( } ); - const onCallback = (evt) => { + const onCallback = () => { t.pass('all ok'); }; @@ -81,13 +81,13 @@ test.serial('callWorker should execute in one process', async (t) => { const ids: number[] = []; await engine.callWorker('test', [], { - 'test-message': ({ processId }) => { + 'test-message': ({ processId }: any) => { ids.push(processId); }, }); await engine.callWorker('test', [], { - 'test-message': ({ processId }) => { + 'test-message': ({ processId }: any) => { ids.push(processId); }, }); @@ -100,13 +100,13 @@ test.serial('callWorker should execute in two different threads', async (t) => { const ids: number[] = []; await engine.callWorker('test', [], { - 'test-message': ({ threadId }) => { + 'test-message': ({ threadId }: any) => { ids.push(threadId); }, }); await engine.callWorker('test', [], { - 'test-message': ({ threadId }) => { + 'test-message': ({ threadId }: any) => { ids.push(threadId); }, }); @@ -167,8 +167,6 @@ test.serial( test.serial( 'By default, worker thread cannot access parent env if env not set (with options arg)', async (t) => { - const defaultAPI = {} as EngineAPI; - const { callWorker, closeWorkers } = initWorkers( 
workerPath, { maxWorkers: 1 }, diff --git a/packages/engine-multi/test/api/execute.test.ts b/packages/engine-multi/test/api/execute.test.ts index deda81d22..9b46e2a74 100644 --- a/packages/engine-multi/test/api/execute.test.ts +++ b/packages/engine-multi/test/api/execute.test.ts @@ -1,8 +1,9 @@ import path from 'node:path'; import test from 'ava'; +import { createMockLogger } from '@openfn/logger'; + import initWorkers from '../../src/api/call-worker'; import execute from '../../src/api/execute'; -import { createMockLogger } from '@openfn/logger'; import { JOB_COMPLETE, JOB_START, @@ -13,20 +14,31 @@ import { } from '../../src/events'; import ExecutionContext from '../../src/classes/ExecutionContext'; -import type { RTEOptions } from '../../src/api'; -import type { WorkflowState } from '../../src/types'; -import { ExecuteOptions } from '../../src/engine'; +import type { + ExecuteOptions, + ExecutionContextOptions, + WorkflowState, +} from '../../src/types'; +import type { EngineOptions } from '../../src/engine'; const workerPath = path.resolve('dist/test/mock-run.js'); -const createContext = ({ state, options }) => { +const createContext = ({ + state, + options, +}: { + state: Partial; + options: Partial; +}) => { const logger = createMockLogger(); const { callWorker } = initWorkers(workerPath, {}, logger); const ctx = new ExecutionContext({ + // @ts-ignore state: state || { workflowId: 'x' }, logger, callWorker, + // @ts-ignore options, }); @@ -37,12 +49,15 @@ const createContext = ({ state, options }) => { const plan = { id: 'x', - jobs: [ - { - id: 'j', - expression: '() => 22', - }, - ], + workflow: { + steps: [ + { + id: 'j', + expression: '() => 22', + }, + ], + }, + options: {}, }; const options = { @@ -51,13 +66,13 @@ const options = { handleInstall: async () => {}, handleIsInstalled: async () => false, }, -} as RTEOptions; +} as Partial; test.serial('execute should run a job and return the result', async (t) => { const state = { id: 'x', plan, - } as 
WorkflowState; + } as Partial; const context = createContext({ state, options }); @@ -80,7 +95,7 @@ test.serial('should emit a workflow-start event', async (t) => { await execute(context); // No need to do a deep test of the event payload here - t.is(workflowStart.workflowId, 'x'); + t.is(workflowStart!.workflowId!, 'x'); }); test.serial('should emit a log event with the memory limit', async (t) => { @@ -89,7 +104,7 @@ test.serial('should emit a log event with the memory limit', async (t) => { plan, } as WorkflowState; - const logs = []; + const logs: any[] = []; const context = createContext({ state, @@ -122,8 +137,8 @@ test.serial('should emit a workflow-complete event', async (t) => { await execute(context); - t.is(workflowComplete.workflowId, 'x'); - t.is(workflowComplete.state, 22); + t.is(workflowComplete!.workflowId, 'x'); + t.is(workflowComplete!.state, 22); }); test.serial('should emit a job-start event', async (t) => { @@ -132,7 +147,7 @@ test.serial('should emit a job-start event', async (t) => { plan, } as WorkflowState; - let event; + let event: any; const context = createContext({ state, options }); @@ -152,7 +167,7 @@ test.serial('should emit a job-complete event', async (t) => { plan, } as WorkflowState; - let event; + let event: any; const context = createContext({ state, options }); @@ -166,19 +181,22 @@ test.serial('should emit a job-complete event', async (t) => { }); test.serial('should emit a log event', async (t) => { - let workflowLog; + let workflowLog: any; const plan = { id: 'y', - jobs: [ - { - expression: '() => { console.log("hi"); return 33 }', - }, - ], + workflow: { + steps: [ + { + expression: '() => { console.log("hi"); return 33 }', + }, + ], + }, + options: {}, }; const state = { id: 'y', plan, - } as WorkflowState; + } as Partial; const context = createContext({ state, options }); context.once(WORKFLOW_LOG, (evt) => (workflowLog = evt)); @@ -191,14 +209,16 @@ test.serial('should emit a log event', async (t) => { }); 
test.serial('log events are timestamped in hr time', async (t) => { - let workflowLog; + let workflowLog: any; const plan = { id: 'y', - jobs: [ - { - expression: '() => { console.log("hi"); return 33 }', - }, - ], + workflow: { + steps: [ + { + expression: '() => { console.log("hi"); return 33 }', + }, + ], + }, }; const state = { id: 'y', @@ -220,11 +240,13 @@ test.serial('should emit error on timeout', async (t) => { const state = { id: 'zz', plan: { - jobs: [ - { - expression: '() => { while(true) {} }', - }, - ], + workflow: { + steps: [ + { + expression: '() => { while(true) {} }', + }, + ], + }, }, } as WorkflowState; @@ -233,7 +255,7 @@ test.serial('should emit error on timeout', async (t) => { runTimeoutMs: 10, }; - let event; + let event: any; const context = createContext({ state, options: wfOptions }); @@ -280,7 +302,9 @@ test.serial('should emit CompileError if compilation fails', async (t) => { const state = { id: 'baa', plan: { - jobs: [{ id: 'j', expression: 'la la la' }], + workflow: { + steps: [{ id: 'j', expression: 'la la la' }], + }, }, } as WorkflowState; const context = createContext({ state, options: {} }); @@ -299,7 +323,7 @@ test.serial('should emit CompileError if compilation fails', async (t) => { }); test.serial('should stringify the whitelist array', async (t) => { - let passedOptions; + let passedOptions: any; const state = { id: 'x', @@ -312,8 +336,9 @@ test.serial('should stringify the whitelist array', async (t) => { }; const context = createContext({ state, options: opts }); + // @ts-ignore context.callWorker = (_command, args) => { - passedOptions = args[1]; + passedOptions = args[2]; }; await execute(context); diff --git a/packages/engine-multi/test/api/lifecycle.test.ts b/packages/engine-multi/test/api/lifecycle.test.ts index b6c0566a2..329128e5d 100644 --- a/packages/engine-multi/test/api/lifecycle.test.ts +++ b/packages/engine-multi/test/api/lifecycle.test.ts @@ -12,12 +12,15 @@ import { } from '../../src/api/lifecycle'; 
import { WorkflowState } from '../../src/types'; import ExecutionContext from '../../src/classes/ExecutionContext'; +import * as w from '../../src/worker/events'; const createContext = (workflowId: string, state?: any) => new ExecutionContext({ state: state || { id: workflowId }, logger: createMockLogger(), + // @ts-ignore callWorker: () => {}, + // @ts-ignore options: {}, }); @@ -26,10 +29,17 @@ test(`workflowStart: emits ${e.WORKFLOW_START}`, (t) => { const workflowId = 'a'; const context = createContext(workflowId); - const event = { workflowId, threadId: '123' }; + const event: w.WorkflowStartEvent = { + type: w.WORKFLOW_START, + workflowId, + threadId: '123', + }; context.on(e.WORKFLOW_START, (evt) => { - t.deepEqual(evt, event); + t.deepEqual(evt, { + workflowId, + threadId: '123', + }); done(); }); @@ -41,7 +51,11 @@ test('onWorkflowStart: updates state', (t) => { const workflowId = 'a'; const context = createContext(workflowId); - const event = { workflowId, threadId: '123' }; + const event: w.WorkflowStartEvent = { + type: w.WORKFLOW_START, + workflowId, + threadId: '123', + }; workflowStart(context, event); @@ -66,7 +80,12 @@ test(`workflowComplete: emits ${e.WORKFLOW_COMPLETE}`, (t) => { } as WorkflowState; const context = createContext(workflowId, state); - const event = { workflowId, state: result, threadId: '1' }; + const event: w.WorkflowCompleteEvent = { + type: w.WORKFLOW_COMPLETE, + workflowId, + state: result, + threadId: '1', + }; context.on(e.WORKFLOW_COMPLETE, (evt) => { t.is(evt.workflowId, workflowId); @@ -88,7 +107,12 @@ test('workflowComplete: updates state', (t) => { startTime: Date.now() - 1000, } as WorkflowState; const context = createContext(workflowId, state); - const event = { workflowId, state: result, threadId: '1' }; + const event: w.WorkflowCompleteEvent = { + type: w.WORKFLOW_COMPLETE, + workflowId, + state: result, + threadId: '1', + }; workflowComplete(context, event); @@ -108,7 +132,8 @@ test(`job-start: emits ${e.JOB_START}`,
(t) => { const context = createContext(workflowId, state); - const event = { + const event: w.JobStartEvent = { + type: w.JOB_START, workflowId, threadId: '1', jobId: 'j', @@ -136,7 +161,8 @@ test(`job-complete: emits ${e.JOB_COMPLETE}`, (t) => { const context = createContext(workflowId, state); - const event = { + const event: w.JobCompleteEvent = { + type: w.JOB_COMPLETE, workflowId, threadId: '1', jobId: 'j', @@ -167,14 +193,15 @@ test(`log: emits ${e.WORKFLOW_LOG}`, (t) => { const context = createContext(workflowId); - const event = { + const event: w.LogEvent = { + type: w.LOG, workflowId, threadId: 'a', log: { level: 'info', name: 'job', message: JSON.stringify(['oh hai']), - time: Date.now() - 100, + time: (Date.now() - 100).toString(), }, }; @@ -191,6 +218,84 @@ test(`log: emits ${e.WORKFLOW_LOG}`, (t) => { }); }); +test('logs get sent to stdout', (t) => { + const workflowId = 'a'; + + const stdout = createMockLogger(undefined, { level: 'debug', json: true }); + + const context = createContext(workflowId); + context.logger = stdout; + + const event: w.LogEvent = { + type: w.LOG, + workflowId, + threadId: 'a', + log: { + level: 'info', + name: 'r/t', + message: ['oh hai'], + time: (Date.now() - 100).toString(), + }, + }; + + log(context, event); + + const last: any = stdout._last; + t.truthy(last); + t.is(last.message[0], 'oh hai'); + t.is(last.name, 'r/t'); +}); + +test('job logs do not get sent to stdout', (t) => { + const workflowId = 'a'; + + const stdout = createMockLogger(undefined, { level: 'debug' }); + + const context = createContext(workflowId); + context.logger = stdout; + + const event: w.LogEvent = { + type: w.LOG, + workflowId, + threadId: 'a', + log: { + level: 'info', + name: 'job', + message: ['oh hai'], + time: (Date.now() - 100).toString(), + }, + }; + + log(context, event); + + t.is(stdout._history.length, 0); +}); + +test('adaptor logs do not get sent to stdout', (t) => { + const workflowId = 'a'; + + const stdout = 
createMockLogger(undefined, { level: 'debug' }); + + const context = createContext(workflowId); + context.logger = stdout; + + const event: w.LogEvent = { + type: w.LOG, + workflowId, + threadId: 'a', + log: { + level: 'info', + name: 'ada', + message: ['oh hai'], + time: (Date.now() - 100).toString(), + }, + }; + + log(context, event); + + t.is(stdout._history.length, 0); +}); + // TODO not a very thorough test, still not really sure what I'm doing here test(`error: emits ${e.WORKFLOW_ERROR}`, (t) => { return new Promise((done) => { @@ -206,6 +311,7 @@ test(`error: emits ${e.WORKFLOW_ERROR}`, (t) => { const err = new Error('test'); + // @ts-ignore error(context, { error: err }); }); }); diff --git a/packages/engine-multi/test/api/preload-credentials.test.ts b/packages/engine-multi/test/api/preload-credentials.test.ts index e31c04191..92a240f2c 100644 --- a/packages/engine-multi/test/api/preload-credentials.test.ts +++ b/packages/engine-multi/test/api/preload-credentials.test.ts @@ -1,6 +1,7 @@ import test from 'ava'; +import { ExecutionPlan, Job } from '@openfn/lexicon'; + import preloadCredentials from '../../src/api/preload-credentials'; -import { CompiledExecutionPlan } from '@openfn/runtime'; // Not very good test coverage test('handle a plan with no credentials', async (t) => { @@ -12,19 +13,22 @@ test('handle a plan with no credentials', async (t) => { }; const plan = { - id: 'a', - jobs: [ - { - expression: '.', - }, - { - expression: '.', - }, - { - expression: '.', - }, - ], - } as unknown as CompiledExecutionPlan; + id: t.title, + workflow: { + steps: [ + { + expression: '.', + }, + { + expression: '.', + }, + { + expression: '.', + }, + ], + }, + options: {}, + } as ExecutionPlan; const planCopy = JSON.parse(JSON.stringify(plan)); const result = await preloadCredentials(plan, loader); @@ -42,27 +46,93 @@ test('handle a plan with credentials', async (t) => { }; const plan = { - id: 'a', - jobs: [ - { - expression: '.', - configuration: 'a', - }, - { - 
expression: '.', - configuration: 'b', - }, - { - expression: '.', - configuration: 'c', - }, - ], - } as unknown as CompiledExecutionPlan; + id: t.title, + workflow: { + steps: [ + { + expression: '.', + configuration: 'a', + }, + { + expression: '.', + configuration: 'b', + }, + { + expression: '.', + configuration: 'c', + }, + ], + }, + options: {}, + } as ExecutionPlan; - const result = await preloadCredentials(plan, loader); + await preloadCredentials(plan, loader); t.is(timesCalled, 3); - t.is(plan.jobs[0].configuration, 'loaded-a'); - t.is(plan.jobs[1].configuration, 'loaded-b'); - t.is(plan.jobs[2].configuration, 'loaded-c'); + t.is((plan.workflow.steps[0] as Job).configuration, 'loaded-a'); + t.is((plan.workflow.steps[1] as Job).configuration, 'loaded-b'); + t.is((plan.workflow.steps[2] as Job).configuration, 'loaded-c'); +}); + +test('throw if one credential fails to load', async (t) => { + const loader = async () => { + throw new Error('err'); + }; + + const plan = { + id: t.title, + workflow: { + steps: [ + { + id: 'z', + expression: '.', + configuration: 'a', + }, + ], + }, + options: {}, + } as ExecutionPlan; + + try { + await preloadCredentials(plan, loader); + } catch (e: any) { + t.is(e.name, 'CredentialLoadError'); + t.is(e.message, `Failed to load credential a for step z`); + } +}); + +test('throw if several credentials fail to load', async (t) => { + const loader = async () => { + throw new Error('err'); + }; + + const plan = { + id: t.title, + workflow: { + steps: [ + { + id: 'j', + expression: '.', + configuration: 'a', + }, + { + id: 'k', + expression: '.', + configuration: 'a', + }, + ], + }, + options: {}, + } as ExecutionPlan; + + try { + await preloadCredentials(plan, loader); + } catch (e: any) { + t.is(e.name, 'CredentialLoadError'); + t.is( + e.message, + `Failed to load credential a for step j +Failed to load credential a for step k` + ); + } }); diff --git a/packages/engine-multi/test/engine.test.ts 
b/packages/engine-multi/test/engine.test.ts index ec8c2c062..d93c85f62 100644 --- a/packages/engine-multi/test/engine.test.ts +++ b/packages/engine-multi/test/engine.test.ts @@ -1,12 +1,11 @@ import test from 'ava'; import path from 'node:path'; import { createMockLogger } from '@openfn/logger'; +import type { ExecutionPlan } from '@openfn/lexicon'; -import createEngine, { ExecuteOptions } from '../src/engine'; +import createEngine, { InternalEngine } from '../src/engine'; import * as e from '../src/events'; -import { ExecutionPlan } from '@openfn/runtime'; - -// TOOD this becomes low level tests on the internal engine api +import type { ExecuteOptions } from '../src/types'; const logger = createMockLogger('', { level: 'debug' }); @@ -23,7 +22,19 @@ const options = { }, }; -let engine; +const createPlan = (expression: string = '.', id = 'a') => ({ + id, + workflow: { + steps: [ + { + expression, + }, + ], + }, + options: {}, +}); + +let engine: InternalEngine; test.afterEach(async () => { logger._reset(); @@ -82,22 +93,17 @@ test.serial( const p = path.resolve('dist/test/worker-functions.js'); engine = await createEngine(options, p); - const plan = { - id: 'a', - jobs: [ - { - expression: '22', - }, - ], - }; + const plan = createPlan('22'); - engine.execute(plan).on(e.WORKFLOW_COMPLETE, ({ state, threadId }) => { - t.is(state, 22); - t.truthy(threadId); // proves (sort of) that this has run in a worker + engine + .execute(plan, {}) + .on(e.WORKFLOW_COMPLETE, ({ state, threadId }) => { + t.is(state, 22); + t.truthy(threadId); // proves (sort of) that this has run in a worker - // Apparently engine.destroy won't resolve if we return immediately - setTimeout(done, 1); - }); + // Apparently engine.destroy won't resolve if we return immediately + setTimeout(done, 1); + }); }); } ); @@ -107,16 +113,9 @@ test.serial('execute does not return internal state stuff', async (t) => { const p = path.resolve('dist/test/worker-functions.js'); engine = await createEngine(options, 
p); - const plan = { - id: 'a', - jobs: [ - { - expression: '22', - }, - ], - }; + const plan = createPlan(); - const result = engine.execute(plan, {}); + const result: any = engine.execute(plan, {}); // Execute returns an event listener t.truthy(result.on); t.truthy(result.once); @@ -133,7 +132,6 @@ test.serial('execute does not return internal state stuff', async (t) => { t.falsy(result['options']); done(); - // TODO is this still running? Does it matter? }); }); @@ -142,17 +140,13 @@ test.serial('listen to workflow-complete', async (t) => { const p = path.resolve('dist/test/worker-functions.js'); engine = await createEngine(options, p); - const plan = { - id: 'a', - jobs: [ - { - expression: '33', - }, - ], - }; + const plan = createPlan('33'); engine.listen(plan.id, { - [e.WORKFLOW_COMPLETE]: ({ state, threadId }) => { + [e.WORKFLOW_COMPLETE]: ({ + state, + threadId, + }: e.WorkflowCompletePayload) => { t.is(state, 33); t.truthy(threadId); // proves (sort of) that this has run in a worker @@ -160,7 +154,7 @@ test.serial('listen to workflow-complete', async (t) => { setTimeout(done, 1); }, }); - engine.execute(plan); + engine.execute(plan, {}); }); }); @@ -171,22 +165,25 @@ test.serial('call listen before execute', async (t) => { const plan = { id: 'a', - jobs: [ - { - expression: '34', - }, - ], + workflow: { + steps: [ + { + expression: '34', + }, + ], + }, + options: {}, }; engine.listen(plan.id, { - [e.WORKFLOW_COMPLETE]: ({ state }) => { + [e.WORKFLOW_COMPLETE]: ({ state }: e.WorkflowCompletePayload) => { t.is(state, 34); // Apparently engine.destroy won't resolve if we return immediately setTimeout(done, 1); }, }); - engine.execute(plan); + engine.execute(plan, {}); }); }); @@ -197,21 +194,24 @@ test.serial('catch and emit errors', async (t) => { const plan = { id: 'a', - jobs: [ - { - expression: 'throw new Error("test")', - }, - ], + workflow: { + steps: [ + { + expression: 'throw new Error("test")', + }, + ], + }, + options: {}, }; 
engine.listen(plan.id, { - [e.WORKFLOW_ERROR]: ({ message }) => { + [e.WORKFLOW_ERROR]: ({ message }: e.WorkflowErrorPayload) => { t.is(message, 'test'); done(); }, }); - engine.execute(plan); + engine.execute(plan, {}); }); }); @@ -224,26 +224,31 @@ test.serial( const plan = { id: 'a', - jobs: [ - { - expression: 'while(true) {}', - }, - ], + workflow: { + steps: [ + { + expression: 'while(true) {}', + }, + ], + }, + options: {}, }; + // TODO Now then - this doesn't seem right + // the timeout should be on the xplan const opts: ExecuteOptions = { runTimeoutMs: 10, }; engine.listen(plan.id, { - [e.WORKFLOW_ERROR]: ({ message, type }) => { + [e.WORKFLOW_ERROR]: ({ message, type }: e.WorkflowErrorPayload) => { t.is(type, 'TimeoutError'); t.regex(message, /failed to return within 10ms/); done(); }, }); - engine.execute(plan, opts); + engine.execute(plan, {}, opts); }); } ); @@ -263,22 +268,25 @@ test.serial( const plan = { id: 'a', - jobs: [ - { - expression: 'while(true) {}', - }, - ], + workflow: { + steps: [ + { + expression: 'while(true) {}', + }, + ], + }, + options: {}, }; engine.listen(plan.id, { - [e.WORKFLOW_ERROR]: ({ message, type }) => { + [e.WORKFLOW_ERROR]: ({ message, type }: e.WorkflowErrorPayload) => { t.is(type, 'TimeoutError'); t.regex(message, /failed to return within 22ms/); done(); }, }); - engine.execute(plan); + engine.execute(plan, {}); }); } ); diff --git a/packages/engine-multi/test/errors.test.ts b/packages/engine-multi/test/errors.test.ts index e9202584b..26849678d 100644 --- a/packages/engine-multi/test/errors.test.ts +++ b/packages/engine-multi/test/errors.test.ts @@ -1,11 +1,12 @@ import test from 'ava'; import path from 'node:path'; +import { createMockLogger } from '@openfn/logger'; import createEngine, { EngineOptions } from '../src/engine'; -import { createMockLogger } from '@openfn/logger'; import { WORKFLOW_ERROR } from '../src/events'; +import type { RuntimeEngine } from '../src/types'; -let engine; +let engine: RuntimeEngine; 
test.before(async () => { const logger = createMockLogger('', { level: 'debug' }); @@ -30,16 +31,19 @@ test.serial('syntax error: missing bracket', (t) => { return new Promise((done) => { const plan = { id: 'a', - jobs: [ - { - id: 'x', - // This is subtle syntax error - expression: 'fn((s) => { return s )', - }, - ], + workflow: { + steps: [ + { + id: 'x', + // This is a subtle syntax error + expression: 'fn((s) => { return s )', + }, + ], + }, + options: {}, }; - engine.execute(plan).on(WORKFLOW_ERROR, (evt) => { + engine.execute(plan, {}).on(WORKFLOW_ERROR, (evt) => { t.is(evt.type, 'CompileError'); // compilation happens in the main thread t.is(evt.threadId, '-'); @@ -53,16 +57,19 @@ test.serial('syntax error: illegal throw', (t) => { return new Promise((done) => { const plan = { id: 'b', - jobs: [ - { - id: 'z', - // This is also subtle syntax error - expression: 'fn(() => throw "e")', - }, - ], + workflow: { + steps: [ + { + id: 'z', + // This is also a subtle syntax error + expression: 'fn(() => throw "e")', + }, + ], + }, + options: {}, }; - engine.execute(plan).on(WORKFLOW_ERROR, (evt) => { + engine.execute(plan, {}).on(WORKFLOW_ERROR, (evt) => { t.is(evt.type, 'CompileError'); // compilation happens in the main thread t.is(evt.threadId, '-'); @@ -75,21 +82,24 @@ test.serial('syntax error: illegal throw', (t) => { test.serial('thread oom error', (t) => { return new Promise((done) => { const plan = { - id: 'a', - jobs: [ - { - expression: `export default [(s) => { + id: 'c', + workflow: { + steps: [ + { + expression: `export default [(s) => { s.a = []; while(true) { s.a.push(new Array(1e6).fill("oom")); } return s; }]`, - }, - ], + }, + ], + }, + options: {}, }; - engine.execute(plan).on(WORKFLOW_ERROR, (evt) => { + engine.execute(plan, {}).on(WORKFLOW_ERROR, (evt) => { t.is(evt.type, 'OOMError'); t.is(evt.severity, 'kill'); t.is(evt.message, 'Run exceeded maximum memory usage'); @@ -102,21 +112,24 @@ test.serial('thread oom error', (t) => {
test.serial.skip('vm oom error', (t) => { return new Promise((done) => { const plan = { - id: 'b', - jobs: [ - { - expression: `export default [(s) => { + id: 'd', + workflow: { + steps: [ + { + expression: `export default [(s) => { s.a = []; while(true) { s.a.push(new Array(1e8).fill("oom")); } return s; }]`, - }, - ], + }, + ], + }, + options: {}, }; - engine.execute(plan).on(WORKFLOW_ERROR, (evt) => { + engine.execute(plan, {}).on(WORKFLOW_ERROR, (evt) => { t.is(evt.type, 'OOMError'); t.is(evt.severity, 'kill'); t.is(evt.message, 'Run exceeded maximum memory usage'); @@ -131,21 +144,24 @@ test.serial.skip('vm oom error', (t) => { test.serial.skip('execution error from async code', (t) => { return new Promise((done) => { const plan = { - id: 'a', - jobs: [ - { - // this error will throw within the promise, and so before the job completes - // But REALLY naughty code could throw after the job has finished - // In which case it'll be ignored - // Also note that the wrapping promise will never resolve - expression: `export default [(s) => new Promise((r) => { + id: 'e', + workflow: { + steps: [ + { + // this error will throw within the promise, and so before the job completes + // But REALLY naughty code could throw after the job has finished + // In which case it'll be ignored + // Also note that the wrapping promise will never resolve + expression: `export default [(s) => new Promise((r) => { setTimeout(() => { throw new Error(\"e1324\"); r() }, 10) })]`, - }, - ], + }, + ], + }, + options: {}, }; - engine.execute(plan).on(WORKFLOW_ERROR, (evt) => { + engine.execute(plan, {}).on(WORKFLOW_ERROR, (evt) => { t.is(evt.type, 'ExecutionError'); t.is(evt.severity, 'crash'); @@ -158,15 +174,18 @@ test.serial('emit a crash error on process.exit()', (t) => { return new Promise((done) => { const plan = { id: 'z', - jobs: [ - { - adaptor: '@openfn/helper@1.0.0', - expression: 'export default [exit()]', - }, - ], + workflow: { + steps: [ + { + adaptor: '@openfn/helper@1.0.0', 
+ expression: 'export default [exit()]', + }, + ], + }, + options: {}, }; - engine.execute(plan).on(WORKFLOW_ERROR, (evt) => { + engine.execute(plan, {}).on(WORKFLOW_ERROR, (evt) => { t.is(evt.type, 'ExitError'); t.is(evt.severity, 'crash'); t.is(evt.message, 'Process exited with code: 42'); diff --git a/packages/engine-multi/test/integration.test.ts b/packages/engine-multi/test/integration.test.ts index fed31f5b5..21c34e2cd 100644 --- a/packages/engine-multi/test/integration.test.ts +++ b/packages/engine-multi/test/integration.test.ts @@ -1,10 +1,15 @@ import test from 'ava'; import path from 'node:path'; -import createAPI from '../src/api'; import { createMockLogger } from '@openfn/logger'; +import type { ExecutionPlan } from '@openfn/lexicon'; + +import createAPI from '../src/api'; +import type { RuntimeEngine } from '../src'; const logger = createMockLogger(); -let api; +let api: RuntimeEngine; + +const emptyState = {}; test.afterEach(() => { logger._reset(); @@ -23,15 +28,19 @@ const withFn = `function fn(f) { return (s) => f(s) } let idgen = 0; -const createPlan = (jobs?: any[]) => ({ - id: `${++idgen}`, - jobs: jobs || [ - { - id: 'j1', - expression: 'export default [s => s]', +const createPlan = (jobs?: any[]) => + ({ + id: `${++idgen}`, + workflow: { + steps: jobs || [ + { + id: 'j1', + expression: 'export default [s => s]', + }, + ], }, - ], -}); + options: {}, + } as ExecutionPlan); test.serial('trigger workflow-start', (t) => { return new Promise(async (done) => { @@ -44,7 +53,7 @@ test.serial('trigger workflow-start', (t) => { const plan = createPlan(); - api.execute(plan).on('workflow-start', (evt) => { + api.execute(plan, emptyState).on('workflow-start', (evt) => { t.is(evt.workflowId, plan.id); t.truthy(evt.threadId); t.pass('workflow started'); @@ -64,7 +73,7 @@ test.serial('trigger job-start', (t) => { const plan = createPlan(); - api.execute(plan).on('job-start', (e) => { + api.execute(plan, emptyState).on('job-start', (e) => { t.is(e.workflowId, 
'2'); t.is(e.jobId, 'j1'); t.truthy(e.threadId); @@ -86,7 +95,7 @@ test.serial('trigger job-complete', (t) => { const plan = createPlan(); - api.execute(plan).on('job-complete', (evt) => { + api.execute(plan, emptyState).on('job-complete', (evt) => { t.deepEqual(evt.next, []); t.log('duration:', evt.duration); // Very lenient duration test - this often comes in around 200ms in CI @@ -115,7 +124,7 @@ test.serial('trigger workflow-complete', (t) => { const plan = createPlan(); - api.execute(plan).on('workflow-complete', (evt) => { + api.execute(plan, emptyState).on('workflow-complete', (evt) => { t.falsy(evt.state.errors); t.is(evt.workflowId, plan.id); @@ -142,7 +151,7 @@ test.serial('trigger workflow-log for job logs', (t) => { let didLog = false; - api.execute(plan).on('workflow-log', (evt) => { + api.execute(plan, emptyState).on('workflow-log', (evt) => { if (evt.name === 'JOB') { didLog = true; t.deepEqual(evt.message, JSON.stringify(['hola'])); @@ -150,7 +159,7 @@ test.serial('trigger workflow-log for job logs', (t) => { } }); - api.execute(plan).on('workflow-complete', (evt) => { + api.execute(plan, emptyState).on('workflow-complete', (evt) => { t.true(didLog); t.falsy(evt.state.errors); done(); @@ -170,25 +179,26 @@ test.serial('log errors', (t) => { }, ]); - api.execute(plan).on('workflow-log', (evt) => { - if (evt.name === 'JOB') { - t.log(evt); - t.deepEqual( - evt.message, - JSON.stringify([ - { - name: 'Error', - message: 'hola', - }, - ]) - ); - t.pass('workflow logged'); - } - }); - - api.execute(plan).on('workflow-complete', (evt) => { - done(); - }); + api + .execute(plan, emptyState) + .on('workflow-log', (evt) => { + if (evt.name === 'JOB') { + t.log(evt); + t.deepEqual( + evt.message, + JSON.stringify([ + { + name: 'Error', + message: 'hola', + }, + ]) + ); + t.pass('workflow logged'); + } + }) + .on('workflow-complete', () => { + done(); + }); }); }); @@ -208,7 +218,7 @@ test.serial('trigger workflow-log for adaptor logs', (t) => { }, ]); - 
api.execute(plan).on('workflow-log', (evt) => { + api.execute(plan, emptyState).on('workflow-log', (evt) => { if (evt.name === 'ADA') { t.deepEqual(evt.message, JSON.stringify(['hola'])); t.pass('workflow logged'); @@ -230,7 +240,7 @@ test.serial('compile and run', (t) => { }, ]); - api.execute(plan).on('workflow-complete', ({ state }) => { + api.execute(plan, emptyState).on('workflow-complete', ({ state }) => { t.deepEqual(state.data, 42); done(); }); @@ -249,7 +259,7 @@ test.serial('run without error if no state is returned', (t) => { }, ]); - api.execute(plan).on('workflow-complete', ({ state }) => { + api.execute(plan, emptyState).on('workflow-complete', ({ state }) => { t.falsy(state); // Ensure there are no error logs @@ -272,7 +282,7 @@ test.serial('errors get nicely serialized', (t) => { }, ]); - api.execute(plan).on('job-error', (evt) => { + api.execute(plan, emptyState).on('job-error', (evt) => { t.is(evt.error.type, 'TypeError'); t.is(evt.error.severity, 'fail'); t.is( @@ -299,7 +309,7 @@ test.serial( }, ]); - api.execute(plan).on('workflow-complete', ({ state }) => { + api.execute(plan, emptyState).on('workflow-complete', ({ state }) => { t.deepEqual(state, { a: 1 }); done(); }); @@ -321,7 +331,7 @@ test.serial('use custom state-props-to-remove', (t) => { }, ]); - api.execute(plan).on('workflow-complete', ({ state }) => { + api.execute(plan, emptyState).on('workflow-complete', ({ state }) => { t.deepEqual(state, { configuration: {}, response: {} }); done(); }); @@ -354,7 +364,7 @@ test.serial('evaluate conditional edges', (t) => { const plan = createPlan(jobs); - api.execute(plan).on('workflow-complete', ({ state }) => { + api.execute(plan, emptyState).on('workflow-complete', ({ state }) => { t.deepEqual(state.data, 'b'); done(); }); @@ -363,7 +373,7 @@ test.serial('evaluate conditional edges', (t) => { test.serial('preload credentials', (t) => { return new Promise(async (done) => { - let didCallLoader = true; + let didCallLoader = false; const loader = 
(id: string) => new Promise((resolve) => { @@ -393,8 +403,50 @@ test.serial('preload credentials', (t) => { const plan = createPlan(jobs); - api.execute(plan, options).on('workflow-complete', () => { + api.execute(plan, {}, options).on('workflow-complete', () => { + t.true(didCallLoader); + done(); + }); + }); +}); + +test.serial('send a workflow error if credentials fail to load', (t) => { + return new Promise(async (done) => { + let didCallLoader = false; + + const loader = () => + new Promise((_resolve, reject) => { + setTimeout(() => { + didCallLoader = true; + reject(); + }, 1); + }); + + api = await createAPI({ + logger, + }); + + const options = { + resolvers: { + credential: loader, + }, + }; + + const jobs = [ + { + id: 'a', + configuration: 'secret', + }, + ]; + + const plan = createPlan(jobs); + + api.execute(plan, {}, options).on('workflow-error', (e) => { t.true(didCallLoader); + + t.is(e.type, 'CredentialLoadError'); + t.is(e.severity, 'exception'); + t.is(e.message, 'Failed to load credential secret for step a'); done(); }); }); @@ -411,17 +463,15 @@ test.serial('accept initial state', (t) => { const plan = createPlan(); - // important! 
The runtime must use both x and y as initial state - // if we run the runtime in strict mode, x will be ignored - plan.initialState = { + const input = { x: 1, data: { y: 1, }, }; - api.execute(plan).on('workflow-complete', ({ state }) => { - t.deepEqual(state, plan.initialState); + api.execute(plan, input).on('workflow-complete', ({ state }) => { + t.deepEqual(state, input); done(); }); }); diff --git a/packages/engine-multi/test/security.test.ts b/packages/engine-multi/test/security.test.ts index 45b42634d..8c760d1f0 100644 --- a/packages/engine-multi/test/security.test.ts +++ b/packages/engine-multi/test/security.test.ts @@ -8,7 +8,7 @@ import test from 'ava'; import path from 'node:path'; import { createMockLogger } from '@openfn/logger'; -import createEngine from '../src/engine'; +import createEngine, { InternalEngine } from '../src/engine'; const logger = createMockLogger('', { level: 'debug' }); @@ -20,7 +20,7 @@ const options = { maxWorkers: 1, }; -let engine; +let engine: InternalEngine; test.before(async () => { engine = await createEngine( @@ -43,11 +43,13 @@ test.serial('parent env is hidden from sandbox', async (t) => { }); test.serial('sandbox does not share a global scope', async (t) => { + // @ts-ignore t.is(global.x, undefined); // Set a global inside the first task await engine.callWorker('setGlobalX', [9]); + // @ts-ignore // (this should not affect us outside) t.is(global.x, undefined); diff --git a/packages/engine-multi/test/worker/mock-worker.test.ts b/packages/engine-multi/test/worker/mock-worker.test.ts index 679f663a1..2947ae0c0 100644 --- a/packages/engine-multi/test/worker/mock-worker.test.ts +++ b/packages/engine-multi/test/worker/mock-worker.test.ts @@ -26,7 +26,7 @@ const workers = createPool( test('execute a mock plan inside a worker thread', async (t) => { const plan = createPlan(); - const result = await workers.exec('run', [plan]); + const result = await workers.exec('run', [plan, {}]); t.deepEqual(result, { data: { answer: 42 } 
}); }); @@ -35,7 +35,7 @@ test('execute a mock plan with data', async (t) => { id: 'j2', data: { input: 44 }, }); - const result = await workers.exec('run', [plan]); + const result = await workers.exec('run', [plan, {}]); t.deepEqual(result, { data: { answer: 44 } }); }); @@ -44,7 +44,7 @@ test('execute a mock plan with an expression', async (t) => { id: 'j2', expression: '() => ({ data: { answer: 46 } })', }); - const result = await workers.exec('run', [plan]); + const result = await workers.exec('run', [plan, {}]); t.deepEqual(result, { data: { answer: 46 } }); }); @@ -54,7 +54,7 @@ test('execute a mock plan with an expression which uses state', async (t) => { data: { input: 2 }, expression: '(s) => ({ data: { answer: s.data.input * 2 } })', }); - const result = await workers.exec('run', [plan]); + const result = await workers.exec('run', [plan, {}]); t.deepEqual(result, { data: { answer: 4 } }); }); @@ -68,7 +68,7 @@ test('execute a mock plan with a promise expression', async (t) => { }, 1); })`, }); - const result = await workers.exec('run', [plan]); + const result = await workers.exec('run', [plan, {}]); t.deepEqual(result, { data: { answer: 46 } }); }); @@ -78,16 +78,16 @@ test('expression state overrides data', async (t) => { data: { answer: 44 }, expression: '() => ({ data: { agent: "007" } })', }); - const result = await workers.exec('run', [plan]); + const result = await workers.exec('run', [plan, {}]); t.deepEqual(result, { data: { agent: '007' } }); }); test('write an exception to state', async (t) => { const plan = createPlan({ id: 'j2', - expression: 'ƸӜƷ', // it's a butterfly, obviously (and mmore importantly, invalid JSON) + expression: 'ƸӜƷ', // it's a butterfly, obviously (and more importantly, invalid JSON) }); - const result = await workers.exec('run', [plan]); + const result = await workers.exec('run', [plan, {}]); t.truthy(result.data); t.truthy(result.error); }); @@ -98,7 +98,7 @@ test('execute a mock plan with delay', async (t) => { id: 
'j1', _delay: 50, }); - await workers.exec('run', [plan]); + await workers.exec('run', [plan, {}]); const elapsed = new Date().getTime() - start; t.log(elapsed); t.assert(elapsed > 40); @@ -108,7 +108,7 @@ test('Publish workflow-start event', async (t) => { const plan = createPlan(); plan.id = 'xx'; let didFire = false; - await workers.exec('run', [plan], { + await workers.exec('run', [plan, {}], { on: ({ type }) => { if (type === e.WORKFLOW_START) { didFire = true; @@ -122,7 +122,7 @@ test('Publish workflow-complete event with state', async (t) => { const plan = createPlan(); let didFire = false; let state; - await workers.exec('run', [plan], { + await workers.exec('run', [plan, {}], { on: ({ type, ...args }) => { if (type === e.WORKFLOW_COMPLETE) { didFire = true; @@ -142,9 +142,9 @@ test('Publish a job log event', async (t) => { }`, }); let didFire = false; - let log; + let log: any; let id; - await workers.exec('run', [plan], { + await workers.exec('run', [plan, {}], { on: ({ workflowId, type, log: _log }) => { if (type === e.LOG) { didFire = true; @@ -154,7 +154,7 @@ test('Publish a job log event', async (t) => { }, }); t.true(didFire); - t.is(id, plan.id); + t.is(id, plan.id as any); t.is(log.level, 'info'); t.is(log.name, 'JOB'); diff --git a/packages/engine-multi/test/worker/pool.test.ts b/packages/engine-multi/test/worker/pool.test.ts index ab679efe9..aa74d6a6f 100644 --- a/packages/engine-multi/test/worker/pool.test.ts +++ b/packages/engine-multi/test/worker/pool.test.ts @@ -56,7 +56,7 @@ test.serial( async (t) => { const pool = createPool(workerPath, { maxWorkers: 1 }, logger); - const ids = {}; + const ids: Record = {}; const saveProcessId = (id: string) => { if (!ids[id]) { @@ -98,8 +98,8 @@ test('Remove a worker from the pool and release it when finished', async (t) => return p.then(() => { t.is(pool._pool.length, 5); - // the first thing in the queue should be a worker - t.true(pool[0] !== false); + // the last thing in the queue should be a worker + 
t.true(pool._pool[4] !== false); }); }); @@ -168,7 +168,7 @@ test('throw if the task throws', async (t) => { try { await pool.exec('throw', []); - } catch (e) { + } catch (e: any) { // NB e is not an error instance t.is(e.message, 'test_error'); } @@ -179,7 +179,7 @@ test('throw if memory limit is exceeded', async (t) => { try { await pool.exec('blowMemory', [], { memoryLimitMb: 100 }); - } catch (e) { + } catch (e: any) { t.is(e.message, 'Run exceeded maximum memory usage'); t.is(e.name, 'OOMError'); } @@ -398,13 +398,13 @@ test('events should disconnect between executions', (t) => { return new Promise(async (done) => { const pool = createPool(workerPath, { capacity: 1 }, logger); - const counts = { + const counts: Record<string, number> = { a: 0, b: 0, c: 0, }; - const on = (event) => { + const on = (event: { type: string; result: number }) => { if (event.type === 'test-message') { counts[event.result] += 1; } diff --git a/packages/engine-multi/tsconfig.json b/packages/engine-multi/tsconfig.json index b3d766fc1..fda756656 100644 --- a/packages/engine-multi/tsconfig.json +++ b/packages/engine-multi/tsconfig.json @@ -1,4 +1,4 @@ { "extends": "../../tsconfig.common", - "include": ["src/**/*.ts"] + "include": ["src/**/*.ts", "test/**/*.ts"] } diff --git a/packages/lexicon/README.md b/packages/lexicon/README.md new file mode 100644 index 000000000..baa736f5f --- /dev/null +++ b/packages/lexicon/README.md @@ -0,0 +1,42 @@ +The lexicon (aka the OpenFunctionicon) is a central repository of key type and word definitions. + +It's a types repo and glossary at the same time. + +## Overview + +The OpenFunction stack is built on the concepts of Workflows, Runs, Jobs and Expressions (and more). Some of these terms can be used interchangeably, or used differently in certain contexts. + +Here are the key concepts: + +- An **Expression** is a string of Javascript (or Javascript-like code) written to be run in the CLI or Lightning.
+- A **Job** is an expression plus some metadata required to run it - typically an adaptor and credentials. + The terms Job and Expression are often used interchangeably. +- A **Workflow** is a series of steps to be executed in sequence. Steps are usually Jobs (and so job and step are often used + interchangeably), but can be Triggers. +- An **Execution Plan** is a Workflow plus some options which inform how it should be executed (i.e. start node, timeout). + +The term "Execution Plan" is mostly used internally and not exposed to users, and is usually interchangeable with Workflow. + +You can find formal type definitions of these and more in `src/core.d.ts`. + +Lightning also introduces its own terminology, as it is a standalone application with features that the runtime itself does not have. + +In Lightning, a Step can be a Job or a Trigger. Jobs are connected by Paths (sometimes also known as Edges), which may be conditional. + +You can find Lightning-specific typings in `src/lightning.d.ts`. + +## Usage + +This repo only contains type definitions. It is unlikely to be of use outside the repo - although users are free to import and use it.
+ +To use the core types, simply import what you need: + +``` +import { ExecutionPlan } from '@openfn/lexicon'; +``` + +To use the Lightning types, import from `@openfn/lexicon/lightning`: + +``` +import { Run } from '@openfn/lexicon/lightning'; +``` diff --git a/packages/lexicon/core.d.ts b/packages/lexicon/core.d.ts new file mode 100644 index 000000000..ec21ec13a --- /dev/null +++ b/packages/lexicon/core.d.ts @@ -0,0 +1,137 @@ +import { SanitizePolicies } from '@openfn/logger'; + +/** + * An execution plan is a portable definition of a Work Order, + * or, a unit of work to execute + */ +export type ExecutionPlan = { + id?: UUID; // this would be the run (nee attempt) id + workflow: Workflow; + options: WorkflowOptions; +}; + +/** + * A workflow is just a series of steps, executed start to finish + */ +export type Workflow = { + id?: UUID; // unique id used to track this workflow. Could be autogenerated + + // TODO: make required (worker and cli may have to generate a name) + name?: string; + + steps: Array<Job | Trigger>; +}; + +/** + * A type of Step which executes code + * This is some openfn expression plus metadata (adaptor, credentials) + */ +export interface Job extends Step { + adaptor?: string; + expression: Expression; + configuration?: object | string; + state?: Omit<State, 'configuration'> | string; +} + +/** + * A raw openfn-js script to be executed by the runtime + * + * Can be compiled as part of a job. + * + * The expression itself has no metadata.
It likely needs + * an adaptor and input state to run + */ +export type Expression = string; + +/** + * State is an object passed into a workflow and returned from a workflow + */ +export declare interface State<S = object, C = object> { + // Core state props used by the runtime + configuration?: C; + data?: S; + errors?: Record<StepId, ErrorReport>; + + // Props added by common + references?: Array<any>; + + // Props commonly used by other adaptors + index?: number; + response?: any; + query?: any; + + [other: string]: any; +} + +/** + * An operation function that runs in an Expression + */ +export declare interface Operation<T = Promise<State> | State> { + (state: State): T; +} + +/** + * Options which can be set on a workflow as part of an execution plan + */ +export type WorkflowOptions = { + // TODO Both numbers in minutes maybe + timeout?: number; + stepTimeout?: number; + start?: StepId; + + // TODO not supported yet I don't think? + sanitize?: SanitizePolicies; +}; + +export type StepId = string; + +/** + * A thing to be run as part of a workflow + * (usually a job) + */ +export interface Step { + id?: StepId; + name?: string; // user-friendly name used in logging + + next?: string | Record<StepId, StepEdge>; + previous?: StepId; +} + +/** + * Not actually keen on the node/edge semantics here + * Maybe StepLink?
+ */ +export type StepEdge = boolean | string | ConditionalStepEdge; + +export type ConditionalStepEdge = { + condition?: string; // Javascript expression (function body, not function) + label?: string; + disabled?: boolean; +}; + +/** + * A no-op type of Step + */ +export interface Trigger extends Step {} + +/** + * An expression which has been compiled, and so includes import and export statements + */ +export type CompiledExpression = Expression; + +export type ErrorReport = { + type: string; // The name/type of error, i.e. Error, TypeError + message: string; // simple human readable message + stepId: StepId; // ID of the associated job + error: Error; // the original underlying error object + + code?: string; // The error code, if any (found on node errors) + stack?: string; // not sure this is useful? + data?: any; // General store for related error information +}; + +// TODO standard shape of error object in our stack + +type UUID = string; + +export type Lazy<T> = T | string; diff --git a/packages/lexicon/index.d.ts b/packages/lexicon/index.d.ts new file mode 100644 index 000000000..5ee3e64b9 --- /dev/null +++ b/packages/lexicon/index.d.ts @@ -0,0 +1,2 @@ +export * from './core'; +export * as lightning from './lightning'; diff --git a/packages/lexicon/index.js b/packages/lexicon/index.js new file mode 100644 index 000000000..3d2bc3b3d --- /dev/null +++ b/packages/lexicon/index.js @@ -0,0 +1 @@ +export * as lightning from './lightning'; diff --git a/packages/lexicon/lightning.d.ts b/packages/lexicon/lightning.d.ts new file mode 100644 index 000000000..c7e232366 --- /dev/null +++ b/packages/lexicon/lightning.d.ts @@ -0,0 +1,185 @@ +import type { SanitizePolicies } from '@openfn/logger'; +import { State } from './core'; + +export const API_VERSION: number; + +type StepId = string; + +/** + * Type definitions for Lightning and Worker interfaces + * + * This is the lightning-worker contract + * + * It is helpful to have these in the lexicon to avoid a circular
dependency between lightning and the worker + * It's also kinda nice that the contract isn't in the worker itself, it's on neutral ground + */ + +/** + * An execution plan representing a Lightning 'Run'. + * This represents the execution of a workflow. + * + * The data structure that Lightning sends is converted by the Worker into + * a runtime ExecutionPlan (as found in Core) + */ +export type LightningPlan = { + id: string; + name?: string; + dataclip_id: string; + starting_node_id: string; + + triggers: Node[]; + jobs: Node[]; + edges: Edge[]; + + options?: LightningPlanOptions; +}; + +/** + * These are options that can be sent to the worker with an execution plan + * They broadly map to the Workflow Options that are fed straight into the runtime + * and saved to the plan itself + * (although at the time of writing timeout is handled by the worker, not the runtime) + */ +export type LightningPlanOptions = { + runTimeoutMs?: number; + sanitize?: SanitizePolicies; + start?: StepId; + output_dataclips?: boolean; +}; + +/** + * This is a Job or Trigger node in a Lightning plan, + * AKA a Step. + * + * Sticking with the Node/Edge semantics to help distinguish the + * Lightning and runtime typings + */ +export type Node = { + id: string; + name?: string; + body?: string; + adaptor?: string; + credential?: any; + credential_id?: string; + type?: 'webhook' | 'cron'; // trigger only + state?: State; +}; + +/** + * This is a Path (or link) between two Jobs in a Plan.
+ * + * Sticking with the Node/Edge semantics to help distinguish the + * Lightning and runtime typings + */ +export interface Edge { + id: string; + source_job_id?: string; + source_trigger_id?: string; + target_job_id: string; + name?: string; + condition?: string; + error_path?: boolean; + errors?: any; + enabled?: boolean; +} + +export type DataClip = Record<string, any>; + +export type Credential = Record<string, any>; + +// TODO export reason strings from this repo +// and explain what each reason means +export type ExitReasonStrings = + | 'success' + | 'fail' + | 'crash' + | 'kill' + | 'cancel' + | 'exception'; + +export type CONNECT = 'socket:connect'; + +// client left or joined a channel +export type CHANNEL_JOIN = 'socket:channel-join'; +export type CHANNEL_LEAVE = 'socket:channel-leave'; + +// Queue Channel + +// This is the event name +export type CLAIM = 'claim'; + +// This is the payload in the message sent to lightning +export type ClaimPayload = { demand?: number }; + +// This is the response from lightning +export type ClaimReply = { runs: Array<ClaimRun> }; +export type ClaimRun = { id: string; token: string }; + +// Run channel + +export type GET_PLAN = 'fetch:plan'; +export type GET_CREDENTIAL = 'fetch:credential'; +export type GET_DATACLIP = 'fetch:dataclip'; +export type RUN_START = 'run:start'; +export type RUN_COMPLETE = 'run:complete'; +export type RUN_LOG = 'run:log'; +export type STEP_START = 'step:start'; +export type STEP_COMPLETE = 'step:complete'; + +export type ExitReason = { + reason: ExitReasonStrings; + error_message: string | null; + error_type: string | null; +}; + +export type GetPlanPayload = void; // no payload +export type GetPlanReply = LightningPlan; + +export type GetCredentialPayload = { id: string }; +// credential in-line, no wrapper, arbitrary data +export type GetCredentialReply = {}; + +export type GetDataclipPayload = { id: string }; +export type GetDataClipReply = Uint8Array; // represents a json string Run + +export type RunStartPayload = void; //
no payload +export type RunStartReply = {}; // no payload + +export type RunCompletePayload = ExitReason & { + final_dataclip_id?: string; // TODO this will be removed soon +}; +export type RunCompleteReply = undefined; + +export type RunLogPayload = { + message: Array<any>; + timestamp: string; + run_id: string; + level?: string; + source?: string; // namespace + job_id?: string; + step_id?: string; +}; +export type RunLogReply = void; + +export type StepStartPayload = { + job_id: string; + step_id: string; + run_id?: string; + input_dataclip_id?: string; +}; +export type StepStartReply = void; + +export type StepCompletePayload = ExitReason & { + run_id?: string; + job_id: string; + step_id: string; + output_dataclip?: string; + output_dataclip_id?: string; + thread_id?: string; + mem: { + job: number; + system: number; + }; + duration: number; +}; +export type StepCompleteReply = void; diff --git a/packages/lexicon/lightning.js b/packages/lexicon/lightning.js new file mode 100644 index 000000000..de59e06c6 --- /dev/null +++ b/packages/lexicon/lightning.js @@ -0,0 +1,6 @@ +/* + * The API SPEC version represented in lightning.d.ts + * Note that the major version represents the API spec version, while the minor version + * represents the lexicon implementation of it + */ +export const API_VERSION = 1.1; diff --git a/packages/lexicon/package.json b/packages/lexicon/package.json new file mode 100644 index 000000000..0a19ddd8d --- /dev/null +++ b/packages/lexicon/package.json @@ -0,0 +1,26 @@ +{ + "name": "@openfn/lexicon", + "version": "1.0.0", + "description": "Central repo of names and type definitions", + "author": "Open Function Group ", + "license": "ISC", + "type": "module", + "main": "index.js", + "exports": { + ".": { + "import": { + "default": "./index.js", + "types": "./core.d.ts" + } + }, + "./lightning": { + "import": { + "default": "./lightning.js", + "types": "./lightning.d.ts" + } + } + }, + "devDependencies": { + "@openfn/logger": "workspace:^" + } +} diff
--git a/packages/lightning-mock/CHANGELOG.md b/packages/lightning-mock/CHANGELOG.md index 1184018dd..edff55188 100644 --- a/packages/lightning-mock/CHANGELOG.md +++ b/packages/lightning-mock/CHANGELOG.md @@ -1,5 +1,30 @@ # @openfn/lightning-mock +## 2.0.0 + +### Major Changes + +- 86dd668: Symbolic 1.0 version release + +### Minor Changes + +- 29bff41: Optionally mock the run token + +### Patch Changes + +- Updated dependencies [5f24294] +- Updated dependencies [649ca43] +- Updated dependencies [86dd668] +- Updated dependencies [823b471] +- Updated dependencies [9f6c35d] +- Updated dependencies [86dd668] +- Updated dependencies [ea6fc05] +- Updated dependencies [86dd668] + - @openfn/engine-multi@1.0.0 + - @openfn/logger@1.0.0 + - @openfn/runtime@1.0.0 + - @openfn/lexicon@1.0.0 + ## 1.2.1 ### Patch Changes diff --git a/packages/lightning-mock/package.json b/packages/lightning-mock/package.json index 32b41dfd4..6d44a6698 100644 --- a/packages/lightning-mock/package.json +++ b/packages/lightning-mock/package.json @@ -1,6 +1,6 @@ { "name": "@openfn/lightning-mock", - "version": "1.2.1", + "version": "2.0.0", "private": true, "description": "A mock Lightning server", "main": "dist/index.js", @@ -18,6 +18,7 @@ "dependencies": { "@koa/router": "^12.0.0", "@openfn/engine-multi": "workspace:*", + "@openfn/lexicon": "workspace:^", "@openfn/logger": "workspace:*", "@openfn/runtime": "workspace:*", "@types/koa-logger": "^3.1.2", diff --git a/packages/lightning-mock/src/api-dev.ts b/packages/lightning-mock/src/api-dev.ts index 2ac6bd23b..2ab1a5fbd 100644 --- a/packages/lightning-mock/src/api-dev.ts +++ b/packages/lightning-mock/src/api-dev.ts @@ -2,19 +2,17 @@ * This module sets up a bunch of dev-only APIs * These are not intended to be reflected in Lightning itself */ +import crypto from 'node:crypto'; import Router from '@koa/router'; import { Logger } from '@openfn/logger'; -import crypto from 'node:crypto'; -import { RUN_COMPLETE } from './events'; - -import { ServerState } 
from './server'; - import type { + LightningPlan, RunCompletePayload, - Run, - DevServer, - LightningEvents, -} from './types'; +} from '@openfn/lexicon/lightning'; + +import { ServerState } from './server'; +import { RUN_COMPLETE } from './events'; +import type { DevServer, LightningEvents } from './types'; type Api = { startRun(runId: string): void; @@ -41,7 +39,7 @@ const setupDevAPI = ( app.getDataclip = (id: string) => state.dataclips[id]; - app.enqueueRun = (run: Run, workerId = 'rte') => { + app.enqueueRun = (run: LightningPlan, workerId = 'rte') => { state.runs[run.id] = run; state.results[run.id] = { workerId, // TODO @@ -140,7 +138,7 @@ const setupRestAPI = (app: DevServer, state: ServerState, logger: Logger) => { const router = new Router(); router.post('/run', (ctx) => { - const run = ctx.request.body as Run; + const run = ctx.request.body as LightningPlan; if (!run) { ctx.response.status = 400; @@ -157,7 +155,7 @@ const setupRestAPI = (app: DevServer, state: ServerState, logger: Logger) => { // convert credentials and dataclips run.jobs.forEach((job) => { - if (job.credential) { + if (job.credential && typeof job.credential !== 'string') { const cid = crypto.randomUUID(); state.credentials[cid] = job.credential; job.credential = cid; diff --git a/packages/lightning-mock/src/api-sockets.ts b/packages/lightning-mock/src/api-sockets.ts index 2aa013a55..f56d5c51d 100644 --- a/packages/lightning-mock/src/api-sockets.ts +++ b/packages/lightning-mock/src/api-sockets.ts @@ -1,26 +1,6 @@ import { WebSocketServer } from 'ws'; import createLogger, { LogLevel, Logger } from '@openfn/logger'; import type { Server } from 'http'; - -import createPheonixMockSocketServer, { - DevSocket, - PhoenixEvent, - PhoenixEventStatus, -} from './socket-server'; -import { - RUN_COMPLETE, - RUN_LOG, - RUN_START, - CLAIM, - GET_PLAN, - GET_CREDENTIAL, - GET_DATACLIP, - STEP_COMPLETE, - STEP_START, -} from './events'; -import { extractRunId, stringify } from './util'; - -import type 
{ ServerState } from './server'; import type { RunStartPayload, RunStartReply, @@ -41,7 +21,27 @@ import type { StepCompleteReply, StepStartPayload, StepStartReply, -} from './types'; +} from '@openfn/lexicon/lightning'; + +import createPheonixMockSocketServer, { + DevSocket, + PhoenixEvent, + PhoenixEventStatus, +} from './socket-server'; +import { + RUN_COMPLETE, + RUN_LOG, + RUN_START, + CLAIM, + GET_PLAN, + GET_CREDENTIAL, + GET_DATACLIP, + STEP_COMPLETE, + STEP_START, +} from './events'; +import { generateRunToken } from './tokens'; +import { extractRunId, stringify } from './util'; +import type { ServerState } from './server'; // dumb cloning id // just an idea for unit tests @@ -102,8 +102,8 @@ const createSocketAPI = ( }); wss.registerEvents('worker:queue', { - [CLAIM]: (ws, event: PhoenixEvent) => { - const { runs } = pullClaim(state, ws, event); + [CLAIM]: async (ws, event: PhoenixEvent) => { + const { runs } = await pullClaim(state, ws, event); state.events.emit(CLAIM, { payload: runs, state: clone(state), @@ -166,13 +166,13 @@ const createSocketAPI = ( // pull claim will try and pull a claim off the queue, // and reply with the response // the reply ensures that only the calling worker will get the run - function pullClaim( + async function pullClaim( state: ServerState, ws: DevSocket, evt: PhoenixEvent ) { const { ref, join_ref, topic } = evt; - const { queue } = state; + const { queue, options } = state; let count = 1; const runs: ClaimRun[] = []; @@ -185,9 +185,10 @@ const createSocketAPI = ( // TODO assign the worker id to the run // Not needed by the mocks at the moment const next = queue.shift(); - // TODO the token in the mock is trivial because we're not going to do any validation on it yet - // TODO need to save the token associated with this run - runs.push({ id: next!, token: 'x.y.z' }); + + const token = await generateRunToken(next!, options.runPrivateKey); + + runs.push({ id: next!, token }); count -= 1; startRun(next!); @@ -232,10 +233,7 
@@ const createSocketAPI = ( let payload = { status: 'ok' as PhoenixEventStatus, }; - if ( - !state.pending[runId] || - state.pending[runId].status !== 'started' - ) { + if (!state.pending[runId] || state.pending[runId].status !== 'started') { payload = { status: 'error', }; @@ -254,21 +252,30 @@ const createSocketAPI = ( evt: PhoenixEvent ) { const { ref, join_ref, topic, payload } = evt; - const response = state.credentials[payload.id]; - // console.log(topic, event, response); + const cred = state.credentials[payload.id]; + + let response; + if (cred) { + response = { + status: 'ok', + response: cred, + }; + } else { + response = { + status: 'error', + response: 'not_found', + }; + } + ws.reply({ ref, join_ref, topic, - payload: { - status: 'ok', - response, - }, + // @ts-ignore + payload: response, }); } - // TODO this mock function is broken in the phoenix package update - // (I am not TOO worried, the actual integration works fine) function getDataclip( state: ServerState, ws: DevSocket, @@ -277,11 +284,19 @@ const createSocketAPI = ( const { ref, topic, join_ref } = evt; const dataclip = state.dataclips[evt.payload.id]; - // Send the data as an ArrayBuffer (our stringify function will do this) - const payload = { - status: 'ok', - response: enc.encode(stringify(dataclip)), - }; + let payload; + if (dataclip) { + payload = { + status: 'ok', + response: enc.encode(stringify(dataclip)), + }; + } else { + // TODO I think this is actually tidier than what lightning does... 
+ payload = { + status: 'error', + response: 'not_found', + }; + } ws.reply({ ref, diff --git a/packages/lightning-mock/src/index.ts b/packages/lightning-mock/src/index.ts index 5f3b83d4f..94cc4ef21 100644 --- a/packages/lightning-mock/src/index.ts +++ b/packages/lightning-mock/src/index.ts @@ -1,2 +1,4 @@ import createLightningServer from './server'; export default createLightningServer; + +export { toBase64, generateKeys } from './util'; diff --git a/packages/lightning-mock/src/server.ts b/packages/lightning-mock/src/server.ts index 8191c23f9..a035d6da4 100644 --- a/packages/lightning-mock/src/server.ts +++ b/packages/lightning-mock/src/server.ts @@ -7,13 +7,14 @@ import createLogger, { LogLevel, Logger, } from '@openfn/logger'; +import type { StepId } from '@openfn/lexicon'; +import type { RunLogPayload, LightningPlan } from '@openfn/lexicon/lightning'; import createWebSocketAPI from './api-sockets'; import createDevAPI from './api-dev'; +import { fromBase64 } from './util'; +import type { DevServer } from './types'; -import type { RunLogPayload, Run, DevServer } from './types'; - -type StepId = string; type JobId = string; export type RunState = { @@ -29,7 +30,7 @@ export type ServerState = { credentials: Record; // list of runs by id - runs: Record; + runs: Record; // list of dataclips by id dataclips: Record; @@ -43,12 +44,17 @@ export type ServerState = { // event emitter for debugging and observability events: EventEmitter; + + options: LightningOptions; }; export type LightningOptions = { logger?: Logger; logLevel?: LogLevel; port?: string | number; + + // if passed, a JWT will be included in all claim responses + runPrivateKey?: string; }; export type RunId = string; @@ -57,6 +63,11 @@ export type RunId = string; const createLightningServer = (options: LightningOptions = {}) => { const logger = options.logger || createMockLogger(); + // decode the incoming private key from base 64 + const runPrivateKey = options.runPrivateKey + ? 
fromBase64(options.runPrivateKey) + : undefined; + const state = { credentials: {}, runs: {}, @@ -66,6 +77,11 @@ const createLightningServer = (options: LightningOptions = {}) => { queue: [] as RunId[], results: {}, events: new EventEmitter(), + + options: { + ...options, + runPrivateKey, + }, } as ServerState; const app = new Koa() as DevServer; diff --git a/packages/lightning-mock/src/tokens.ts b/packages/lightning-mock/src/tokens.ts new file mode 100644 index 000000000..cb3aa1608 --- /dev/null +++ b/packages/lightning-mock/src/tokens.ts @@ -0,0 +1,29 @@ +import * as jose from 'jose'; +import crypto from 'node:crypto'; + +export const generateRunToken = async ( + runId: string, + privateKey?: string +): Promise => { + if (privateKey) { + try { + const alg = 'RS256'; + + const key = crypto.createPrivateKey(privateKey); + + const jwt = await new jose.SignJWT({ id: runId }) + .setProtectedHeader({ alg }) + .setIssuedAt() + .setIssuer('Lightning') + .setExpirationTime('2h') + .sign(key); + return jwt; + } catch (e) { + console.error('ERROR IN MOCK LIGHTNING SERVER'); + console.error('Failed to generate JWT token for run ', runId); + console.error(e); + } + } + + return 'x.y.z'; +}; diff --git a/packages/lightning-mock/src/types.ts b/packages/lightning-mock/src/types.ts index ce9a492b3..b3613f986 100644 --- a/packages/lightning-mock/src/types.ts +++ b/packages/lightning-mock/src/types.ts @@ -1,50 +1,20 @@ import Koa from 'koa'; +import type { + LightningPlan, + DataClip, + Credential, +} from '@openfn/lexicon/lightning'; import type { ServerState } from './server'; -export type Node = { - id: string; - body?: string; - adaptor?: string; - credential?: any; // TODO tighten this up, string or object - type?: 'webhook' | 'cron'; // trigger only - state?: any; // Initial state / defaults -}; - -export interface Edge { - id: string; - source_job_id?: string; - source_trigger_id?: string; - target_job_id: string; - name?: string; - condition?: string; - error_path?: 
boolean; - errors?: any; -} - -// An run object returned by Lightning -export type Run = { - id: string; - dataclip_id: string; - starting_node_id: string; - - triggers: Node[]; - jobs: Node[]; - edges: Edge[]; - - options?: Record; // TODO type the expected options -}; - export type LightningEvents = 'log' | 'run-complete'; -export type DataClip = any; - export type DevServer = Koa & { state: ServerState; addCredential(id: string, cred: Credential): void; addDataclip(id: string, data: DataClip): void; - enqueueRun(run: Run): void; + enqueueRun(run: LightningPlan): void; destroy: () => void; - getRun(id: string): Run; + getRun(id: string): LightningPlan; getCredential(id: string): Credential; getDataclip(id: string): DataClip; getQueueLength(): number; @@ -57,80 +27,9 @@ export type DevServer = Koa & { runId: string, fn: (evt: any) => void ): void; - registerRun(run: Run): void; + registerRun(run: LightningPlan): void; removeAllListeners(): void; reset(): void; startRun(id: string): any; waitForResult(runId: string): Promise; }; - -/** - * These are duplicated from the worker and subject to drift! 
- * We cannot import them directly because it creates a circular build dependency mock <-> worker - * We cannot declare an internal private types module because the generated dts will try to import from it - * - * The list of types is small enough right now that this is just about manageable - **/ -export type ExitReasonStrings = - | 'success' - | 'fail' - | 'crash' - | 'kill' - | 'cancel' - | 'exception'; - -export type ExitReason = { - reason: ExitReasonStrings; - error_message: string | null; - error_type: string | null; -}; - -export type ClaimPayload = { demand?: number }; -export type ClaimReply = { runs: Array }; -export type ClaimRun = { id: string; token: string }; - -export type GetPlanPayload = void; // no payload -export type GetPlanReply = Run; - -export type GetCredentialPayload = { id: string }; -// credential in-line, no wrapper, arbitrary data -export type GetCredentialReply = {}; - -export type GetDataclipPayload = { id: string }; -export type GetDataClipReply = Uint8Array; // represents a json string Run - -export type RunStartPayload = void; // no payload -export type RunStartReply = {}; // no payload - -export type RunCompletePayload = ExitReason & { - final_dataclip_id?: string; // TODO this will be removed soon -}; -export type RunCompleteReply = undefined; - -export type RunLogPayload = { - message: Array; - timestamp: string; - run_id: string; - level?: string; - source?: string; // namespace - job_id?: string; - step_id?: string; -}; -export type RunLogReply = void; - -export type StepStartPayload = { - job_id: string; - step_id: string; - run_id?: string; - input_dataclip_id?: string; -}; -export type StepStartReply = void; - -export type StepCompletePayload = ExitReason & { - run_id?: string; - job_id: string; - step_id: string; - output_dataclip?: string; - output_dataclip_id?: string; -}; -export type StepCompleteReply = void; diff --git a/packages/lightning-mock/src/util.ts b/packages/lightning-mock/src/util.ts index 
5f9c857b0..2ab35d4f8 100644 --- a/packages/lightning-mock/src/util.ts +++ b/packages/lightning-mock/src/util.ts @@ -1,9 +1,9 @@ import fss from 'fast-safe-stringify'; +import * as jose from 'jose'; export const RUN_PREFIX = 'run:'; -export const extractRunId = (topic: string) => - topic.substr(RUN_PREFIX.length); +export const extractRunId = (topic: string) => topic.substr(RUN_PREFIX.length); // This is copied out of ws-worker and untested here export const stringify = (obj: any): string => @@ -13,3 +13,18 @@ export const stringify = (obj: any): string => } return value; }); + +export const generateKeys = async () => { + const { publicKey, privateKey } = await jose.generateKeyPair('RS256'); + return { + // @ts-ignore export function + public: publicKey.export({ type: 'pkcs1', format: 'pem' }), + // @ts-ignore export function + private: privateKey.export({ type: 'pkcs1', format: 'pem' }), + }; +}; + +export const toBase64 = (key: string) => Buffer.from(key).toString('base64'); + +export const fromBase64 = (key: string) => + Buffer.from(key, 'base64').toString(); diff --git a/packages/lightning-mock/test/channels/claim.test.ts b/packages/lightning-mock/test/channels/claim.test.ts index f0c4fd6f8..54befecdb 100644 --- a/packages/lightning-mock/test/channels/claim.test.ts +++ b/packages/lightning-mock/test/channels/claim.test.ts @@ -8,8 +8,8 @@ const port = 4444; type Channel = any; -let server; -let client; +let server: any; +let client: any; test.before(async () => ({ server, client } = await setup(port))); @@ -31,7 +31,7 @@ const join = (channelName: string, params: any = {}): Promise => .receive('ok', () => { done(channel); }) - .receive('error', (err) => { + .receive('error', (err: any) => { // err will be the response message on the payload (ie, invalid_token, invalid_run_id etc) reject(new Error(err)); }); @@ -46,7 +46,7 @@ test.serial( const channel = await join('worker:queue'); // response is an array of run ids - channel.push(CLAIM).receive('ok', (response) 
=> { + channel.push(CLAIM).receive('ok', (response: any) => { const { runs } = response; t.assert(Array.isArray(runs)); t.is(runs.length, 0); @@ -67,7 +67,7 @@ test.serial( const channel = await join('worker:queue'); // response is an array of run ids - channel.push(CLAIM).receive('ok', (response) => { + channel.push(CLAIM).receive('ok', (response: any) => { const { runs } = response; t.truthy(runs); t.is(runs.length, 1); diff --git a/packages/lightning-mock/test/channels/run.test.ts b/packages/lightning-mock/test/channels/run.test.ts index d01889c3d..5517e9b04 100644 --- a/packages/lightning-mock/test/channels/run.test.ts +++ b/packages/lightning-mock/test/channels/run.test.ts @@ -1,4 +1,10 @@ import test from 'ava'; +import type { + LightningPlan, + RunCompletePayload, + Credential, + DataClip, +} from '@openfn/lexicon/lightning'; import { setup } from '../util'; import { runs, credentials, dataclips } from '../data'; @@ -9,16 +15,14 @@ import { GET_DATACLIP, } from '../../src/events'; -import { RunCompletePayload } from '@openfn/ws-worker'; - const enc = new TextDecoder('utf-8'); type Channel = any; const port = 7777; -let server; -let client; +let server: any; +let client: any; // Set up a lightning server and a phoenix socket client before each test test.before(async () => ({ server, client } = await setup(port))); @@ -41,7 +45,7 @@ const join = (channelName: string, params: any = {}): Promise => .receive('ok', () => { done(channel); }) - .receive('error', (err) => { + .receive('error', (err: any) => { // err will be the response message on the payload (ie, invalid_token, invalid_run_id etc) reject(new Error(err)); }); @@ -72,7 +76,7 @@ test.serial('get run data through the run channel', async (t) => { server.startRun(run1.id); const channel = await join(`run:${run1.id}`, { token: 'a.b.c' }); - channel.push(GET_PLAN).receive('ok', (run) => { + channel.push(GET_PLAN).receive('ok', (run: LightningPlan) => { t.deepEqual(run, run1); done(); }); @@ -126,20 +130,39 
@@ test.serial('get credential through the run channel', async (t) => { server.addCredential('a', credentials['a']); const channel = await join(`run:${run1.id}`, { token: 'a.b.c' }); - channel.push(GET_CREDENTIAL, { id: 'a' }).receive('ok', (result) => { - t.deepEqual(result, credentials['a']); - done(); - }); + channel + .push(GET_CREDENTIAL, { id: 'a' }) + .receive('ok', (result: Credential) => { + t.deepEqual(result, credentials['a']); + done(); + }); }); }); +test.serial( + 'get credential should error if the credential does not exist', + async (t) => { + return new Promise(async (done) => { + server.startRun(run1.id); + + const channel = await join(`run:${run1.id}`, { token: 'a.b.c' }); + channel + .push(GET_CREDENTIAL, { id: 'unknown' }) + .receive('error', (result: any) => { + t.is(result, 'not_found'); + done(); + }); + }); + } +); + test.serial('get dataclip through the run channel', async (t) => { return new Promise(async (done) => { server.startRun(run1.id); server.addDataclip('d', dataclips['d']); const channel = await join(`run:${run1.id}`, { token: 'a.b.c' }); - channel.push(GET_DATACLIP, { id: 'd' }).receive('ok', (result) => { + channel.push(GET_DATACLIP, { id: 'd' }).receive('ok', (result: any) => { const str = enc.decode(new Uint8Array(result)); const dataclip = JSON.parse(str); t.deepEqual(dataclip, dataclips['d']); @@ -148,6 +171,23 @@ test.serial('get dataclip through the run channel', async (t) => { }); }); +test.serial( + 'get dataclip should error if the dataclip does not exist', + async (t) => { + return new Promise(async (done) => { + server.startRun(run1.id); + + const channel = await join(`run:${run1.id}`, { token: 'a.b.c' }); + channel + .push(GET_DATACLIP, { id: 'x' }) + .receive('error', (result: any) => { + t.is(result, 'not_found'); + done(); + }); + }); + } +); + // TODO test that all events are proxied out to server.on test.serial( @@ -159,7 +199,7 @@ test.serial( server.startRun(run1.id); server.addDataclip('result', result); - 
server.waitForResult(run1.id).then((dataclip) => { + server.waitForResult(run1.id).then((dataclip: DataClip) => { t.deepEqual(result, dataclip); done(); }); diff --git a/packages/lightning-mock/test/events/log.test.ts b/packages/lightning-mock/test/events/log.test.ts index 99b326011..f57d020b4 100644 --- a/packages/lightning-mock/test/events/log.test.ts +++ b/packages/lightning-mock/test/events/log.test.ts @@ -3,8 +3,8 @@ import { RUN_LOG } from '../../src/events'; import { join, setup, createRun } from '../util'; -let server; -let client; +let server: any; +let client: any; const port = 5501; @@ -26,7 +26,7 @@ test.serial('acknowledge valid message (run log)', async (t) => { const channel = await join(client, run.id); - channel.push(RUN_LOG, event).receive('ok', (evt) => { + channel.push(RUN_LOG, event).receive('ok', () => { t.pass('event acknowledged'); done(); }); @@ -50,7 +50,7 @@ test.serial('acknowledge valid message (job log)', async (t) => { const channel = await join(client, run.id); - channel.push(RUN_LOG, event).receive('ok', (evt) => { + channel.push(RUN_LOG, event).receive('ok', () => { t.pass('event acknowledged'); done(); }); diff --git a/packages/lightning-mock/test/events/run-complete.test.ts b/packages/lightning-mock/test/events/run-complete.test.ts index 42ef2f878..9f00fb575 100644 --- a/packages/lightning-mock/test/events/run-complete.test.ts +++ b/packages/lightning-mock/test/events/run-complete.test.ts @@ -2,8 +2,8 @@ import test from 'ava'; import { join, setup, createRun } from '../util'; import { RUN_COMPLETE } from '../../src/events'; -let server; -let client; +let server: any; +let client: any; const port = 5501; diff --git a/packages/lightning-mock/test/events/run-start.test.ts b/packages/lightning-mock/test/events/run-start.test.ts index 30781d7e4..51419f0ab 100644 --- a/packages/lightning-mock/test/events/run-start.test.ts +++ b/packages/lightning-mock/test/events/run-start.test.ts @@ -2,8 +2,8 @@ import test from 'ava'; import { join, 
setup, createRun } from '../util'; import { RUN_START } from '../../src/events'; -let server; -let client; +let server: any; +let client: any; const port = 5500; diff --git a/packages/lightning-mock/test/events/step-complete.test.ts b/packages/lightning-mock/test/events/step-complete.test.ts index 5422b0671..a23d48d62 100644 --- a/packages/lightning-mock/test/events/step-complete.test.ts +++ b/packages/lightning-mock/test/events/step-complete.test.ts @@ -3,8 +3,8 @@ import { STEP_COMPLETE } from '../../src/events'; import { join, setup, createRun } from '../util'; -let server; -let client; +let server: any; +let client: any; const port = 5501; @@ -24,7 +24,7 @@ test.serial('acknowledge valid message', async (t) => { const channel = await join(client, run.id); - channel.push(STEP_COMPLETE, event).receive('ok', (evt) => { + channel.push(STEP_COMPLETE, event).receive('ok', () => { t.pass('event acknowledged'); done(); }); @@ -88,7 +88,7 @@ test.serial('error if no output dataclip', async (t) => { }; const channel = await join(client, run.id); - channel.push(STEP_COMPLETE, event).receive('error', (e) => { + channel.push(STEP_COMPLETE, event).receive('error', (e: any) => { t.is(e.toString(), 'no output_dataclip'); done(); }); @@ -108,7 +108,7 @@ test.serial('error if no output dataclip_id', async (t) => { }; const channel = await join(client, run.id); - channel.push(STEP_COMPLETE, event).receive('error', (e) => { + channel.push(STEP_COMPLETE, event).receive('error', (e: any) => { t.is(e.toString(), 'no output_dataclip_id'); done(); }); diff --git a/packages/lightning-mock/test/events/step-start.test.ts b/packages/lightning-mock/test/events/step-start.test.ts index f870ba9b7..3f1924905 100644 --- a/packages/lightning-mock/test/events/step-start.test.ts +++ b/packages/lightning-mock/test/events/step-start.test.ts @@ -2,8 +2,8 @@ import test from 'ava'; import { STEP_START } from '../../src/events'; import { join, setup, createRun } from '../util'; -let server; -let 
client; +let server: any; +let client: any; const port = 5501; diff --git a/packages/lightning-mock/test/server.test.ts b/packages/lightning-mock/test/server.test.ts index ee73bd3b5..5b0e9a3c1 100644 --- a/packages/lightning-mock/test/server.test.ts +++ b/packages/lightning-mock/test/server.test.ts @@ -2,12 +2,12 @@ import test from 'ava'; import { Socket } from 'phoenix'; import { WebSocket } from 'ws'; +import type { LightningPlan } from '@openfn/lexicon/lightning'; -import { createRun, setup } from './util'; -import type { Run } from '../src/types'; +import { setup } from './util'; -let server; -let client; +let server: any; +let client: any; const port = 3333; @@ -22,7 +22,7 @@ test.serial('should setup an run at /POST /run', async (t) => { t.is(Object.keys(state.runs).length, 0); t.is(Object.keys(state.runs).length, 0); - const run: Run = { + const run: LightningPlan = { id: 'a', dataclip_id: 'a', starting_node_id: 'j', @@ -82,10 +82,10 @@ test.serial('reject ws connections without a token', (t) => { }); test.serial('respond to channel join requests', (t) => { - return new Promise(async (done, reject) => { + return new Promise(async (done) => { const channel = client.channel('x', {}); - channel.join().receive('ok', (res) => { + channel.join().receive('ok', (res: any) => { t.is(res, 'ok'); done(); }); diff --git a/packages/lightning-mock/test/socket-server.test.ts b/packages/lightning-mock/test/socket-server.test.ts index d0fc34e0c..c21dd6a9f 100644 --- a/packages/lightning-mock/test/socket-server.test.ts +++ b/packages/lightning-mock/test/socket-server.test.ts @@ -4,9 +4,9 @@ import { Socket } from 'phoenix'; import { WebSocket } from 'ws'; import createSocketServer from '../src/socket-server'; -let socket; -let server; -let messages; +let socket: any; +let server: any; +let messages: any; const wait = (duration = 10) => new Promise((resolve) => { @@ -19,6 +19,7 @@ test.beforeEach( messages = []; // @ts-ignore I don't care about missing server options here 
server = createSocketServer({ + // @ts-ignore state: { events: new EventEmitter(), }, @@ -48,13 +49,13 @@ test.serial('respond to connection join requests', async (t) => { channel .join() - .receive('ok', (resp) => { + .receive('ok', (resp: any) => { t.is(resp, 'ok'); channel.push('hello'); resolve(); }) - .receive('error', (e) => { + .receive('error', (e: any) => { console.log(e); }); }); @@ -64,7 +65,7 @@ test.serial('send a message', async (t) => { return new Promise((resolve) => { const channel = socket.channel('x', {}); - server.listenToChannel('x', (_ws, { payload, event }) => { + server.listenToChannel('x', (_ws: any, { payload, event }: any) => { t.is(event, 'hello'); t.deepEqual(payload, { x: 1 }); diff --git a/packages/lightning-mock/test/tokens.test.ts b/packages/lightning-mock/test/tokens.test.ts new file mode 100644 index 000000000..bf4bdc041 --- /dev/null +++ b/packages/lightning-mock/test/tokens.test.ts @@ -0,0 +1,56 @@ +import test from 'ava'; +import * as jose from 'jose'; +import crypto from 'node:crypto'; + +import { generateRunToken } from '../src/tokens'; +import { generateKeys } from '../src/util'; + +let keys = { public: '.', private: '.' 
}; + +// util function to verify a token against a public key +const verify = async (token: string, publicKey: string) => { + const key = crypto.createPublicKey(publicKey); + + const { payload } = await jose.jwtVerify(token, key); + + return payload; +}; + +test.before(async () => { + keys = await generateKeys(); +}); + +test('generate a placeholder token if no key passed', async (t) => { + const result = await generateRunToken('.'); + t.is(result, 'x.y.z'); +}); + +test('generate a real token if a key is passed', async (t) => { + const result = await generateRunToken('.', keys.private); + t.true(result.length > 100); +}); + +test('token should be verified with the public key', async (t) => { + const result = await generateRunToken('.', keys.private); + + // Basically testing that this doesn't throw + const payload = await verify(result, keys.public); + t.log(payload); + t.truthy(payload); +}); + +test('token claims should include the run id', async (t) => { + const result = await generateRunToken('23', keys.private); + + const { id } = await verify(result, keys.public); + t.is(id, '23'); +}); + +test('token claims should include the issuer: Lightning', async (t) => { + const result = await generateRunToken('23', keys.private); + + const { iss } = await verify(result, keys.public); + t.is(iss, 'Lightning'); +}); + +// TODO - claim should include exp and nbf diff --git a/packages/lightning-mock/test/util.ts b/packages/lightning-mock/test/util.ts index 937ebf369..cabe11b1f 100644 --- a/packages/lightning-mock/test/util.ts +++ b/packages/lightning-mock/test/util.ts @@ -33,7 +33,7 @@ export const join = (client: any, runId: string): Promise => .receive('ok', () => { done(channel); }) - .receive('error', (err) => { + .receive('error', (err: any) => { reject(new Error(err)); }); }); diff --git a/packages/lightning-mock/tsconfig.json b/packages/lightning-mock/tsconfig.json index ba1452256..8906c56a5 100644 --- a/packages/lightning-mock/tsconfig.json +++ 
b/packages/lightning-mock/tsconfig.json @@ -1,6 +1,6 @@ { "extends": "../../tsconfig.common", - "include": ["src/**/*.ts"], + "include": ["src/**/*.ts", "test/**/*.ts"], "compilerOptions": { "module": "ESNext" } diff --git a/packages/logger/CHANGELOG.md b/packages/logger/CHANGELOG.md index 39c72e12a..e6ffd9de3 100644 --- a/packages/logger/CHANGELOG.md +++ b/packages/logger/CHANGELOG.md @@ -1,5 +1,17 @@ # @openfn/logger +## 1.0.0 + +### Major Changes + +- 86dd668: Symbolic 1.0 version release + +### Patch Changes + +- 649ca43: In JSON mode, do not stringify emitted messages. + Better handling of error objects +- 9f6c35d: Support proxy() on the mock logger + ## 0.0.20 ### Patch Changes diff --git a/packages/logger/package.json b/packages/logger/package.json index e0b75aa14..1b202bf12 100644 --- a/packages/logger/package.json +++ b/packages/logger/package.json @@ -1,6 +1,6 @@ { "name": "@openfn/logger", - "version": "0.0.20", + "version": "1.0.0", "description": "Cross-package logging utility", "module": "dist/index.js", "author": "Open Function Group ", diff --git a/packages/logger/src/logger.ts b/packages/logger/src/logger.ts index 74e59e486..05f2d3303 100644 --- a/packages/logger/src/logger.ts +++ b/packages/logger/src/logger.ts @@ -233,6 +233,7 @@ export default function (name?: string, options: LogOptions = {}): Logger { j = j as JSONLog; log(j.name, j.level, ...j.message); + return [j.name, j.level, ...j.message]; }; // print() will log without any metadata/overhead/santization diff --git a/packages/logger/test/logger.test.ts b/packages/logger/test/logger.test.ts index 53e9a7515..3acaf7522 100644 --- a/packages/logger/test/logger.test.ts +++ b/packages/logger/test/logger.test.ts @@ -549,27 +549,25 @@ test('log an error object', (t) => { test('proxy a json argument to string', (t) => { const logger = createLogger('x'); logger.proxy({ name: 'y', level: 'success', message: ['hello'] }); - - const { namespace, level, message } = logger._parse(logger._last); - 
t.is(namespace, 'y'); + const [level, name, _icon, message] = logger._last; + t.is(name, '[y]'); t.is(level, 'success'); - t.deepEqual(message, 'hello'); + t.is(message, 'hello'); }); test('proxy string arguments to string', (t) => { const logger = createLogger('x'); logger.proxy('y', 'success', ['hello']); - const { namespace, level, message } = logger._parse(logger._last); - t.is(namespace, 'y'); + const [level, name, _icon, message] = logger._last; + t.is(name, '[y]'); t.is(level, 'success'); - t.deepEqual(message, 'hello'); + t.is(message, 'hello'); }); test('proxy a json argument to json', (t) => { const logger = createLogger('x', { json: true }); logger.proxy({ name: 'y', level: 'success', message: ['hello'] }); - const { name, level, message } = logger._last; t.is(name, 'y'); t.is(level, 'success'); diff --git a/packages/runtime/CHANGELOG.md b/packages/runtime/CHANGELOG.md index c993ebd55..a3f02e071 100644 --- a/packages/runtime/CHANGELOG.md +++ b/packages/runtime/CHANGELOG.md @@ -1,5 +1,22 @@ # @openfn/runtime +## 1.0.0 + +### Major Changes + +- 86dd668: The 1.0 release of the runtime updates the signatures and language of the runtime to match Lightning. It also includes some housekeeping. + + - Update main run() signature + - Remove strict mode options + - Integrate with lexicon + +### Patch Changes + +- Updated dependencies [649ca43] +- Updated dependencies [9f6c35d] +- Updated dependencies [86dd668] + - @openfn/logger@1.0.0 + ## 0.2.6 ### Patch Changes diff --git a/packages/runtime/README.md b/packages/runtime/README.md index 52723b6cd..7ae05271b 100644 --- a/packages/runtime/README.md +++ b/packages/runtime/README.md @@ -54,29 +54,35 @@ It is expected that long-running runtimes will have some kind of purge func ## Execution Plans -The runtime can accept an Execution Plan (or workflow) as an input. +The runtime can accept an Execution Plan (or workflow) as an input. This defines a graph of jobs (expressions) to run in sequence. 
Each node in the graph is a job, and contains a set of edges which tell the runtime what to execute next. The runtime will return the final state when there is nothing left to execute. -A workflow looks like this: -``` +An execution plan looks like this: + +```js { - start: 'a', - jobs: [{ - id: 'a', - expression: "source or path", - state: { /* default state */ }, - configuration: { /* credentials */ }, - next: { - 'b': true, // edge to another job - 'c': { condition: "state.data.age > 18", // conditional edge to another job - } - adaptor: "common", // it's complicated - }] + workflow: { + jobs: [{ + id: 'a', + expression: "source or path", + state: { /* default state */ }, + configuration: { /* credentials */ }, + next: { + 'b': true, // edge to another job + 'c': { condition: "state.data.age > 18", // conditional edge to another job + } + adaptor: "common", // it's complicated + }] + }, + options: { + start: 'a', + } } ``` + State and start node can be passed to the runtime as inputs. If no start node is provided, the first job in the jobs array will run first. @@ -88,9 +94,10 @@ The runtime itself does not use the `adaptor` key, as it expects jobs to be comp See src/types.ts for a full definition of an execution plan, and `test/runtime.test.ts` for examples. At the time of writing, execution plans have some restrictions: + -* Jobs execute in series (but parallisation can be simulated) -* A job can only have one input node (`a -> z <- b` is not allowed) -* Jobs cannot have circular references (`a -> b -> a` is not allowed) + +- Jobs execute in series (but parallelisation can be simulated) +- A job can only have one input node (`a -> z <- b` is not allowed) +- Jobs cannot have circular references (`a -> b -> a` is not allowed) Support for more complex plans will be introduced later. 
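The default-start rule described above (if no start node is provided, the first job runs first) matches the fallback the compiler in this PR applies via `options.start ?? workflow.steps[0]?.id`. The sketch below illustrates that rule in isolation; the `Step` and `ExecutionPlan` types here are simplified local stand-ins, not the real `@openfn/lexicon` definitions.

```typescript
// Simplified stand-in types, not the real @openfn/lexicon definitions
type Step = {
  id?: string;
  expression?: string;
  next?: string | Record<string, boolean | { condition?: string }>;
};

type ExecutionPlan = {
  workflow: { steps: Step[] };
  options?: { start?: string };
};

// Mirrors the compiler's fallback: use options.start if given,
// otherwise default to the first step in the workflow
const resolveStart = (plan: ExecutionPlan): string | undefined =>
  plan.options?.start ?? plan.workflow.steps[0]?.id;

const plan: ExecutionPlan = {
  workflow: {
    steps: [{ id: 'a', expression: 'source or path', next: 'b' }, { id: 'b' }],
  },
};

console.log(resolveStart(plan)); // 'a' (defaults to the first step)
console.log(resolveStart({ ...plan, options: { start: 'b' } })); // 'b'
```

Note that this sketch uses `workflow.steps`, following the runtime's compiler and the jobs-to-steps rename in this release.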
@@ -149,6 +156,7 @@ When a job calls `import` to import a dependent module, the runtime must resolve It does this through a `linker` function, which takes as arguments a package specifier and `vm` context, and an options object. It will load the module using a dynamic `import` and proxy the interface through a `vm.SyntheticModules`, using the experimental `vm.SourceTextModule` API. Modules can be loaded from: + - An explicit path (pass as a dictionary of name: path strings into the options) - The current working repo (see below) - The current working node_modules (should we somehow disallow this?) diff --git a/packages/runtime/package.json b/packages/runtime/package.json index d176dc9a5..8c00db466 100644 --- a/packages/runtime/package.json +++ b/packages/runtime/package.json @@ -1,6 +1,6 @@ { "name": "@openfn/runtime", - "version": "0.2.6", + "version": "1.0.0", "description": "Job processing runtime.", "type": "module", "exports": { @@ -27,6 +27,7 @@ "license": "ISC", "devDependencies": { "@openfn/language-common": "2.0.0-rc3", + "@openfn/lexicon": "workspace:^", "@types/mock-fs": "^4.13.1", "@types/node": "^18.15.13", "@types/semver": "^7.5.0", diff --git a/packages/runtime/src/execute/compile-plan.ts b/packages/runtime/src/execute/compile-plan.ts index f5c7291c0..ccb692f0c 100644 --- a/packages/runtime/src/execute/compile-plan.ts +++ b/packages/runtime/src/execute/compile-plan.ts @@ -1,17 +1,16 @@ import type { + CompiledEdge, CompiledExecutionPlan, - CompiledJobEdge, - CompiledJobNode, - ExecutionPlan, - JobEdge, + CompiledStep, } from '../types'; import compileFunction from '../modules/compile-function'; import { conditionContext, Context } from './context'; +import { ExecutionPlan, Job, StepEdge, Workflow } from '@openfn/lexicon'; const compileEdges = ( from: string, - edges: string | Record<string, boolean | JobEdge>, + edges: string | Record<string, boolean | StepEdge>, context: Context ) => { if (typeof edges === 'string') { @@ -19,7 +18,7 @@ } const errs = []; - const result = {} as 
Record; + const result = {} as Record; for (const edgeId in edges) { try { const edge = edges[edgeId]; @@ -34,7 +33,7 @@ const compileEdges = ( if (typeof edge.condition === 'string') { (newEdge as any).condition = compileFunction(edge.condition, context); } - result[edgeId] = newEdge as CompiledJobEdge; + result[edgeId] = newEdge as CompiledEdge; } } catch (e: any) { errs.push( @@ -55,8 +54,8 @@ const compileEdges = ( // find the upstream job for a given job // Inefficient but fine for now (note that validation does something similar) // Note that right now we only support one upstream job -const findUpstream = (plan: ExecutionPlan, id: string) => { - for (const job of plan.jobs) { +const findUpstream = (workflow: Workflow, id: string) => { + for (const job of workflow.steps) { if (job.next) if (typeof job.next === 'string') { if (job.next === id) { @@ -69,7 +68,9 @@ const findUpstream = (plan: ExecutionPlan, id: string) => { }; export default (plan: ExecutionPlan) => { + const { workflow, options = {} } = plan; let autoJobId = 0; + const generateJobId = () => `job-${++autoJobId}`; const context = conditionContext(); @@ -89,42 +90,50 @@ export default (plan: ExecutionPlan) => { } }; - // ensure ids before we start - for (const job of plan.jobs) { + for (const job of workflow.steps) { if (!job.id) { job.id = generateJobId(); } } - const newPlan = { - jobs: {}, - start: plan.start, - initialState: plan.initialState, - } as Pick; - - for (const job of plan.jobs) { - const jobId = job.id!; - if (!newPlan.start) { - // Default the start job to the first - newPlan.start = jobId; - } - const newJob: CompiledJobNode = { - id: jobId, - expression: job.expression, // TODO we should compile this here + const newPlan: CompiledExecutionPlan = { + workflow: { + steps: {}, + }, + options: { + ...options, + start: options.start ?? 
workflow.steps[0]?.id!, + }, + }; + + const maybeAssign = (a: any, b: any, keys: Array) => { + keys.forEach((key) => { + if (a.hasOwnProperty(key)) { + b[key] = a[key]; + } + }); + }; + + for (const step of workflow.steps) { + const stepId = step.id!; + const newStep: CompiledStep = { + id: stepId, }; - if (job.state) { - newJob.state = job.state; - } - if (job.configuration) { - newJob.configuration = job.configuration; - } - if (job.next) { + + maybeAssign(step, newStep, [ + 'expression', + 'state', + 'configuration', + 'name', + ]); + + if (step.next) { trapErrors(() => { - newJob.next = compileEdges(jobId, job.next!, context); + newStep.next = compileEdges(stepId, step.next!, context); }); } - newJob.previous = findUpstream(plan, jobId); - newPlan.jobs[jobId] = newJob; + newStep.previous = findUpstream(workflow, stepId); + newPlan.workflow.steps[stepId] = newStep; } if (errs.length) { diff --git a/packages/runtime/src/execute/context.ts b/packages/runtime/src/execute/context.ts index 585567199..afe45cc52 100644 --- a/packages/runtime/src/execute/context.ts +++ b/packages/runtime/src/execute/context.ts @@ -1,5 +1,5 @@ import vm from 'node:vm'; -import type { State } from '../types'; +import type { State } from '@openfn/lexicon'; import type { Options } from '../runtime'; const freezeAll = ( @@ -15,7 +15,10 @@ const freezeAll = ( // Build a safe and helpful execution context // This will be shared by all jobs -export default (state: State, options: Pick) => { +export default ( + state: State, + options: Pick +) => { const logger = options.jobLogger ?? 
console; const globals = options.globals || {}; const context = vm.createContext( diff --git a/packages/runtime/src/execute/expression.ts b/packages/runtime/src/execute/expression.ts index 324f611bb..f2f4bc20a 100644 --- a/packages/runtime/src/execute/expression.ts +++ b/packages/runtime/src/execute/expression.ts @@ -1,8 +1,9 @@ import { printDuration, Logger } from '@openfn/logger'; import stringify from 'fast-safe-stringify'; +import type { Operation, State } from '@openfn/lexicon'; + import loadModule from '../modules/module-loader'; -import { Operation, JobModule, State, ExecutionContext } from '../types'; -import { Options, TIMEOUT } from '../runtime'; +import { Options, DEFAULT_TIMEOUT_MS } from '../runtime'; import buildContext, { Context } from './context'; import defaultExecute from '../util/execute'; import clone from '../util/clone'; @@ -16,25 +17,27 @@ import { assertRuntimeError, assertSecurityKill, } from '../errors'; +import type { JobModule, ExecutionContext } from '../types'; export type ExecutionErrorWrapper = { state: any; error: any; }; +// TODO don't send the whole context because it's a bit confusing - just the options maybe? export default ( ctx: ExecutionContext, expression: string | Operation[], - initialState: State + input: State ) => new Promise(async (resolve, reject) => { let duration = Date.now(); - const { logger, opts = {} } = ctx; + const { logger, plan, opts = {} } = ctx; try { - const timeout = opts.timeout ?? TIMEOUT; + const timeout = plan.options?.timeout ?? 
DEFAULT_TIMEOUT_MS; // Setup an execution context - const context = buildContext(initialState, opts); + const context = buildContext(input, opts); const { operations, execute } = await prepareJob( expression, @@ -61,19 +64,27 @@ export default ( } // Note that any errors will be trapped by the containing Job - const result = await reducer(initialState); + const result = await reducer(input); clearTimeout(tid); logger.debug('Expression complete!'); duration = Date.now() - duration; - const finalState = prepareFinalState(opts, result, logger); + const finalState = prepareFinalState( + result, + logger, + opts.statePropsToRemove + ); // return the final state resolve(finalState); } catch (e: any) { // whatever initial state looks like now, clean it and report it back - const finalState = prepareFinalState(opts, initialState, logger); + const finalState = prepareFinalState( + input, + logger, + opts.statePropsToRemove + ); duration = Date.now() - duration; let finalError; try { @@ -106,7 +117,7 @@ export const wrapOperation = ( // TODO should we warn if an operation does not return state? 
// the trick is saying WHICH operation without source mapping const duration = printDuration(new Date().getTime() - start); - logger.info(`Operation ${name} complete in ${duration}`); + logger.debug(`Operation ${name} complete in ${duration}`); return result; }; }; @@ -135,43 +146,27 @@ const prepareJob = async ( } }; -const assignKeys = ( - source: Record, - target: Record, - keys: string[] -) => { - keys.forEach((k) => { - if (source.hasOwnProperty(k)) { - target[k] = source[k]; - } - }); - return target; -}; - // TODO this is suboptimal and may be slow on large objects // (especially as the result get stringified again downstream) -const prepareFinalState = (opts: Options, state: any, logger: Logger) => { +const prepareFinalState = ( + state: any, + logger: Logger, + statePropsToRemove?: string[] +) => { if (state) { - let statePropsToRemove; - if (opts.hasOwnProperty('statePropsToRemove')) { - ({ statePropsToRemove } = opts); - } else { + if (!statePropsToRemove) { // As a strict default, remove the configuration key // tbh this should happen higher up in the stack but it causes havoc in unit testing statePropsToRemove = ['configuration']; } - if (statePropsToRemove && statePropsToRemove.forEach) { - statePropsToRemove.forEach((prop) => { - if (state.hasOwnProperty(prop)) { - delete state[prop]; - logger.debug(`Removed ${prop} from final state`); - } - }); - } - if (opts.strict) { - state = assignKeys(state, {}, ['data', 'error', 'references']); - } + statePropsToRemove.forEach((prop) => { + if (state.hasOwnProperty(prop)) { + delete state[prop]; + logger.debug(`Removed ${prop} from final state`); + } + }); + const cleanState = stringify(state); return JSON.parse(cleanState); } diff --git a/packages/runtime/src/execute/plan.ts b/packages/runtime/src/execute/plan.ts index b4085d2e3..ee32431fa 100644 --- a/packages/runtime/src/execute/plan.ts +++ b/packages/runtime/src/execute/plan.ts @@ -1,19 +1,22 @@ import type { Logger } from '@openfn/logger'; -import 
executeJob from './job'; +import type { ExecutionPlan, State, Lazy } from '@openfn/lexicon'; + +import executeStep from './step'; import compilePlan from './compile-plan'; -import type { ExecutionPlan } from '../types'; import type { Options } from '../runtime'; import validatePlan from '../util/validate-plan'; import createErrorReporter from '../util/log-error'; import { NOTIFY_STATE_LOAD } from '../events'; +import { CompiledExecutionPlan } from '../types'; const executePlan = async ( plan: ExecutionPlan, + input: Lazy | undefined, opts: Options, logger: Logger ) => { - let compiledPlan; + let compiledPlan: CompiledExecutionPlan; try { validatePlan(plan); compiledPlan = compilePlan(plan); @@ -23,8 +26,11 @@ const executePlan = async ( logger.error('Aborting'); throw e; } + logger.info(`Executing ${plan.workflow.name || plan.id}`); + + const { workflow, options } = compiledPlan; - let queue: string[] = [opts.start || compiledPlan.start]; + let queue: string[] = [options.start]; const ctx = { plan: compiledPlan, @@ -34,35 +40,31 @@ const executePlan = async ( notify: opts.callbacks?.notify ?? 
(() => {}), }; - type State = any; // record of state returned by every job const stateHistory: Record = {}; + // Record of state on leaf nodes (nodes with no next) const leaves: Record = {}; - let { initialState } = compiledPlan; - if (typeof initialState === 'string') { - const id = initialState; + if (typeof input === 'string') { + const id = input; const startTime = Date.now(); logger.debug(`fetching initial state ${id}`); - initialState = await opts.callbacks?.resolveState?.(id); - + input = await opts.callbacks?.resolveState?.(id); const duration = Date.now() - startTime; opts.callbacks?.notify?.(NOTIFY_STATE_LOAD, { duration, jobId: id }); logger.success(`loaded state for ${id} in ${duration}ms`); - - // TODO catch and re-throw } // Right now this executes in series, even if jobs are parallelised while (queue.length) { const next = queue.shift()!; - const job = compiledPlan.jobs[next]; + const job = workflow.steps[next]; - const prevState = stateHistory[job.previous || ''] ?? input; - const result = await executeJob(ctx, job, prevState); + const result = await executeStep(ctx, job, prevState); stateHistory[next] = result.state; if (!result.next.length) { @@ -78,7 +80,8 @@ const executePlan = async ( if (Object.keys(leaves).length > 1) { return leaves; } - // Return a single value + + // Otherwise return a single value return Object.values(leaves)[0]; }; diff --git a/packages/runtime/src/execute/job.ts b/packages/runtime/src/execute/step.ts similarity index 69% rename from packages/runtime/src/execute/job.ts rename to packages/runtime/src/execute/step.ts index b5880a59d..47ee18168 100644 --- a/packages/runtime/src/execute/job.ts +++ b/packages/runtime/src/execute/step.ts @@ -1,16 +1,12 @@ // TODO hmm.
I have a horrible feeling that the callbacks should go here // at least the resolvers -import executeExpression, { ExecutionErrorWrapper } from './expression'; +import type { Job, State, StepId } from '@openfn/lexicon'; +import type { Logger } from '@openfn/logger'; +import executeExpression, { ExecutionErrorWrapper } from './expression'; import clone from '../util/clone'; import assembleState from '../util/assemble-state'; -import type { - CompiledJobNode, - ExecutionContext, - JobNodeID, - State, -} from '../types'; -import { Logger } from '@openfn/logger'; +import type { CompiledStep, ExecutionContext } from '../types'; import { EdgeConditionError } from '../errors'; import { NOTIFY_INIT_COMPLETE, @@ -21,7 +17,7 @@ import { } from '../events'; const loadCredentials = async ( - job: CompiledJobNode, + job: Job, resolver: (id: string) => Promise ) => { if (typeof job.configuration === 'string') { @@ -32,10 +28,7 @@ return job.configuration; }; -const loadState = async ( - job: CompiledJobNode, - resolver: (id: string) => Promise -) => { +const loadState = async (job: Job, resolver: (id: string) => Promise) => { if (typeof job.state === 'string') { // TODO let's log and notify something useful if we're lazy loading // TODO throw a controlled error if there's no resolver @@ -44,7 +37,7 @@ return job.state; }; -const calculateNext = (job: CompiledJobNode, result: any, logger: Logger) => { +const calculateNext = (job: CompiledStep, result: any, logger: Logger) => { const next: string[] = []; if (job.next) { for (const nextJobId in job.next) { @@ -82,50 +75,54 @@ const calculateNext = (job: CompiledJobNode, result: any, logger: Logger) => { // The job handler is responsible for preparing the job // and working out where to go next // it'll resolve credentials and state and notify how long init took -const executeJob = async ( +const executeStep = async ( ctx: ExecutionContext, - job: CompiledJobNode, -
initialState: State = {} -): Promise<{ next: JobNodeID[]; state: any }> => { + step: CompiledStep, + input: State = {} +): Promise<{ next: StepId[]; state: any }> => { const { opts, notify, logger, report } = ctx; const duration = Date.now(); - const jobId = job.id; + const stepId = step.id; + + // The expression SHOULD return state, but COULD return anything + let result: any = input; + let next: string[] = []; + let didError = false; + + if (step.expression) { + const job = step as Job; + const jobId = job.id!; + const jobName = job.name || job.id; - notify(NOTIFY_INIT_START, { jobId }); + // The notify events only apply to jobs - not steps - so names don't need to be changed here + notify(NOTIFY_INIT_START, { jobId }); - // lazy load config and state - const configuration = await loadCredentials( - job, - opts.callbacks?.resolveCredential! // cheat - we need to handle the error case here - ); + // lazy load config and state + const configuration = await loadCredentials( + job, + opts.callbacks?.resolveCredential! // cheat - we need to handle the error case here + ); - const globals = await loadState( - job, - opts.callbacks?.resolveState! // and here - ); + const globals = await loadState( + job, + opts.callbacks?.resolveState! 
// and here + ); - const state = assembleState( - clone(initialState), - configuration, - globals, - opts.strict - ); + const state = assembleState(clone(input), configuration, globals); - notify(NOTIFY_INIT_COMPLETE, { jobId, duration: Date.now() - duration }); + notify(NOTIFY_INIT_COMPLETE, { + jobId, + duration: Date.now() - duration, + }); - // We should by this point have validated the plan, so the job MUST exist + // We should by this point have validated the plan, so the step MUST exist - const timerId = `job-${jobId}`; - logger.timer(timerId); - logger.always('Starting job', jobId); + const timerId = `step-${jobId}`; + logger.timer(timerId); + logger.info(`Starting step ${jobName}`); - // The expression SHOULD return state, but COULD return anything - let result: any = state; - let next: string[] = []; - let didError = false; - if (job.expression) { const startTime = Date.now(); try { // TODO include the upstream job? @@ -140,10 +137,10 @@ const executeJob = async ( result = state; const duration = logger.timer(timerId); - logger.error(`Failed job ${jobId} after ${duration}`); + logger.error(`Failed step ${jobName} after ${duration}`); report(state, jobId, error); - next = calculateNext(job, result, logger); + next = calculateNext(step, result, logger); notify(NOTIFY_JOB_ERROR, { duration: Date.now() - startTime, @@ -165,7 +162,7 @@ const executeJob = async ( if (!didError) { const humanDuration = logger.timer(timerId); - logger.success(`Completed job ${jobId} in ${humanDuration}`); + logger.success(`Completed step ${jobName} in ${humanDuration}`); // Take a memory snapshot // IMPORTANT: this runs _after_ the state object has been serialized @@ -179,10 +176,10 @@ const executeJob = async ( const humanJobMemory = Math.round(jobMemory / 1024 / 1024); const humanSystemMemory = Math.round(systemMemory / 1024 / 1024); logger.debug( - `Final memory usage: [job ${humanJobMemory}mb] [system ${humanSystemMemory}mb]` + `Final memory usage: [step ${humanJobMemory}mb] 
[system ${humanSystemMemory}mb]` ); - next = calculateNext(job, result, logger); + next = calculateNext(step, result, logger); notify(NOTIFY_JOB_COMPLETE, { duration: Date.now() - duration, state: result, @@ -196,16 +193,16 @@ const executeJob = async ( } } else { // calculate next for trigger nodes - next = calculateNext(job, result, logger); + next = calculateNext(step, result, logger); } if (next.length && !didError && !result) { logger.warn( - `WARNING: job ${jobId} did not return a state object. This may cause downstream jobs to fail.` + `WARNING: step ${stepId} did not return a state object. This may cause downstream jobs to fail.` ); } return { next, state: result }; }; -export default executeJob; +export default executeStep; diff --git a/packages/runtime/src/modules/module-loader.ts b/packages/runtime/src/modules/module-loader.ts index 5e87653bb..fa239b319 100644 --- a/packages/runtime/src/modules/module-loader.ts +++ b/packages/runtime/src/modules/module-loader.ts @@ -4,7 +4,7 @@ import vm, { Context } from './experimental-vm'; import mainLinker, { Linker, LinkerOptions } from './linker'; -import type { Operation } from '../types'; +import type { Operation } from '@openfn/lexicon'; import type { Logger } from '@openfn/logger'; type Options = LinkerOptions & { diff --git a/packages/runtime/src/runtime.ts b/packages/runtime/src/runtime.ts index 6d91ea408..4c8e3a892 100644 --- a/packages/runtime/src/runtime.ts +++ b/packages/runtime/src/runtime.ts @@ -1,27 +1,16 @@ import { createMockLogger, Logger } from '@openfn/logger'; - -import type { - Operation, - ExecutionPlan, - State, - JobNodeID, - ExecutionCallbacks, -} from './types'; +import type { ExecutionPlan, State } from '@openfn/lexicon'; +import type { ExecutionCallbacks } from './types'; import type { LinkerOptions } from './modules/linker'; import executePlan from './execute/plan'; -import clone from './util/clone'; -import parseRegex from './util/regex'; +import { defaultState, parseRegex, clone } from 
'./util/index'; -export const TIMEOUT = 5 * 60 * 1000; // 5 minutes +export const DEFAULT_TIMEOUT_MS = 5 * 60 * 1000; // 5 minutes export type Options = { - start?: JobNodeID; logger?: Logger; jobLogger?: Logger; - timeout?: number; // this is timeout used per job, not per workflow - strict?: boolean; // Be strict about handling of state returned from jobs - // Treat state as immutable (likely to break in legacy jobs) immutableState?: boolean; @@ -35,9 +24,9 @@ export type Options = { callbacks?: ExecutionCallbacks; // inject globals into the environment + // TODO leaving this here for now, but maybe its actually on the xplan? globals?: any; - // all listed props will be removed from the state object at the end of a job statePropsToRemove?: string[]; }; @@ -47,27 +36,46 @@ type RawOptions = Omit & { }; }; -const defaultState = { data: {}, configuration: {} }; - // Log nothing by default const defaultLogger = createMockLogger(); -// TODO doesn't really make sense to pass in a state object to an xplan, -// so maybe state becomes an option in the opts object +const loadPlanFromString = (expression: string, logger: Logger) => { + const plan: ExecutionPlan = { + workflow: { + steps: [ + { + expression, + }, + ], + }, + options: {}, + }; + + logger.debug('Generated execution plan for incoming expression'); + logger.debug(plan); + + return plan; +}; + const run = ( - expressionOrXPlan: string | Operation[] | ExecutionPlan, - state?: State, + xplan: Partial | string, + input?: State, opts: RawOptions = {} ) => { const logger = opts.logger || defaultLogger; - // Strict state handling by default - if (!opts.hasOwnProperty('strict')) { - opts.strict = true; + if (typeof xplan === 'string') { + xplan = loadPlanFromString(xplan, logger); } - if (!opts.hasOwnProperty('statePropsToRemove')) { - opts.statePropsToRemove = ['configuration']; + + if (!xplan.options) { + xplan.options = {}; } + + if (!input) { + input = clone(defaultState); + } + if (opts.linker?.whitelist) { 
opts.linker.whitelist = opts.linker.whitelist.map((w) => { if (typeof w === 'string') { @@ -76,36 +84,7 @@ const run = ( return w; }); } - - // TODO the plan doesn't have an id, should it be given one? - // Ditto the jobs? - let plan: ExecutionPlan; - if ( - typeof expressionOrXPlan == 'string' || - !expressionOrXPlan.hasOwnProperty('jobs') - ) { - // Build an execution plan for an incoming expression - plan = { - jobs: [ - { - expression: expressionOrXPlan, - }, - ], - } as ExecutionPlan; - logger.debug('Generated execution plan for incoming expression'); - // TODO how do we sanitise state.config? - logger.debug(plan); - } else { - plan = expressionOrXPlan as ExecutionPlan; - } - - if (state) { - plan.initialState = clone(state); - } else if (!plan.initialState) { - plan.initialState = defaultState; - } - - return executePlan(plan, opts as Options, logger); + return executePlan(xplan as ExecutionPlan, input, opts as Options, logger); }; export default run; diff --git a/packages/runtime/src/types.ts b/packages/runtime/src/types.ts index 296907941..a869cc73a 100644 --- a/packages/runtime/src/types.ts +++ b/packages/runtime/src/types.ts @@ -1,4 +1,4 @@ -// TMP just thinking through things +import { Operation, StepId, WorkflowOptions, Step } from '@openfn/lexicon'; import { Logger } from '@openfn/logger'; import { Options } from './runtime'; @@ -12,100 +12,29 @@ import { NOTIFY_STATE_LOAD, } from './events'; -// I dont think this is useufl? We can just use error.name of the error object -export type ErrorTypes = - | 'AdaptorNotFound' // probably a CLI validation thing - | 'PackageNotFound' // Linker failed to load a dependency - | 'ExpressionTimeout' // An expression (job) failed to return before the timeout - | 'AdaptorException' // Bubbled out of adaptor code - | 'RuntimeException'; // Caused by an exception in a job. JobException? What about "expected" errors from adaptors? 
- -export type ErrorReport = { - type: string; // The name/type of error, ie Error, TypeError - message: string; // simple human readable message - jobId: JobNodeID; // ID of the associated job - error: Error; // the original underlying error object - - code?: string; // The error code, if any (found on node errors) - stack?: string; // not sure this is useful? - data?: any; // General store for related error information -}; - -export declare interface State { - configuration?: C; - state?: S; - references?: Array; - index?: number; - - // New error capture object - // Synonyms: exceptions, problems, issues, err, failures - errors?: Record; - - // Legacy error property from old platform - // Adaptors may use this? - error?: any[]; - - // Note that other properties written to state may be lost between jobs - [other: string]: any; -} - -export declare interface Operation | State> { - (state: State): T; -} - -export type ExecutionPlan = { - id?: string; // UUID for this plan - jobs: JobNode[]; - start?: JobNodeID; - initialState?: State | string; -}; - -export type JobNode = { - id?: JobNodeID; - - // The runtime itself will ignore the adaptor flag - // The adaptor import should be compiled in by the compiler, and dependency managed by the runtime manager - adaptor?: string; - - expression?: string | Operation[]; // the code we actually want to execute. Can be a path. 
- - configuration?: object | string; // credential object - - // TODO strings aren't actually suppored here yet - state?: Omit | string; // default state (globals) - - next?: string | Record; - previous?: JobNodeID; -}; - -export type JobEdge = - | boolean - | string - | { - condition?: string; // Javascript expression (function body, not function) - label?: string; - disabled?: boolean; - }; - -export type JobNodeID = string; - -export type CompiledJobEdge = +export type CompiledEdge = | boolean | { condition?: Function; disabled?: boolean; }; -export type CompiledJobNode = Omit & { - id: JobNodeID; - next?: Record; +export type CompiledStep = Omit & { + id: StepId; + next?: Record; + + [other: string]: any; }; +export type Lazy = string | T; + export type CompiledExecutionPlan = { - id?: string; - start: JobNodeID; - jobs: Record; - initialState?: State | string; + workflow: { + steps: Record; + }; + options: WorkflowOptions & { + start: StepId; + }; }; export type JobModule = { @@ -119,7 +48,6 @@ type NotifyHandler = ( payload: NotifyEventsLookup[typeof event] ) => void; -// TODO difficulty: this is not the same as a vm execution context export type ExecutionContext = { plan: CompiledExecutionPlan; logger: Logger; @@ -183,7 +111,7 @@ export type NotifyEventsLookup = { }; export type ExecutionCallbacks = { - notify: NotifyHandler; + notify?: NotifyHandler; resolveState?: (stateId: string) => Promise; resolveCredential?: (credentialId: string) => Promise; }; diff --git a/packages/runtime/src/util/assemble-state.ts b/packages/runtime/src/util/assemble-state.ts index 2f1f69204..84f5fc12e 100644 --- a/packages/runtime/src/util/assemble-state.ts +++ b/packages/runtime/src/util/assemble-state.ts @@ -13,15 +13,12 @@ const assembleData = (initialData: any, defaultData = {}) => { const assembleState = ( initialState: any = {}, // previous or initial state configuration = {}, - defaultState: any = {}, // This is default state provided by the job - strictState: boolean = 
true + defaultState: any = {} // This is default state provided by the job ) => { - const obj = strictState - ? {} - : { - ...defaultState, - ...initialState, - }; + const obj = { + ...defaultState, + ...initialState, + }; if (initialState.references) { obj.references = initialState.references; diff --git a/packages/runtime/src/util/clone.ts b/packages/runtime/src/util/clone.ts index d81320f4a..408f108a6 100644 --- a/packages/runtime/src/util/clone.ts +++ b/packages/runtime/src/util/clone.ts @@ -1,4 +1,4 @@ -import type { State } from '../types'; +import type { State } from '@openfn/lexicon'; // TODO I'm in the market for the best solution here - immer? deep-clone? // What should we do if functions are in the state? diff --git a/packages/runtime/src/util/default-state.ts b/packages/runtime/src/util/default-state.ts new file mode 100644 index 000000000..4d4dc5450 --- /dev/null +++ b/packages/runtime/src/util/default-state.ts @@ -0,0 +1 @@ +export default { data: {}, configuration: {} }; diff --git a/packages/runtime/src/util/execute.ts b/packages/runtime/src/util/execute.ts index bd2d6aaa5..7c5f03439 100644 --- a/packages/runtime/src/util/execute.ts +++ b/packages/runtime/src/util/execute.ts @@ -1,4 +1,4 @@ -import type { Operation, State } from '../types'; +import type { Operation, State } from '@openfn/lexicon'; // Standard execute factory export default (...operations: Operation[]): Operation => { diff --git a/packages/runtime/src/util/index.ts b/packages/runtime/src/util/index.ts new file mode 100644 index 000000000..1ad364095 --- /dev/null +++ b/packages/runtime/src/util/index.ts @@ -0,0 +1,19 @@ +import assembleState from './assemble-state'; +import clone from './clone'; +import defaultState from './default-state'; +import exec from './exec'; +import execute from './execute'; +import logError from './log-error'; +import parseRegex from './regex'; +import validatePlan from './validate-plan'; + +export { + assembleState, + clone, + defaultState, + exec, + 
execute, + logError, + parseRegex, + validatePlan, +}; diff --git a/packages/runtime/src/util/log-error.ts b/packages/runtime/src/util/log-error.ts index af13aec87..7c23e5021 100644 --- a/packages/runtime/src/util/log-error.ts +++ b/packages/runtime/src/util/log-error.ts @@ -1,9 +1,9 @@ import { Logger } from '@openfn/logger'; -import { ErrorReport, JobNodeID, State } from '../types'; +import type { State, ErrorReport, StepId } from '@openfn/lexicon'; export type ErrorReporter = ( state: State, - jobId: JobNodeID, + stepId: StepId, error: NodeJS.ErrnoException & { severity?: string; handled?: boolean; @@ -16,10 +16,10 @@ export type ErrorReporter = ( // Because we're taking closer control of errors // we should be able to report more simply const createErrorReporter = (logger: Logger): ErrorReporter => { - return (state, jobId, error) => { + return (state, stepId, error) => { const report: ErrorReport = { type: error.subtype || error.type || error.name, - jobId, + stepId, message: error.message, error: error, }; @@ -45,13 +45,13 @@ const createErrorReporter = (logger: Logger): ErrorReporter => { } if (error.severity === 'fail') { - logger.error(`Check state.errors.${jobId} for details.`); + logger.error(`Check state.errors.${stepId} for details.`); if (!state.errors) { state.errors = {}; } - state.errors[jobId] = report; + state.errors[stepId] = report; } return report as ErrorReport; diff --git a/packages/runtime/src/util/validate-plan.ts b/packages/runtime/src/util/validate-plan.ts index b1b058105..2dd86628d 100644 --- a/packages/runtime/src/util/validate-plan.ts +++ b/packages/runtime/src/util/validate-plan.ts @@ -1,5 +1,5 @@ +import { ExecutionPlan, Step } from '@openfn/lexicon'; import { ValidationError } from '../errors'; -import { ExecutionPlan, JobNode } from '../types'; type ModelNode = { up: Record; @@ -20,16 +20,16 @@ export default (plan: ExecutionPlan) => { return true; }; -export const buildModel = (plan: ExecutionPlan) => { +export const buildModel = 
({ workflow }: ExecutionPlan) => { const model: Model = {}; - const jobIdx = plan.jobs.reduce((obj, item) => { + const jobIdx = workflow.steps.reduce((obj, item) => { if (item.id) { obj[item.id] = item; } // TODO warn if there's no id? It's usually fine (until it isn't) return obj; - }, {} as Record); + }, {} as Record); const ensureModel = (jobId: string) => { if (!model[jobId]) { @@ -48,7 +48,7 @@ export const buildModel = (plan: ExecutionPlan) => { } }; - for (const job of plan.jobs) { + for (const job of workflow.steps) { let node = job.id ? ensureModel(job.id) : { up: {}, down: {} }; if (typeof job.next === 'string') { validateJob(job.next); @@ -71,9 +71,10 @@ export const buildModel = (plan: ExecutionPlan) => { }; const assertStart = (plan: ExecutionPlan) => { - if (typeof plan.start === 'string') { - if (!plan.jobs.find(({ id }) => id == plan.start)) { - throw new ValidationError(`Could not find start job: ${plan.start}`); + const { start } = plan.options; + if (typeof start === 'string') { + if (!plan.workflow.steps.find(({ id }) => id == start)) { + throw new ValidationError(`Could not find start job: ${start}`); } } }; diff --git a/packages/runtime/test/context.test.ts b/packages/runtime/test/context.test.ts index 11909a604..4583837cb 100644 --- a/packages/runtime/test/context.test.ts +++ b/packages/runtime/test/context.test.ts @@ -2,21 +2,23 @@ import test from 'ava'; import run from '../src/runtime'; import { createMockLogger } from '@openfn/logger'; +import { State } from '@openfn/lexicon'; const createState = (data = {}) => ({ data, configuration: {} }); test('makes parseInt available inside the job', async (t) => { - const job = ` + const expression = ` export default [ (s) => { s.data.count = parseInt(s.data.count); return s; } ];`; + const input = createState({ count: '22' }); - const result = await run(job, createState({ count: '22' })); + const result = await run(expression, input); t.deepEqual(result.data, { count: 22 }); }); test('makes Set 
available inside the job', async (t) => { - const job = ` + const expression = ` export default [ (s) => { new Set(); // should not throw @@ -24,13 +26,15 @@ test('makes Set available inside the job', async (t) => { } ];`; - const result = await run(job, createState({ count: '33' })); + const state = createState({ count: '33' }); + + const result = await run(expression, state); t.deepEqual(result.data, { count: '33' }); }); test("doesn't allow process inside the job", async (t) => { const logger = createMockLogger(undefined, { level: 'default' }); - const job = ` + const expression = ` export default [ (s) => { process.exit() @@ -38,9 +42,7 @@ test("doesn't allow process inside the job", async (t) => { } ];`; - const state = createState(); - - await t.throwsAsync(() => run(job, state, { logger }), { + await t.throwsAsync(() => run(expression, {}, { logger }), { name: 'RuntimeCrash', message: 'ReferenceError: process is not defined', }); @@ -48,17 +50,13 @@ test("doesn't allow process inside the job", async (t) => { test("doesn't allow eval inside a job", async (t) => { const logger = createMockLogger(undefined, { level: 'default' }); - const job = ` + const expression = ` export default [ (state) => eval('ok') // should throw ];`; - const state = createState(); - await t.throwsAsync(() => run(job, state, { logger }), { + await t.throwsAsync(() => run(expression, {}, { logger }), { name: 'SecurityError', message: /Illegal eval statement detected/, }); }); - -// TODO exhaustive test of globals? 
-// TODO ensure an imported module can't access eval/process diff --git a/packages/runtime/test/errors.test.ts b/packages/runtime/test/errors.test.ts index 90a9d9d16..a18f3ba5a 100644 --- a/packages/runtime/test/errors.test.ts +++ b/packages/runtime/test/errors.test.ts @@ -1,16 +1,27 @@ import test from 'ava'; import path from 'node:path'; +import type { WorkflowOptions } from '@openfn/lexicon'; + import run from '../src/runtime'; -// This is irrelevant now as state and credentials are preloaded -test.todo('lazy state & credential loading'); +const createPlan = (expression: string, options: WorkflowOptions = {}) => ({ + workflow: { + steps: [ + { + expression, + }, + ], + }, + options, +}); test('crash on timeout', async (t) => { const expression = 'export default [(s) => new Promise((resolve) => {})]'; + const plan = createPlan(expression, { timeout: 1 }); let error; try { - await run(expression, {}, { timeout: 1 }); + await run(plan); } catch (e) { error = e; } @@ -72,24 +83,27 @@ test('crash on eval with SecurityError', async (t) => { }); test('crash on edge condition error with EdgeConditionError', async (t) => { - const workflow = { - jobs: [ - { - id: 'a', - next: { - b: { - // Will throw a reference error - condition: 'wibble', + const plan = { + workflow: { + steps: [ + { + id: 'a', + expression: '.', + next: { + b: { + // Will throw a reference error + condition: 'wibble', + }, }, }, - }, - { id: 'b' }, - ], + { id: 'b', expression: '.' 
}, + ], + }, }; let error; try { - await run(workflow); + await run(plan); } catch (e) { error = e; } diff --git a/packages/runtime/test/execute/compile-plan.test.ts b/packages/runtime/test/execute/compile-plan.test.ts index 23ef3518e..ec99bd574 100644 --- a/packages/runtime/test/execute/compile-plan.test.ts +++ b/packages/runtime/test/execute/compile-plan.test.ts @@ -1,193 +1,255 @@ import test from 'ava'; -import { ExecutionPlan, JobEdge } from '../../src'; +import { ExecutionPlan, StepEdge } from '@openfn/lexicon'; import compilePlan from '../../src/execute/compile-plan'; const testPlan: ExecutionPlan = { - start: 'a', - jobs: [ - { id: 'a', expression: 'x', next: { b: true } }, - { id: 'b', expression: 'y' }, - ], + workflow: { + steps: [ + { id: 'a', expression: 'x', name: 'a', next: { b: true } }, + { id: 'b', expression: 'y' }, + ], + }, + options: { + start: 'a', + }, }; -const planWithEdge = (edge: JobEdge) => - ({ - ...testPlan, - jobs: [{ id: 'a', next: { b: edge } }], - } as ExecutionPlan); +const planWithEdge = (edge: Partial) => ({ + workflow: { + steps: [ + { + id: 'a', + expression: 'x', + next: { + b: edge, + }, + }, + { id: 'b', expression: 'y' }, + ], + }, + options: { + start: 'a', + }, +}); -test('should preserve initial state as an object', (t) => { - const state = { x: 123 }; +test('should preserve the start option', (t) => { const compiledPlan = compilePlan({ id: 'a', - initialState: state, - jobs: [], + workflow: { + steps: [{ id: 'a', expression: 'a' }], + }, + options: { + start: 'a', + }, }); - t.deepEqual(state, compiledPlan.initialState); + + t.is(compiledPlan.options.start, 'a'); }); -test('should preserve initial state a string', (t) => { +test('should preserve arbitrary options', (t) => { const compiledPlan = compilePlan({ id: 'a', - initialState: 'abc', - jobs: [], + workflow: { + steps: [{ id: 'a', expression: 'a' }], + }, + options: { + // @ts-ignore + a: 1, + z: 2, + '-': 3, + }, + }); + + t.deepEqual(compiledPlan.options, { + 
a: 1, + z: 2, + '-': 3, + start: 'a', }); - t.is(compiledPlan.initialState, 'abc'); }); -test('should convert jobs to an object', (t) => { - const compiledPlan = compilePlan(testPlan); - t.truthy(compiledPlan.jobs.a); - t.is(compiledPlan.jobs.a.expression, 'x'); +test('should convert steps to an object', (t) => { + const { workflow } = compilePlan(testPlan); + t.deepEqual(workflow.steps.a, { + id: 'a', + name: 'a', + expression: 'x', + next: { b: true }, + previous: undefined, + }); - t.truthy(compiledPlan.jobs.b); - t.is(compiledPlan.jobs.b.expression, 'y'); + t.truthy(workflow.steps.b); + t.is(workflow.steps.b.expression, 'y'); }); -test('should set previous job with 2 jobs', (t) => { +test('should set previous job with 2 steps', (t) => { const plan: ExecutionPlan = { - start: 'a', - jobs: [ - { id: 'a', expression: 'x', next: { b: true } }, - { id: 'b', expression: 'y' }, - ], + workflow: { + steps: [ + { id: 'a', expression: 'x', next: { b: true } }, + { id: 'b', expression: 'y' }, + ], + }, + options: {}, }; - const compiledPlan = compilePlan(plan); - t.is(compiledPlan.jobs.a.previous, undefined); - t.is(compiledPlan.jobs.b.previous, 'a'); + const { workflow } = compilePlan(plan); + t.is(workflow.steps.a.previous, undefined); + t.is(workflow.steps.b.previous, 'a'); }); -test('should set previous job with 2 jobs and shorthand syntax', (t) => { +test('should set previous job with 2 steps and shorthand syntax', (t) => { const plan: ExecutionPlan = { - start: 'a', - jobs: [ - { id: 'a', expression: 'x', next: 'b' }, - { id: 'b', expression: 'y' }, - ], + workflow: { + steps: [ + { id: 'a', expression: 'x', next: 'b' }, + { id: 'b', expression: 'y' }, + ], + }, + options: {}, }; - const compiledPlan = compilePlan(plan); - t.is(compiledPlan.jobs.a.previous, undefined); - t.is(compiledPlan.jobs.b.previous, 'a'); + const { workflow } = compilePlan(plan); + t.is(workflow.steps.a.previous, undefined); + t.is(workflow.steps.b.previous, 'a'); }); -test('should set 
previous job with 2 jobs and no start', (t) => { +test('should set previous job with 2 steps and no start', (t) => { const plan: ExecutionPlan = { - jobs: [ - { id: 'a', expression: 'x', next: { b: true } }, - { id: 'b', expression: 'y' }, - ], + workflow: { + steps: [ + { id: 'a', expression: 'x', next: { b: true } }, + { id: 'b', expression: 'y' }, + ], + }, + options: {}, }; - const compiledPlan = compilePlan(plan); - t.is(compiledPlan.jobs.a.previous, undefined); - t.is(compiledPlan.jobs.b.previous, 'a'); + const { workflow } = compilePlan(plan); + t.is(workflow.steps.a.previous, undefined); + t.is(workflow.steps.b.previous, 'a'); }); -test('should set previous job with 3 jobs', (t) => { +test('should set previous job with 3 steps', (t) => { const plan: ExecutionPlan = { - start: 'a', - jobs: [ - { id: 'a', expression: 'x', next: { b: true } }, - { id: 'b', expression: 'y', next: { c: true } }, - { id: 'c', expression: 'z' }, - ], + workflow: { + steps: [ + { id: 'a', expression: 'x', next: { b: true } }, + { id: 'b', expression: 'y', next: { c: true } }, + { id: 'c', expression: 'z' }, + ], + }, + options: {}, }; - const compiledPlan = compilePlan(plan); - t.is(compiledPlan.jobs.a.previous, undefined); - t.is(compiledPlan.jobs.b.previous, 'a'); - t.is(compiledPlan.jobs.c.previous, 'b'); + const { workflow } = compilePlan(plan); + t.is(workflow.steps.a.previous, undefined); + t.is(workflow.steps.b.previous, 'a'); + t.is(workflow.steps.c.previous, 'b'); }); -test('should set previous job with 3 jobs and shorthand syntax', (t) => { +test('should set previous job with 3 steps and shorthand syntax', (t) => { const plan: ExecutionPlan = { - start: 'a', - jobs: [ - { id: 'c', expression: 'z' }, - { id: 'a', expression: 'x', next: 'b' }, - { id: 'b', expression: 'y', next: 'c' }, - ], + workflow: { + steps: [ + { id: 'c', expression: 'z' }, + { id: 'a', expression: 'x', next: 'b' }, + { id: 'b', expression: 'y', next: 'c' }, + ], + }, + options: {}, }; - const 
compiledPlan = compilePlan(plan); - t.is(compiledPlan.jobs.a.previous, undefined); - t.is(compiledPlan.jobs.b.previous, 'a'); - t.is(compiledPlan.jobs.c.previous, 'b'); + const { workflow } = compilePlan(plan); + t.is(workflow.steps.a.previous, undefined); + t.is(workflow.steps.b.previous, 'a'); + t.is(workflow.steps.c.previous, 'b'); }); -test('should auto generate ids for jobs', (t) => { +test('should auto generate ids for steps', (t) => { const plan = { - start: 'a', - jobs: [{ expression: 'x' }, { expression: 'y' }], + workflow: { + steps: [{ expression: 'x' }, { expression: 'y' }], + }, + options: {}, }; - const compiledPlan = compilePlan(plan); - const ids = Object.keys(compiledPlan.jobs); + const { workflow } = compilePlan(plan); + const ids = Object.keys(workflow.steps); t.truthy(ids[0]); t.truthy(ids[1]); t.assert(ids[0] !== ids[1]); }); -test('should convert jobs to an object with auto ids', (t) => { +test('should convert steps to an object with auto ids', (t) => { const plan: ExecutionPlan = { - jobs: [ - // silly use case but it doens't matter - { expression: 'x' }, - { expression: 'y' }, - ], + workflow: { + steps: [{ expression: 'x' }, { expression: 'y' }], + }, + options: {}, }; - const compiledPlan = compilePlan(plan); - t.deepEqual(Object.keys(compiledPlan.jobs), ['job-1', 'job-2']); + const { workflow } = compilePlan(plan); + t.deepEqual(Object.keys(workflow.steps), ['job-1', 'job-2']); }); test('should reset job ids for each call', (t) => { const plan: ExecutionPlan = { - jobs: [{ expression: 'x' }], + workflow: { + steps: [{ expression: 'x' }], + }, + options: {}, }; const first = compilePlan(plan); - t.is(first.jobs['job-1'].expression, 'x'); + t.is(first.workflow.steps['job-1'].expression, 'x'); const second = compilePlan(plan); - t.is(second.jobs['job-1'].expression, 'x'); + t.is(second.workflow.steps['job-1'].expression, 'x'); }); -test('should set the start to jobs[0]', (t) => { +test('should set the start to steps[0]', (t) => { const plan: 
ExecutionPlan = { - jobs: [ - { id: 'a', expression: 'x' }, - { id: 'b', expression: 'y' }, - { id: 'c', expression: 'z' }, - ], + workflow: { + steps: [ + { id: 'a', expression: 'x' }, + { id: 'b', expression: 'y' }, + { id: 'c', expression: 'z' }, + ], + }, + options: {}, }; - const compiledPlan = compilePlan(plan); - t.is(compiledPlan.start, 'a'); + const { options } = compilePlan(plan); + t.is(options.start, 'a'); }); test('should not override the start', (t) => { const plan: ExecutionPlan = { - start: 'c', - jobs: [ - { id: 'a', expression: 'x' }, - { id: 'b', expression: 'y' }, - { id: 'c', expression: 'z' }, - ], + options: { + start: 'c', + }, + workflow: { + steps: [ + { id: 'a', expression: 'x' }, + { id: 'b', expression: 'y' }, + { id: 'c', expression: 'z' }, + ], + }, }; - const compiledPlan = compilePlan(plan); - t.is(compiledPlan.start, 'c'); + const { options } = compilePlan(plan); + t.is(options.start, 'c'); }); test('should compile a shorthand edge', (t) => { const plan: ExecutionPlan = { - start: 'a', - jobs: [ - { - id: 'a', - expression: 'x', - next: 'y', - }, - ], + workflow: { + steps: [ + { + id: 'a', + expression: 'x', + next: 'y', + }, + ], + }, + options: {}, }; - const compiledPlan = compilePlan(plan); + const { workflow } = compilePlan(plan); - t.deepEqual(compiledPlan.jobs.a.next!, { + t.deepEqual(workflow.steps.a.next!, { y: true, }); }); @@ -198,69 +260,69 @@ test('should not recompile a functional edge', (t) => { condition: () => true, }); - const compiledPlan = compilePlan(plan); + const { workflow } = compilePlan(plan); // @ts-ignore - const result = compiledPlan.jobs.a.next!.b.condition({}); + const result = workflow.steps.a.next!.b.condition({}); t.true(result); }); test('should compile a truthy edge', (t) => { const plan = planWithEdge({ condition: 'true' }); - const compiledPlan = compilePlan(plan); + const { workflow } = compilePlan(plan); // @ts-ignore - const result = compiledPlan.jobs.a.next!.b.condition({}); + const result 
= workflow.steps.a.next!.b.condition({}); t.true(result); }); test('should compile a string edge', (t) => { const plan = planWithEdge('true'); - const compiledPlan = compilePlan(plan); + const { workflow } = compilePlan(plan); // @ts-ignore - const result = compiledPlan.jobs.a.next!.b.condition(); + const result = workflow.steps.a.next!.b.condition(); t.true(result); }); test('should compile a falsy edge', (t) => { const plan = planWithEdge({ condition: 'false' }); - const compiledPlan = compilePlan(plan); + const { workflow } = compilePlan(plan); // @ts-ignore - const result = compiledPlan.jobs.a.next!.b.condition({}); + const result = workflow.steps.a.next!.b.condition({}); t.false(result); }); test('should compile an edge with arithmetic', (t) => { const plan = planWithEdge({ condition: '1 + 1' }); - const compiledPlan = compilePlan(plan); + const { workflow } = compilePlan(plan); // @ts-ignore - const result = compiledPlan.jobs.a.next!.b.condition({}); + const result = workflow.steps.a.next!.b.condition({}); t.is(result, 2); }); test('should compile an edge which uses state', (t) => { const plan = planWithEdge({ condition: '!state.hasOwnProperty("error")' }); - const compiledPlan = compilePlan(plan); + const { workflow } = compilePlan(plan); // @ts-ignore - const result = compiledPlan.jobs.a.next!.b.condition({}); + const result = workflow.steps.a.next!.b.condition({}); t.true(result); }); test('condition cannot require', (t) => { const plan = planWithEdge({ condition: 'require("axios")' }); - const compiledPlan = compilePlan(plan); + const { workflow } = compilePlan(plan); // @ts-ignore - t.throws(() => compiledPlan.jobs.a.next!.b.condition({ data: {} }), { + t.throws(() => workflow.steps.a.next!.b.condition({ data: {} }), { message: 'require is not defined', }); }); @@ -268,10 +330,10 @@ test('condition cannot require', (t) => { test('condition cannot access process', (t) => { const plan = planWithEdge({ condition: 'process.exit()' }); - const compiledPlan = 
compilePlan(plan); + const { workflow } = compilePlan(plan); // @ts-ignore - t.throws(() => compiledPlan.jobs.a.next!.b.condition({ data: {} }), { + t.throws(() => workflow.steps.a.next!.b.condition({ data: {} }), { message: 'process is not defined', }); }); @@ -279,10 +341,10 @@ test('condition cannot access process', (t) => { test('condition cannot access process #2', (t) => { const plan = planWithEdge({ condition: '(() => process.exit())()' }); - const compiledPlan = compilePlan(plan); + const { workflow } = compilePlan(plan); // @ts-ignore - t.throws(() => compiledPlan.jobs.a.next!.b.condition({ data: {} }), { + t.throws(() => workflow.steps.a.next!.b.condition({ data: {} }), { message: 'process is not defined', }); }); @@ -290,10 +352,10 @@ test('condition cannot access process #2', (t) => { test('condition cannot eval', (t) => { const plan = planWithEdge({ condition: 'eval("process.exit()")' }); - const compiledPlan = compilePlan(plan); + const { workflow } = compilePlan(plan); // @ts-ignore - t.throws(() => compiledPlan.jobs.a.next!.b.condition({ data: {} }), { + t.throws(() => workflow.steps.a.next!.b.condition({ data: {} }), { message: 'Code generation from strings disallowed for this context', }); }); @@ -310,25 +372,28 @@ test('throw for a syntax error on a job edge', (t) => { test('throw for multiple errors', (t) => { const plan = { - jobs: [ - { - id: 'a', - expression: 'x', - next: { - b: { - condition: '@£^!!', - }, - c: { - condition: '@£^!!', + workflow: { + steps: [ + { + id: 'a', + expression: 'x', + next: { + b: { + condition: '@£^!!', + }, + c: { + condition: '@£^!!', + }, }, }, - }, - ], + ], + }, + options: {}, }; try { compilePlan(plan); - } catch (e) { + } catch (e: any) { // the message will have one error per line const { message } = e; const lines = message.split('\n\n'); diff --git a/packages/runtime/test/execute/expression.test.ts b/packages/runtime/test/execute/expression.test.ts index 2258e43c2..5b14567e4 100644 ---
a/packages/runtime/test/execute/expression.test.ts +++ b/packages/runtime/test/execute/expression.test.ts @@ -1,8 +1,10 @@ import test from 'ava'; import { fn } from '@openfn/language-common'; import { createMockLogger } from '@openfn/logger'; +import type { Operation, State } from '@openfn/lexicon'; + import execute from '../../src/execute/expression'; -import type { State, Operation, ExecutionContext } from '../../src/types'; +import type { ExecutionContext } from '../../src/types'; type TestState = State & { data: { @@ -17,15 +19,18 @@ const createState = (data = {}) => ({ const logger = createMockLogger(undefined, { level: 'debug' }); -const createContext = (args = {}) => +const createContext = (args = {}, options = {}) => + // @ts-ignore ({ logger, plan: {}, - opts: {}, + opts: { + ...options, + }, notify: () => {}, report: () => {}, ...args, - } as unknown as ExecutionContext); + } as ExecutionContext); test.afterEach(() => { logger._reset(); @@ -38,7 +43,6 @@ test.afterEach(() => { test('run a live no-op job with one operation', async (t) => { const job = [(s: State) => s]; const state = createState(); - const context = createContext(); const result = await execute(context, job, state); @@ -108,7 +112,7 @@ test('configuration is removed from the result by default', async (t) => { test('statePropsToRemove removes multiple props from state', async (t) => { const job = [async (s: State) => s]; const statePropsToRemove = ['x', 'y']; - const context = createContext({ opts: { statePropsToRemove } }); + const context = createContext({}, { statePropsToRemove }); const result = await execute(context, job, { x: 1, y: 1, z: 1 }); t.deepEqual(result, { z: 1 }); @@ -118,7 +122,7 @@ test('statePropsToRemove logs to debug when a prop is removed', async (t) => { const job = [async (s: State) => s]; const statePropsToRemove = ['x']; - const context = createContext({ opts: { statePropsToRemove } }); + const context = createContext({}, { statePropsToRemove }); const result = 
await execute(context, job, { x: 1, y: 1, z: 1 }); t.deepEqual(result, { y: 1, z: 1 }); @@ -130,7 +134,7 @@ test('statePropsToRemove logs to debug when a prop is removed', async (t) => { test('no props are removed from state if an empty array is passed to statePropsToRemove', async (t) => { const job = [async (s: State) => s]; const statePropsToRemove = ['x', 'y']; - const context = createContext({ opts: { statePropsToRemove } }); + const context = createContext({}, { statePropsToRemove }); const state = { x: 1, configuration: 1 }; const result = await execute(context, job, state as any); @@ -140,48 +144,22 @@ test('no props are removed from state if an empty array is passed to statePropsT test('no props are removed from state if a falsy value is passed to statePropsToRemove', async (t) => { const job = [async (s: State) => s]; const statePropsToRemove = undefined; - const context = createContext({ opts: { statePropsToRemove } }); + const context = createContext({}, { statePropsToRemove }); const state = { x: 1, configuration: 1 }; const result = await execute(context, job, state as any); t.deepEqual(result, state); }); -test('config is removed from the result (strict)', async (t) => { +test('config is removed from the result', async (t) => { const job = [async (s: State) => s]; - const context = createContext({ opts: { strict: true } }); - - const result = await execute(context, job, { configuration: {} }); - t.deepEqual(result, {}); -}); + const context = createContext({ opts: {} }); -test('config is removed from the result (non-strict)', async (t) => { - const job = [async (s: State) => s]; - const context = createContext({ opts: { strict: false } }); const result = await execute(context, job, { configuration: {} }); t.deepEqual(result, {}); }); -test('output state is cleaned in strict mode', async (t) => { - const job = [ - async () => ({ - data: {}, - references: [], - configuration: {}, - x: true, - }), - ]; - - const context = createContext({ opts: { strict: 
true } }); - - const result = await execute(context, job, {}); - t.deepEqual(result, { - data: {}, - references: [], - }); -}); - -test('output state is left alone in non-strict mode', async (t) => { +test('output state is returned verbatim, apart from config', async (t) => { const state = { data: {}, references: [], @@ -190,7 +168,7 @@ test('output state is left alone in non-strict mode', async (t) => { }; const job = [async () => ({ ...state })]; - const context = createContext({ opts: { strict: false } }); + const context = createContext(); const result = await execute(context, job, {}); t.deepEqual(result, { @@ -352,7 +330,8 @@ test('Throws after custom timeout', async (t) => { const job = `export default [() => new Promise((resolve) => setTimeout(resolve, 100))];`; const context = createContext({ - opts: { jobLogger: logger, timeout: 10 }, + plan: { options: { timeout: 10 } }, + opts: { jobLogger: logger }, }); const state = createState(); await t.throwsAsync(async () => execute(context, job, state), { @@ -370,6 +349,6 @@ test('Operations log on start and end', async (t) => { const start = logger._find('debug', /starting operation /i); t.truthy(start); - const end = logger._find('info', /operation 1 complete in \dms/i); + const end = logger._find('debug', /operation 1 complete in \dms/i); t.truthy(end); }); diff --git a/packages/runtime/test/execute/plan.test.ts b/packages/runtime/test/execute/plan.test.ts index 1cdd96682..c7657163f 100644 --- a/packages/runtime/test/execute/plan.test.ts +++ b/packages/runtime/test/execute/plan.test.ts @@ -1,142 +1,112 @@ import test from 'ava'; import path from 'node:path'; import { createMockLogger } from '@openfn/logger'; -import { ExecutionPlan, JobNode } from '../../src/types'; -import execute from './../../src/execute/plan'; +import type { ExecutionPlan, Job } from '@openfn/lexicon'; + +import executePlan from './../../src/execute/plan'; +import { CompiledExecutionPlan } from '../../src'; let mockLogger = 
createMockLogger(undefined, { level: 'debug' }); +const createPlan = ( + steps: Job[], + options: Partial = {} +): ExecutionPlan => ({ + workflow: { + steps, + }, + options, +}); + +const createJob = ({ id, expression, next, state }: any): Job => ({ + id: id ?? 'job1', + expression: expression ?? 'export default [s => s]', + state, + next, +}); + test('throw for a circular job', async (t) => { - const plan: ExecutionPlan = { - start: 'job1', - jobs: [ - { - id: 'job1', - expression: 'export default [s => s]', - next: { job2: true }, - }, - { - id: 'job2', - expression: 'export default [s => s]', - next: { job1: true }, - }, - ], - }; - const e = await t.throwsAsync(() => execute(plan, {}, mockLogger)); + const plan = createPlan([ + createJob({ next: { job2: true } }), + createJob({ id: 'job2', next: { job1: true } }), + ]); + const e = await t.throwsAsync(() => executePlan(plan, {}, {}, mockLogger)); t.regex(e!.message, /circular dependency/i); }); test('throw for a job with multiple inputs', async (t) => { - // TODO maybe this isn't a good test - job1 and job2 both input to job3, but job2 never gets called - const plan: ExecutionPlan = { - start: 'job1', - jobs: [ - { - id: 'job1', - expression: 'export default [s => s]', - next: { job3: true }, - }, - { - id: 'job2', - expression: 'export default [s => s]', - next: { job3: true }, - }, - { - id: 'job3', - expression: 'export default [s => s]', - next: {}, - }, - ], - }; - const e = await t.throwsAsync(() => execute(plan, {}, mockLogger)); + const plan = createPlan([ + createJob({ next: { job3: true } }), + createJob({ id: 'job2', next: { job3: true } }), + createJob({ id: 'job3' }), + ]); + + const e = await t.throwsAsync(() => executePlan(plan, {}, {}, mockLogger)); t.regex(e!.message, /multiple dependencies/i); }); test('throw for a plan which references an undefined job', async (t) => { - const plan: ExecutionPlan = { - start: 'job1', - jobs: [ - { - id: 'job1', - expression: 'export default [s => s]', - next: 
{ job3: true }, - }, - ], - }; - const e = await t.throwsAsync(() => execute(plan, {}, mockLogger)); + const plan = createPlan([createJob({ next: { job3: true } })]); + + const e = await t.throwsAsync(() => executePlan(plan, {}, {}, mockLogger)); t.regex(e!.message, /cannot find job/i); }); test('throw for an illegal edge condition', async (t) => { - const plan: ExecutionPlan = { - jobs: [ - { - id: 'a', - expression: '.', - next: { - b: { - condition: '!!!', - }, + const plan = createPlan([ + createJob({ + next: { + job2: { + condition: '!!!', }, }, - { id: 'b' }, - ], - }; - const e = await t.throwsAsync(() => execute(plan, {}, mockLogger)); - t.regex(e!.message, /failed to compile edge condition a->b/i); -}); - -test('throw for an edge condition', async (t) => { - const plan: ExecutionPlan = { - jobs: [ - { - expression: 'x', - next: { - b: { - condition: '!!!!', - }, - }, - }, - { id: 'b' }, - ], - }; - const e = await t.throwsAsync(() => execute(plan, {}, mockLogger)); - t.regex(e!.message, /failed to compile edge condition/i); + }), + createJob({ id: 'job2' }), + ]); + const e = await t.throwsAsync(() => executePlan(plan, {}, {}, mockLogger)); + t.regex(e!.message, /failed to compile edge condition job1->job2/i); }); test('execute a one-job execution plan with inline state', async (t) => { - const plan: ExecutionPlan = { - jobs: [ - { - expression: 'export default [s => s.data.x]', - state: { data: { x: 22 } }, - }, - ], - }; - const result = (await execute(plan, {}, mockLogger)) as unknown as number; + const plan = createPlan([ + createJob({ + expression: 'export default [s => s.data.x]', + state: { data: { x: 22 } }, + }), + ]); + + const result: any = (await executePlan( + plan, + {}, + {}, + mockLogger + )) as unknown as number; t.is(result, 22); }); test('execute a one-job execution plan with initial state', async (t) => { - const plan: ExecutionPlan = { - initialState: { - data: { x: 33 }, - }, - jobs: [ - { - expression: 'export default [s => 
s.data.x]', - }, - ], + const plan = createPlan([ + createJob({ + expression: 'export default [s => s.data.x]', + }), + ]); + const input = { + data: { x: 33 }, }; - const result = (await execute(plan, {}, mockLogger)) as unknown as number; + + const result: any = await executePlan(plan, input, {}, mockLogger); + t.is(result, 33); }); test('lazy load initial state', async (t) => { - const plan: ExecutionPlan = { - initialState: 's1', - jobs: [{ id: 'a', expression: 'export default [s => s]' }], - }; + const plan = createPlan([ + createJob({ + expression: 'export default [s => s]', + }), + ]); + const state = 's1'; + const states = { s1: { data: { result: 42 } } }; const options = { callbacks: { @@ -144,13 +114,10 @@ test('lazy load initial state', async (t) => { }, }; - const result = await execute(plan, options, mockLogger); + const result: any = await executePlan(plan, state, options, mockLogger); t.deepEqual(result, states.s1); }); -test.todo('lazy load initial state with log'); -test.todo('lazy load initial state with notify'); - test('execute a one-job execution plan and notify init-start and init-complete', async (t) => { let notifications: Record = {}; @@ -158,14 +125,11 @@ test('execute a one-job execution plan and notify init-start and init-complete', data: { x: 33 }, }; - const plan: ExecutionPlan = { - initialState: state, - jobs: [ - { - expression: 'export default [s => s.data.x]', - }, - ], - }; + const plan = createPlan([ + createJob({ + expression: 'export default [s => s.data.x]', + }), + ]); const notify = (event: string, payload: any) => { if (notifications[event]) { @@ -176,7 +140,7 @@ test('execute a one-job execution plan and notify init-start and init-complete', const options = { callbacks: { notify } }; - await execute(plan, options, mockLogger); + await executePlan(plan, state, options, mockLogger); t.truthy(notifications['init-start']); t.truthy(notifications['init-complete']); @@ -184,203 +148,172 @@ test('execute a one-job execution plan 
and notify init-start and init-complete', }); test('execute a job with a simple truthy "precondition" or "trigger node"', async (t) => { - const plan: ExecutionPlan = { - jobs: [ - { - next: { - job: { - condition: 'true', - }, + const plan = createPlan([ + createJob({ + next: { + job: { + condition: 'true', }, }, - { - id: 'job', - expression: 'export default [() => ({ data: { done: true } })]', - }, - ], - }; - const result = await execute(plan, {}, mockLogger); + }), + createJob({ + id: 'job', + expression: 'export default [() => ({ data: { done: true } })]', + }), + ]); + + const result: any = await executePlan(plan, {}, {}, mockLogger); t.true(result.data.done); }); test('do not execute a job with a simple falsy "precondition" or "trigger node"', async (t) => { - const plan: ExecutionPlan = { - jobs: [ - { - next: { - job: { - condition: 'false', - }, + const plan = createPlan([ + createJob({ + next: { + job: { + condition: 'false', }, }, - { - id: 'job', - expression: 'export default [() => ({ data: { done: true } })]', - }, - ], - }; - const result = await execute(plan, {}, mockLogger); + }), + createJob({ + id: 'job', + expression: 'export default [() => ({ data: { done: true } })]', + }), + ]); + + const result: any = await executePlan(plan, {}, {}, mockLogger); t.falsy(result.data.done); }); test('execute a job with a valid "precondition" or "trigger node"', async (t) => { - const plan: ExecutionPlan = { - initialState: { data: { x: 10 } }, - jobs: [ + const plan = createPlan( + [ + // @ts-ignore TODO make this a trigger node when we have the types { + id: 'a', next: { job: { - condition: 'state.data.x === 10', + condition: 'true', }, }, }, - { + createJob({ id: 'job', expression: 'export default [() => ({ data: { done: true } })]', - }, + }), ], - }; - const result = await execute(plan, {}, mockLogger); + { + initialState: { data: { x: 10 } }, + } + ); + + const result: any = await executePlan(plan, {}, {}, mockLogger); t.true(result.data.done); }); 
test('merge initial and inline state', async (t) => { - const plan: ExecutionPlan = { - initialState: { data: { x: 33 } }, - jobs: [ - { - expression: 'export default [s => s]', - state: { data: { y: 11 } }, - }, - ], - }; - const result = await execute(plan, {}, mockLogger); + const plan = createPlan([ + createJob({ + expression: 'export default [s => s]', + state: { data: { y: 11 } }, + }), + ]); + const state = { data: { x: 33 } }; + + const result: any = await executePlan(plan, state, {}, mockLogger); t.is(result.data.x, 33); t.is(result.data.y, 11); }); test('Initial state overrides inline data', async (t) => { - const plan: ExecutionPlan = { - initialState: { data: { x: 34 } }, - jobs: [ - { - expression: 'export default [s => s]', - state: { data: { x: 11 } }, - }, - ], - }; - const result = await execute(plan, {}, mockLogger); + const plan = createPlan([ + createJob({ + expression: 'export default [s => s]', + state: { data: { x: 11 } }, + }), + ]); + const state = { data: { x: 34 } }; + + const result: any = await executePlan(plan, state, {}, mockLogger); t.is(result.data.x, 34); }); test('Previous state overrides inline data', async (t) => { - const plan: ExecutionPlan = { - jobs: [ - // This will return x as 5 - { - id: 'job1', - expression: 'export default [s => s]', - state: { data: { x: 5 } }, - next: { - job2: true, - }, - }, - - // This will receive x as 5, prefer it to the default x as 88, and return it plus 1 - { - id: 'job2', - expression: 'export default [s => { s.data.x +=1 ; return s; }]', - state: { data: { x: 88 } }, - }, - ], - }; - const result = await execute(plan, {}, mockLogger); + const plan = createPlan([ + // This will return x as 5 + createJob({ + state: { data: { x: 5 } }, + next: { + job2: true, + }, + }), + // This will receive x as 5, prefer it to the default x as 88, and return it plus 1 + createJob({ + id: 'job2', + expression: 'export default [s => { s.data.x +=1 ; return s; }]', + state: { data: { x: 88 } }, + }), + ]); + 
const result: any = await executePlan(plan, {}, {}, mockLogger); t.is(result.data.x, 6); }); -test('only allowed state is passed through in strict mode', async (t) => { - const plan: ExecutionPlan = { - jobs: [ - { - expression: - 'export default [s => ({ data: {}, references: [], x: 22, y: 33 })]', - next: { - job2: true, - }, - }, - - { - id: 'job2', - // Throw if we receive unexpected stuff in state - expression: - 'export default [s => { if (s.x || s.y) { throw new Error() }; return s;}]', - }, - ], - }; - const result = await execute(plan, { strict: true }, mockLogger); - t.deepEqual(result, { - data: {}, - references: [], - }); -}); - -test('Jobs only receive state from upstream jobs', async (t) => { +test('steps only receive state from upstream steps', async (t) => { const assert = (expr: string) => `if (!(${expr})) throw new Error('ASSERT FAIL')`; - const plan: ExecutionPlan = { - jobs: [ - { - id: 'start', - expression: 'export default [s => s]', - state: { data: { x: 1, y: 1 } }, - next: { - 'x-a': true, - 'y-a': true, - }, + const plan = createPlan([ + { + id: 'start', + expression: 'export default [s => s]', + state: { data: { x: 1, y: 1 } }, + next: { + 'x-a': true, + 'y-a': true, }, + }, - { - id: 'x-a', - expression: `export default [s => { - ${assert('s.data.x === 1')}; - ${assert('s.data.y === 1')}; - s.data.x += 1; - return s; - }]`, - next: { 'x-b': true }, - }, - { - id: 'x-b', - expression: `export default [s => { - ${assert('s.data.x === 2')}; - ${assert('s.data.y === 1')}; - return s; - }]`, - }, + { + id: 'x-a', + expression: `export default [s => { + ${assert('s.data.x === 1')}; + ${assert('s.data.y === 1')}; + s.data.x += 1; + return s; + }]`, + next: { 'x-b': true }, + }, + { + id: 'x-b', + expression: `export default [s => { + ${assert('s.data.x === 2')}; + ${assert('s.data.y === 1')}; + return s; + }]`, + }, - { - id: 'y-a', - expression: `export default [s => { - ${assert('s.data.x === 1')}; - ${assert('s.data.y === 1')}; - s.data.y += 
1; - return s; - }]`, - next: { 'y-b': true }, - }, - { - id: 'y-b', - expression: `export default [s => { - ${assert('s.data.x === 1')}; - ${assert('s.data.y === 2')}; - return s; - }]`, - }, - ], - }; + { + id: 'y-a', + expression: `export default [s => { + ${assert('s.data.x === 1')}; + ${assert('s.data.y === 1')}; + s.data.y += 1; + return s; + }]`, + next: { 'y-b': true }, + }, + { + id: 'y-b', + expression: `export default [s => { + ${assert('s.data.x === 1')}; + ${assert('s.data.y === 2')}; + return s; + }]`, + }, + ]); - const result = await execute(plan, {}, mockLogger); + const result: any = await executePlan(plan, {}, {}, mockLogger); // explicit check that no assertion failed and wrote an error to state t.falsy(result.error); @@ -392,26 +325,24 @@ test('Jobs only receive state from upstream jobs', async (t) => { }); }); -test('all state is passed through in non-strict mode', async (t) => { - const plan: ExecutionPlan = { - jobs: [ - { - expression: - 'export default [s => ({ data: {}, references: [], x: 22, y: 33 })]', - next: { - job2: true, - }, - }, - - { - id: 'job2', - // Throw if we receive unexpected stuff in state - expression: - 'export default [s => { if (!s.x || !s.y || !s.references) { throw new Error() }; return s;}]', - }, - ], - }; - const result = await execute(plan, { strict: false }, mockLogger); +test('all state is passed through successive jobs', async (t) => { + const plan = createPlan([ + createJob({ + expression: + 'export default [s => ({ data: {}, references: [], x: 22, y: 33 })]', + next: { + job2: true, + }, + }), + createJob({ + id: 'job2', + // Throw if we receive unexpected stuff in state + expression: + 'export default [s => { if (!s.x || !s.y || !s.references) { throw new Error() }; return s;}]', + }), + ]); + + const result: any = await executePlan(plan, {}, {}, mockLogger); t.deepEqual(result, { data: {}, references: [], @@ -421,112 +352,102 @@ test('all state is passed through in non-strict mode', async (t) => { }); 
test('execute edge based on state in the condition', async (t) => { - const plan: ExecutionPlan = { - jobs: [ - { - id: 'job1', - state: {}, - expression: 'export default [(s) => { s.data.x = 10; return s;}]', - next: { - job2: { condition: 'state.data.x === 10' }, - }, + const plan = createPlan([ + { + id: 'job1', + state: {}, + expression: 'export default [(s) => { s.data.x = 10; return s;}]', + next: { + job2: { condition: 'state.data.x === 10' }, }, - { - id: 'job2', - expression: 'export default [() => ({ data: { y: 20 } })]', - }, - ], - }; - const result = await execute(plan, {}, mockLogger); + }, + { + id: 'job2', + expression: 'export default [() => ({ data: { y: 20 } })]', + }, + ]); + const result: any = await executePlan(plan, {}, {}, mockLogger); t.is(result.data?.y, 20); }); test('skip edge based on state in the condition ', async (t) => { - const plan: ExecutionPlan = { - jobs: [ - { - id: 'job1', - state: {}, - expression: 'export default [s => { s.data.x = 10; return s;}]', - next: { - job2: { condition: 'false' }, - }, - }, - { - id: 'job2', - expression: 'export default [() => ({ y: 20 })]', + const plan = createPlan([ + { + id: 'job1', + state: {}, + expression: 'export default [s => { s.data.x = 10; return s;}]', + next: { + job2: { condition: 'false' }, }, - ], - }; - const result = await execute(plan, {}, mockLogger); + }, + { + id: 'job2', + expression: 'export default [() => ({ y: 20 })]', + }, + ]); + const result: any = await executePlan(plan, {}, {}, mockLogger); t.is(result.data?.x, 10); }); test('do not traverse a disabled edge', async (t) => { - const plan: ExecutionPlan = { - jobs: [ - { - id: 'job1', - expression: 'export default [(s) => { s.data.x = 10; return s;}]', - next: { - job2: { - disabled: true, - condition: 'true', - }, + const plan = createPlan([ + { + id: 'job1', + expression: 'export default [(s) => { s.data.x = 10; return s;}]', + next: { + job2: { + disabled: true, + condition: 'true', }, }, - { - id: 'job2', - 
expression: 'export default [() => ({ data: { x: 20 } })]', - }, - ], - }; - const result = await execute(plan, {}, mockLogger); + }, + { + id: 'job2', + expression: 'export default [() => ({ data: { x: 20 } })]', + }, + ]); + const result: any = await executePlan(plan, {}, {}, mockLogger); t.is(result.data?.x, 10); }); test('execute a two-job execution plan', async (t) => { - const plan: ExecutionPlan = { - initialState: { data: { x: 0 } }, - jobs: [ - { - id: 'job1', - expression: 'export default [s => { s.data.x += 1; return s; } ]', - next: { job2: true }, - }, - { - id: 'job2', - expression: 'export default [s => { s.data.x += 1; return s; } ]', - }, - ], - }; - const result = await execute(plan, {}, mockLogger); + const plan = createPlan([ + { + id: 'job1', + expression: 'export default [s => { s.data.x += 1; return s; } ]', + next: { job2: true }, + }, + { + id: 'job2', + expression: 'export default [s => { s.data.x += 1; return s; } ]', + }, + ]); + const state = { data: { x: 0 } }; + + const result: any = await executePlan(plan, state, {}, mockLogger); t.is(result.data.x, 2); }); test('only execute one job in a two-job execution plan', async (t) => { - const plan: ExecutionPlan = { - initialState: { data: { x: 0 } }, - jobs: [ - { - id: 'job1', - expression: 'export default [s => { s.data.x += 1; return s; } ]', - next: { job2: false }, - }, - { - id: 'job2', - expression: 'export default [s => { s.data.x += 1; return s; } ]', - }, - ], - }; - const result = await execute(plan, {}, mockLogger); + const plan = createPlan([ + { + id: 'job1', + expression: 'export default [s => { s.data.x += 1; return s; } ]', + next: { job2: false }, + }, + { + id: 'job2', + expression: 'export default [s => { s.data.x += 1; return s; } ]', + }, + ]); + const state = { data: { x: 0 } }; + const result: any = await executePlan(plan, state, {}, mockLogger); t.is(result.data.x, 1); }); -test('execute a two-job execution plan with custom start in state', async (t) => { - const 
plan: ExecutionPlan = { - start: 'job2', - jobs: [ +test('execute a two-job execution plan with custom start', async (t) => { + const plan = createPlan( + [ { id: 'job1', expression: 'export default [() => ({ data: { result: 11 } }) ]', @@ -537,36 +458,16 @@ test('execute a two-job execution plan with custom start in state', async (t) => next: { job1: true }, }, ], - }; - const result = await execute(plan, {}, mockLogger); - t.is(result.data.result, 11); -}); + { start: 'job2' } + ); -test('execute a two-job execution plan with custom start in options', async (t) => { - const plan: ExecutionPlan = { - start: 'job1', - initialState: { start: 'job2' }, - jobs: [ - { - id: 'job1', - expression: 'export default [() => ({ data: { result: 11 } }) ]', - }, - { - id: 'job2', - expression: 'export default [() => ({ data: { result: 1 } }) ]', - next: { job1: true }, - }, - ], - }; - const result = await execute(plan, {}, mockLogger); + const result: any = await executePlan(plan, {}, {}, mockLogger); t.is(result.data.result, 11); }); test('Return when there are no more edges', async (t) => { - const plan: ExecutionPlan = { - start: 'job1', - initialState: { data: { x: 0 } }, - jobs: [ + const plan = createPlan( + [ { id: 'job1', expression: 'export default [s => { s.data.x += 1; return s; } ]', @@ -576,33 +477,36 @@ test('Return when there are no more edges', async (t) => { expression: 'export default [s => { s.data.x += 1; return s; } ]', }, ], - }; - const result = await execute(plan, {}, mockLogger); + { start: 'job1' } + ); + const state = { data: { x: 0 } }; + + const result: any = await executePlan(plan, state, {}, mockLogger); t.is(result.data?.x, 1); }); test('execute a 5 job execution plan', async (t) => { - const plan: ExecutionPlan = { - initialState: { data: { x: 0 } }, - start: '1', - jobs: [], - } as ExecutionPlan; + const steps = []; for (let i = 1; i < 6; i++) { - plan.jobs.push({ + steps.push({ id: `${i}`, expression: 'export default [s => { s.data.x += 1; 
return s; } ]', next: i === 5 ? null : { [`${i + 1}`]: true }, - } as JobNode); + } as Job); } - const result = await execute(plan, {}, mockLogger); + + const plan = createPlan(steps, { + start: '1', + }); + const state = { data: { x: 0 } }; + + const result: any = await executePlan(plan, state, {}, mockLogger); t.is(result.data.x, 5); }); test('execute multiple steps in "parallel"', async (t) => { - const plan: ExecutionPlan = { - start: 'start', - initialState: { data: { x: 0 } }, - jobs: [ + const plan = createPlan( + [ { id: 'start', expression: 'export default [s => s]', @@ -625,8 +529,11 @@ test('execute multiple steps in "parallel"', async (t) => { expression: 'export default [s => { s.data.x += 1; return s; } ]', }, ], - }; - const result = await execute(plan, {}, mockLogger); + { start: 'start' } + ); + const state = { data: { x: 0 } }; + + const result: any = await executePlan(plan, state, {}, mockLogger); t.deepEqual(result, { a: { data: { x: 1 } }, b: { data: { x: 1 } }, @@ -635,10 +542,8 @@ test('execute multiple steps in "parallel"', async (t) => { }); test('isolate state in "parallel" execution', async (t) => { - const plan: ExecutionPlan = { - start: 'start', - initialState: { data: { x: 0 } }, - jobs: [ + const plan = createPlan( + [ { id: 'start', expression: 'export default [s => s]', @@ -658,17 +563,17 @@ test('isolate state in "parallel" execution', async (t) => { 'export default [s => { if (s.data.b) { throw "e" }; s.data.c = true; return s }]', }, ], - }; + { start: 'start' } + ); + const state = { data: { x: 0 } }; - const result = await execute(plan, {}, mockLogger); + const result: any = await executePlan(plan, state, {}, mockLogger); t.falsy(result.errors); }); test('isolate state in "parallel" execution with deeper trees', async (t) => { - const plan: ExecutionPlan = { - start: 'start', - initialState: { data: { x: 0 } }, - jobs: [ + const plan = createPlan( + [ { id: 'start', expression: 'export default [s => s]', @@ -701,36 +606,35 @@ 
test('isolate state in "parallel" execution with deeper trees', async (t) => { 'export default [s => { if (s.data.c) { throw "e" }; s.data.b = true; return s }]', }, ], - }; + { start: 'start' } + ); + const state = { data: { x: 0 } }; - const result = await execute(plan, {}, mockLogger); + const result: any = await executePlan(plan, state, {}, mockLogger); t.falsy(result.errors); }); test('"parallel" execution with multiple leaves should write multiple results to state', async (t) => { - const plan: ExecutionPlan = { - start: 'start', - jobs: [ - { - id: 'start', - expression: 'export default [s => s]', - next: { - 'job-b': true, - 'job-c': true, - }, - }, - { - id: 'job-b', - expression: 'export default [s => { s.data.b = true; return s }]', + const plan = createPlan([ + { + id: 'start', + expression: 'export default [s => s]', + next: { + 'job-b': true, + 'job-c': true, }, - { - id: 'job-c', - expression: 'export default [s => { s.data.c = true; return s }]', - }, - ], - }; + }, + { + id: 'job-b', + expression: 'export default [s => { s.data.b = true; return s }]', + }, + { + id: 'job-c', + expression: 'export default [s => { s.data.c = true; return s }]', + }, + ]); - const result = await execute(plan, {}, mockLogger); + const result: any = await executePlan(plan, {}, {}, mockLogger); // Each leaf should write to its own place on state t.deepEqual(result, { 'job-b': { @@ -747,32 +651,30 @@ test('"parallel" execution with multiple leaves should write multiple results to }); test('return an error in state', async (t) => { - const plan: ExecutionPlan = { - jobs: [ - { - id: 'a', - state: {}, - expression: 'export default [s => { throw Error("e")}]', - }, - ], - }; - const result = await execute(plan, {}, mockLogger); + const plan = createPlan([ + { + id: 'a', + state: {}, + expression: 'export default [s => { throw Error("e")}]', + }, + ]); + + const result: any = await executePlan(plan, {}, {}, mockLogger); t.truthy(result.errors); t.is(result.errors.a.message, 
'e'); }); // Fix for https://github.com/OpenFn/kit/issues/317 test('handle non-standard error objects', async (t) => { - const plan: ExecutionPlan = { - jobs: [ - { - id: 'a', - state: {}, - expression: 'export default [s => { throw "wibble" }]', - }, - ], - }; - const result = await execute(plan, {}, mockLogger); + const plan = createPlan([ + { + id: 'a', + state: {}, + expression: 'export default [s => { throw "wibble" }]', + }, + ]); + + const result: any = await executePlan(plan, {}, {}, mockLogger); t.truthy(result.errors); const err = result.errors.a; t.is(err.type, 'JobError'); @@ -780,186 +682,174 @@ test('handle non-standard error objects', async (t) => { }); test('keep executing after an error', async (t) => { - const plan: ExecutionPlan = { - jobs: [ - { - id: 'a', - state: {}, - expression: 'export default [s => { throw Error("e"); state.x = 20 }]', - next: { - b: true, - }, - }, - { - id: 'b', - expression: 'export default [() => ({ y: 20 })]', + const plan = createPlan([ + { + id: 'a', + state: {}, + expression: 'export default [s => { throw Error("e"); state.x = 20 }]', + next: { + b: true, }, - ], - }; - const result = await execute(plan, {}, mockLogger); + }, + { + id: 'b', + expression: 'export default [() => ({ y: 20 })]', + }, + ]); + + const result: any = await executePlan(plan, {}, {}, mockLogger); t.is(result.y, 20); t.falsy(result.x); }); test('simple on-error handler', async (t) => { - const plan: ExecutionPlan = { - jobs: [ - { - id: 'job1', - state: {}, - expression: 'export default [s => { throw Error("e")}]', - next: { - job2: { condition: 'state.errors' }, - job3: { condition: '!state.errors' }, - }, - }, - { - id: 'job2', - expression: 'export default [() => ({ y: 20 })]', - }, - { - id: 'job3', - expression: 'export default [() => ({ x: 20 })]', + const plan = createPlan([ + { + id: 'job1', + state: {}, + expression: 'export default [s => { throw Error("e")}]', + next: { + job2: { condition: 'state.errors' }, + job3: { condition: 
'!state.errors' }, }, - ], - }; - const result = await execute(plan, {}, mockLogger); + }, + { + id: 'job2', + expression: 'export default [() => ({ y: 20 })]', + }, + { + id: 'job3', + expression: 'export default [() => ({ x: 20 })]', + }, + ]); + + const result: any = await executePlan(plan, {}, {}, mockLogger); t.is(result.y, 20); t.falsy(result.x); }); test('log appopriately on error', async (t) => { - const plan: ExecutionPlan = { - jobs: [ - { - id: 'job1', - state: {}, - expression: 'export default [s => { throw Error("e")}]', - }, - ], - }; + const plan = createPlan([ + { + id: 'job1', + state: {}, + expression: 'export default [s => { throw Error("e")}]', + }, + ]); const logger = createMockLogger(undefined, { level: 'debug' }); - await execute(plan, {}, logger); - const err = logger._find('error', /failed job/i); + await executePlan(plan, {}, {}, logger); + const err = logger._find('error', /failed step/i); t.truthy(err); - t.regex(err!.message as string, /Failed job job1 after \d+ms/i); + t.regex(err!.message as string, /Failed step job1 after \d+ms/i); t.truthy(logger._find('error', /JobError: e/)); t.truthy(logger._find('error', /Check state.errors.job1 for details/i)); }); -test('jobs do not share a local scope', async (t) => { - const plan: ExecutionPlan = { - initialState: { data: {} }, - jobs: [ - { - // declare x in this expression's scope - expression: 'const x = 10; export default [s => s];', - next: { - b: true, - }, - }, - { - id: 'b', - // x should not defined here and this will throw - expression: 'export default [s => { s.data.x = x; return s; }]', +test('steps do not share a local scope', async (t) => { + const plan = createPlan([ + { + id: 'job1', + // declare x in this expression's scope + expression: 'const x = 10; export default [s => s];', + next: { + b: true, }, - ], - }; - await t.throwsAsync(() => execute(plan, {}, mockLogger), { + }, + { + id: 'b', + // x should not defined here and this will throw + expression: 'export default [s 
=> { s.data.x = x; return s; }]', + }, + ]); + await t.throwsAsync(() => executePlan(plan, {}, {}, mockLogger), { message: 'ReferenceError: x is not defined', name: 'RuntimeCrash', }); }); -test('jobs do not share a global scope', async (t) => { - const plan: ExecutionPlan = { - initialState: { data: {} }, - jobs: [ - { - expression: 'export default [s => { x = 10; return s; }]', - next: { - b: true, - }, - }, - { - id: 'b', - expression: 'export default [s => { s.data.x = x; return s; }]', +test('steps do not share a global scope', async (t) => { + const plan = createPlan([ + { + id: 'job1', + expression: 'export default [s => { x = 10; return s; }]', + next: { + b: true, }, - ], - }; + }, + { + id: 'b', + expression: 'export default [s => { s.data.x = x; return s; }]', + }, + ]); - await t.throwsAsync(() => execute(plan, {}, mockLogger), { + await t.throwsAsync(() => executePlan(plan, {}, {}, mockLogger), { message: 'ReferenceError: x is not defined', name: 'RuntimeCrash', }); }); -test('jobs do not share a globalThis object', async (t) => { - const plan: ExecutionPlan = { - initialState: { data: {} }, - jobs: [ - { - expression: 'export default [(s) => { globalThis.x = 10; return s; }]', - next: { - b: true, - }, - }, - { - id: 'b', - expression: - 'export default [(s) => { s.data.x = globalThis.x; return s; }]', +test('steps do not share a globalThis object', async (t) => { + const plan = createPlan([ + { + id: 'job1', + expression: 'export default [(s) => { globalThis.x = 10; return s; }]', + next: { + b: true, }, - ], - }; - const result = await execute(plan, {}, mockLogger); + }, + { + id: 'b', + expression: + 'export default [(s) => { s.data.x = globalThis.x; return s; }]', + }, + ]); + + const result: any = await executePlan(plan, {}, {}, mockLogger); t.deepEqual(result, { data: {} }); }); // TODO this fails right now // https://github.com/OpenFn/kit/issues/213 -test.skip('jobs cannot scribble on globals', async (t) => { - const plan: ExecutionPlan = { - 
initialState: { data: {} }, - jobs: [ - { - expression: 'export default [s => { console.x = 10; return s; }]', - next: { - b: true, - }, - }, - { - id: 'b', - expression: 'export default [s => { s.data.x = console.x; return s; }]', +test.skip('steps cannot scribble on globals', async (t) => { + const plan = createPlan([ + { + id: 'job1', + expression: 'export default [s => { console.x = 10; return s; }]', + next: { + b: true, }, - ], - }; - const result = await execute(plan, {}, mockLogger); + }, + { + id: 'b', + expression: 'export default [s => { s.data.x = console.x; return s; }]', + }, + ]); + + const result: any = await executePlan(plan, {}, {}, mockLogger); t.falsy(result.data.x); }); // TODO this fails right now // https://github.com/OpenFn/kit/issues/213 -test.skip('jobs cannot scribble on adaptor functions', async (t) => { - const plan: ExecutionPlan = { - initialState: { data: { x: 0 } }, - jobs: [ - { - expression: - 'import { fn } from "@openfn/language-common"; fn.x = 10; export default [s => s]', - next: { - b: true, - }, - }, - { - id: 'b', - expression: - 'import { fn } from "@openfn/language-common"; export default [s => { s.data.x = fn.x; return s; }]', +test.skip('steps cannot scribble on adaptor functions', async (t) => { + const plan = createPlan([ + { + id: 'job1', + expression: + 'import { fn } from "@openfn/language-common"; fn.x = 10; export default [s => s]', + next: { + b: true, }, - ], - }; + }, + { + id: 'b', + expression: + 'import { fn } from "@openfn/language-common"; export default [s => { s.data.x = fn.x; return s; }]', + }, + ]); const options = { linker: { modules: { @@ -970,11 +860,11 @@ test.skip('jobs cannot scribble on adaptor functions', async (t) => { }, }; - const result = await execute(plan, options, mockLogger); + const result: any = await executePlan(plan, {}, options, mockLogger); t.falsy(result.data.x); }); -test('jobs can write circular references to state without blowing up downstream', async (t) => { +test('steps 
can write circular references to state without blowing up downstream', async (t) => { const expression = `export default [(s) => { const a = {}; const b = { a }; @@ -984,21 +874,19 @@ test('jobs can write circular references to state without blowing up downstream' return s; }] `; - const plan: ExecutionPlan = { - initialState: { data: {} }, - jobs: [ - { - expression, - next: { b: true }, - }, - { - id: 'b', - expression: 'export default [(s => s)]', - }, - ], - }; + const plan = createPlan([ + { + id: 'job1', + expression, + next: { b: true }, + }, + { + id: 'b', + expression: 'export default [(s => s)]', + }, + ]); - const result = await execute(plan, {}, mockLogger); + const result: any = await executePlan(plan, {}, {}, mockLogger); t.notThrows(() => JSON.stringify(result)); t.deepEqual(result, { @@ -1010,7 +898,7 @@ test('jobs can write circular references to state without blowing up downstream' }); }); -test('jobs cannot pass circular references to each other', async (t) => { +test('steps cannot pass circular references to each other', async (t) => { const expression = `export default [(s) => { const a = {}; const b = { a }; @@ -1020,101 +908,107 @@ test('jobs cannot pass circular references to each other', async (t) => { return s; }] `; - const plan: ExecutionPlan = { - initialState: { data: {} }, - jobs: [ - { - expression, - next: { b: true }, - }, - { - id: 'b', - expression: `export default [(s => { + const plan = createPlan([ + { + expression, + next: { b: true }, + }, + { + id: 'b', + expression: `export default [(s => { s.data.answer = s.data.ref.b.a; return s })]`, - }, - ], - }; + }, + ]); - const result = await execute(plan, {}, mockLogger); + const result: any = await executePlan(plan, {}, {}, mockLogger); t.notThrows(() => JSON.stringify(result)); t.is(result.data.answer, '[Circular]'); }); -test('jobs can write functions to state without blowing up downstream', async (t) => { - const plan: ExecutionPlan = { - initialState: { data: {} }, - jobs: [ 
- { - next: { b: true }, - expression: `export default [(s) => { +test('steps can write functions to state without blowing up downstream', async (t) => { + const plan = createPlan([ + { + next: { b: true }, + expression: `export default [(s) => { s.data = { x: () => 22 } return s; }]`, - }, - { - id: 'b', - expression: 'export default [(s) => s]', - }, - ], - }; + }, + { + id: 'b', + expression: 'export default [(s) => s]', + }, + ]); - const result = await execute(plan, {}, mockLogger); + const result: any = await executePlan(plan, {}, {}, mockLogger); t.notThrows(() => JSON.stringify(result)); t.deepEqual(result, { data: {} }); }); -test('jobs cannot pass functions to each other', async (t) => { - const plan: ExecutionPlan = { - initialState: { data: {} }, - jobs: [ - { - next: { b: true }, - expression: `export default [(s) => { +test('steps cannot pass functions to each other', async (t) => { + const plan = createPlan([ + { + next: { b: true }, + expression: `export default [(s) => { s.data = { x: () => 22 } return s; }]`, - }, - { - id: 'b', - expression: `export default [ + }, + { + id: 'b', + expression: `export default [ (s) => { s.data.x(); return s; } ]`, - }, - ], - }; + }, + ]); - const result = await execute(plan, {}, mockLogger); + const result: any = await executePlan(plan, {}, {}, mockLogger); const error = result.errors.b; t.is(error.type, 'TypeError'); t.is(error.message, 'TypeError: s.data.x is not a function'); }); -test('Plans log for each job start and end', async (t) => { - const plan: ExecutionPlan = { - jobs: [ - { - id: 'a', - expression: 'export default [s => s]', - }, - ], - }; +test('Plans log step ids for each job start and end', async (t) => { + const plan = createPlan([ + { + id: 'a', + expression: 'export default [s => s]', + }, + ]); + const logger = createMockLogger(undefined, { level: 'debug' }); + await executePlan(plan, {}, {}, logger); + const start = logger._find('info', /starting step a/i); + t.is(start!.message, 'Starting 
step a'); + + const end = logger._find('success', /completed step a/i); + t.regex(end!.message as string, /Completed step a in \d+ms/); +}); + +test('Plans log step names for each job start and end', async (t) => { + const plan = createPlan([ + { + id: 'a', + name: 'do-the-thing', + expression: 'export default [s => s]', + }, + ]); const logger = createMockLogger(undefined, { level: 'debug' }); - await execute(plan, {}, logger); + await executePlan(plan, {}, {}, logger); - const start = logger._find('always', /starting job/i); - t.is(start!.message, 'Starting job a'); + const start = logger._find('info', /starting step do-the-thing/i); + t.is(start!.message, 'Starting step do-the-thing'); - const end = logger._find('success', /completed job/i); - t.regex(end!.message as string, /Completed job a in \d+ms/); + const end = logger._find('success', /completed step do-the-thing/i); + t.regex(end!.message as string, /Completed step do-the-thing in \d+ms/); }); diff --git a/packages/runtime/test/execute/job.test.ts b/packages/runtime/test/execute/step.test.ts similarity index 73% rename from packages/runtime/test/execute/job.test.ts rename to packages/runtime/test/execute/step.test.ts index d1a36cc68..2fbd9205f 100644 --- a/packages/runtime/test/execute/job.test.ts +++ b/packages/runtime/test/execute/step.test.ts @@ -6,9 +6,10 @@ import { NOTIFY_JOB_ERROR, NOTIFY_JOB_START, } from '../../src'; -import execute from '../../src/execute/job'; +import execute from '../../src/execute/step'; -import type { ExecutionContext, State } from '../../src/types'; +import type { ExecutionContext } from '../../src/types'; +import { State } from '@openfn/lexicon'; const createState = (data = {}) => ({ data: data, @@ -20,7 +21,9 @@ const logger = createMockLogger(undefined, { level: 'debug' }); const createContext = (args = {}) => ({ logger, - plan: {}, + plan: { + options: {}, + }, opts: {}, notify: () => {}, report: () => {}, @@ -31,35 +34,35 @@ test.afterEach(() => { logger._reset(); }); 
-test.serial('resolve and return next for a simple job', async (t) => { - const job = { +test.serial('resolve and return next for a simple step', async (t) => { + const step = { id: 'j', expression: [(s: State) => s], next: { k: true, a: false }, }; const initialState = createState(); const context = createContext(); - const { next, state } = await execute(context, job, initialState); + const { next, state } = await execute(context, step, initialState); t.deepEqual(state, { data: {} }); t.deepEqual(next, ['k']); }); -test.serial('resolve and return next for a trigger-style job', async (t) => { - const job = { +test.serial('resolve and return next for a trigger-style step', async (t) => { + const step = { id: 'j', next: { k: true, a: false }, }; const initialState = createState(); const context = createContext(); - const { next, state } = await execute(context, job, initialState); + const { next, state } = await execute(context, step, initialState); t.deepEqual(state, initialState); t.deepEqual(next, ['k']); }); -test.serial('resolve and return next for a failed job', async (t) => { - const job = { +test.serial('resolve and return next for a failed step', async (t) => { + const step = { id: 'j', expression: [ () => { @@ -70,7 +73,7 @@ test.serial('resolve and return next for a failed job', async (t) => { }; const initialState = createState(); const context = createContext(); - const { next, state } = await execute(context, job, initialState); + const { next, state } = await execute(context, step, initialState); // Config should still be scrubbed from data t.deepEqual(state, { data: {} }); @@ -78,7 +81,7 @@ test.serial('resolve and return next for a failed job', async (t) => { }); test.serial(`notify ${NOTIFY_JOB_START}`, async (t) => { - const job = { + const step = { id: 'j', expression: [(s: State) => s], }; @@ -92,32 +95,32 @@ test.serial(`notify ${NOTIFY_JOB_START}`, async (t) => { const context = createContext({ notify }); - await execute(context, job, state); 
+ await execute(context, step, state); }); test.serial( - `don't notify ${NOTIFY_JOB_START} for trigger-style jobs`, + `don't notify ${NOTIFY_JOB_START} for trigger-style steps`, async (t) => { - const job = { + const step = { id: 'j', }; const state = createState(); const notify = (event: string, payload?: any) => { if (event === NOTIFY_JOB_START) { - t.fail('should not notify job-start for trigger nodes'); + t.fail('should not notify step-start for trigger nodes'); } }; const context = createContext({ notify }); - await execute(context, job, state); + await execute(context, step, state); t.pass('all ok'); } ); test.serial(`notify ${NOTIFY_JOB_COMPLETE} with no next`, async (t) => { - const job = { + const step = { id: 'j', expression: [(s: State) => s], }; @@ -139,11 +142,11 @@ test.serial(`notify ${NOTIFY_JOB_COMPLETE} with no next`, async (t) => { const context = createContext({ notify }); - await execute(context, job, state); + await execute(context, step, state); }); test.serial(`notify ${NOTIFY_JOB_COMPLETE} with two nexts`, async (t) => { - const job = { + const step = { id: 'j', expression: [(s: State) => s], next: { b: true, c: true }, @@ -165,26 +168,26 @@ test.serial(`notify ${NOTIFY_JOB_COMPLETE} with two nexts`, async (t) => { const context = createContext({ notify }); - await execute(context, job, state); + await execute(context, step, state); }); test.serial( - `don't notify ${NOTIFY_JOB_COMPLETE} for trigger-style jobs`, + `don't notify ${NOTIFY_JOB_COMPLETE} for trigger-style steps`, async (t) => { - const job = { + const step = { id: 'j', }; const state = createState(); const notify = (event: string) => { if (event === NOTIFY_JOB_COMPLETE) { - t.fail('should not notify job-start for trigger nodes'); + t.fail('should not notify step-start for trigger nodes'); } }; const context = createContext({ notify }); - await execute(context, job, state); + await execute(context, step, state); t.pass('all ok'); } ); @@ -195,7 +198,7 @@ test.serial( // 
Promises will trigger an exception if you try to serialize them // If we don't return finalState in execute/expression, this test will fail const resultState = { x: new Promise((r) => r), y: 22 }; - const job = { + const step = { id: 'j', expression: [() => resultState], }; @@ -212,12 +215,12 @@ test.serial( const context = createContext({ notify }); - await execute(context, job, state); + await execute(context, step, state); } ); test.serial(`notify ${NOTIFY_JOB_ERROR} for a fail`, async (t) => { - const job = { + const step = { id: 'j', expression: [ () => { @@ -247,33 +250,33 @@ test.serial(`notify ${NOTIFY_JOB_ERROR} for a fail`, async (t) => { const context = createContext({ notify }); - await execute(context, job, state); + await execute(context, step, state); }); test.serial('log duration of execution', async (t) => { - const job = { + const step = { id: 'y', expression: [(s: State) => s], }; const initialState = createState(); const context = createContext(); - await execute(context, job, initialState); + await execute(context, step, initialState); - const duration = logger._find('success', /completed job /i); + const duration = logger._find('success', /completed step /i); - t.regex(duration?.message, /completed job y in \d\d?ms/i); + t.regex(duration?.message, /completed step y in \d\d?ms/i); }); test.serial('log memory usage', async (t) => { - const job = { + const step = { id: 'z', expression: [(s: State) => s], }; const initialState = createState(); const context = createContext(); - await execute(context, job, initialState); + await execute(context, step, initialState); const memory = logger._find('debug', /final memory usage/i); @@ -282,8 +285,8 @@ test.serial('log memory usage', async (t) => { t.regex(memory?.message, /\d+mb(.+)\d+mb/i); }); -test.serial('warn if a non-leaf job does not return state', async (t) => { - const job = { +test.serial('warn if a non-leaf step does not return state', async (t) => { + const step = { id: 'k', expression: [(s: 
State) => {}], next: { l: true }, @@ -292,14 +295,14 @@ test.serial('warn if a non-leaf job does not return state', async (t) => { const context = createContext(); const state = createState(); - // @ts-ignore ts complains that the job does not return state - const result = await execute(context, job, state); + // @ts-ignore ts complains that the step does not return state + const result = await execute(context, step, state); const warn = logger._find('warn', /did not return a state object/); t.truthy(warn); }); -test.serial('do not warn if a leaf job does not return state', async (t) => { - const job = { +test.serial('do not warn if a leaf step does not return state', async (t) => { + const step = { id: 'k', expression: [(s: State) => {}], }; @@ -307,17 +310,17 @@ test.serial('do not warn if a leaf job does not return state', async (t) => { const context = createContext(); const state = createState(); - // @ts-ignore ts complains that the job does not return state - const result = await execute(context, job, state); + // @ts-ignore ts complains that the step does not return state + const result = await execute(context, step, state); const warn = logger._find('warn', /did not return a state object/); t.falsy(warn); }); test.serial( - 'do not warn a non-leaf job does not return state and there was an error', + 'do not warn a non-leaf step does not return state and there was an error', async (t) => { - const job = { + const step = { id: 'k', expression: [ (s: State) => { @@ -330,8 +333,8 @@ test.serial( const context = createContext(); const state = createState(); - // @ts-ignore ts complains that the job does not return state - const result = await execute(context, job, state); + // @ts-ignore ts complains that the step does not return state + const result = await execute(context, step, state); const warn = logger._find('warn', /did not return a state object/); t.falsy(warn); diff --git a/packages/runtime/test/memory.test.ts b/packages/runtime/test/memory.test.ts 
index 972482dbd..9dd3b83f7 100644 --- a/packages/runtime/test/memory.test.ts +++ b/packages/runtime/test/memory.test.ts @@ -4,12 +4,9 @@ * */ import test from 'ava'; +import type { ExecutionPlan } from '@openfn/lexicon'; -import { - ExecutionPlan, - NOTIFY_JOB_COMPLETE, - NotifyJobCompletePayload, -} from '../src'; +import { NOTIFY_JOB_COMPLETE, NotifyJobCompletePayload } from '../src'; import callRuntime from '../src/runtime'; /** @@ -52,19 +49,14 @@ const run = async (t, workflow: ExecutionPlan) => { } }; - const state = await callRuntime( - workflow, - {}, - { - strict: false, - callbacks: { notify }, - globals: { - process: { - memoryUsage: () => process.memoryUsage(), - }, + const state = await callRuntime(workflow, { + callbacks: { notify }, + globals: { + process: { + memoryUsage: () => process.memoryUsage(), }, - } - ); + }, + }); return { state, mem }; }; diff --git a/packages/runtime/test/runtime.test.ts b/packages/runtime/test/runtime.test.ts index d4bb1888d..e7f8af39e 100644 --- a/packages/runtime/test/runtime.test.ts +++ b/packages/runtime/test/runtime.test.ts @@ -1,8 +1,9 @@ import test from 'ava'; import path from 'node:path'; import { createMockLogger } from '@openfn/logger'; +import type { ExecutionPlan } from '@openfn/lexicon'; + import { - ExecutionPlan, NOTIFY_INIT_COMPLETE, NOTIFY_JOB_COMPLETE, NOTIFY_JOB_ERROR, @@ -11,6 +12,8 @@ import { } from '../src'; import run from '../src/runtime'; +type ExecutionPlanNoOptions = Omit; + test('run simple expression', async (t) => { const expression = 'export default [(s) => {s.data.done = true; return s}]'; @@ -19,10 +22,12 @@ test('run simple expression', async (t) => { }); test('run a simple workflow', async (t) => { - const plan: ExecutionPlan = { - jobs: [ - { expression: 'export default [(s) => ({ data: { done: true } })]' }, - ], + const plan: ExecutionPlanNoOptions = { + workflow: { + steps: [ + { expression: 'export default [(s) => ({ data: { done: true } })]' }, + ], + }, }; const result: any = 
await run(plan); @@ -42,8 +47,10 @@ test('run a workflow and notify major events', async (t) => { notify, }; - const plan: ExecutionPlan = { - jobs: [{ expression: 'export default [(s) => s]' }], + const plan: ExecutionPlanNoOptions = { + workflow: { + steps: [{ expression: 'export default [(s) => s]' }], + }, }; await run(plan, {}, { callbacks }); @@ -69,10 +76,12 @@ test('notify job error even after fail', async (t) => { notify, }; - const plan: ExecutionPlan = { - jobs: [ - { id: 'a', expression: 'export default [(s) => s.data.x = s.err.z ]' }, - ], + const plan: ExecutionPlanNoOptions = { + workflow: { + steps: [ + { id: 'a', expression: 'export default [(s) => s.data.x = s.err.z ]' }, + ], + }, }; await run(plan, {}, { callbacks }); @@ -93,8 +102,8 @@ test('notify job error even after crash', async (t) => { notify, }; - const plan: ExecutionPlan = { - jobs: [{ id: 'a', expression: 'export default [() => s]' }], + const plan: ExecutionPlanNoOptions = { + workflow: { steps: [{ id: 'a', expression: 'export default [() => s]' }] }, }; try { @@ -106,17 +115,18 @@ }); test('resolve a credential', async (t) => { - const plan: ExecutionPlan = { - jobs: [ - { - expression: 'export default [(s) => s]', - configuration: 'ccc', - }, - ], + const plan: Partial<ExecutionPlan> = { + workflow: { + steps: [ + { + expression: 'export default [(s) => s]', + configuration: 'ccc', + }, + ], + }, }; const options = { - strict: false, statePropsToRemove: [], callbacks: { resolveCredential: async () => ({ password: 'password1' }), @@ -129,13 +139,15 @@ }); test('resolve initial state', async (t) => { - const plan: ExecutionPlan = { - jobs: [ - { - expression: 'export default [(s) => s]', - state: 'abc', - }, - ], + const plan: ExecutionPlanNoOptions = { + workflow: { + steps: [ + { + expression: 'export default [(s) => s]', + state: 'abc', + }, + ], + }, }; const options = { @@ -162,11 +174,13 @@
test('run a workflow with two jobs and call callbacks', async (t) => { notify, }; - const plan: ExecutionPlan = { - jobs: [ - { id: 'a', expression: 'export default [(s) => s]', next: { b: true } }, - { id: 'b', expression: 'export default [(s) => s]' }, - ], + const plan: ExecutionPlanNoOptions = { + workflow: { + steps: [ + { id: 'a', expression: 'export default [(s) => s]', next: { b: true } }, + { id: 'b', expression: 'export default [(s) => s]' }, + ], + }, }; await run(plan, {}, { callbacks }); @@ -178,30 +192,34 @@ test('run a workflow with two jobs and call callbacks', async (t) => { }); test('run a workflow with state and parallel branching', async (t) => { - const plan: ExecutionPlan = { - jobs: [ - { - expression: - 'export default [(s) => { s.data.count += 1; s.data.a = true; return s}]', - next: { - b: true as const, - c: true as const, + const plan: ExecutionPlanNoOptions = { + workflow: { + steps: [ + { + expression: + 'export default [(s) => { s.data.count += 1; s.data.a = true; return s}]', + next: { + b: true as const, + c: true as const, + }, }, - }, - { - id: 'b', - expression: - 'export default [(s) => { s.data.count += 1; s.data.b = true; return s}]', - }, - { - id: 'c', - expression: - 'export default [(s) => { s.data.count += 1; s.data.c = true; return s}]', - }, - ], + { + id: 'b', + expression: + 'export default [(s) => { s.data.count += 1; s.data.b = true; return s}]', + }, + { + id: 'c', + expression: + 'export default [(s) => { s.data.count += 1; s.data.c = true; return s}]', + }, + ], + }, }; - const result: any = await run(plan, { data: { count: 0 } }); + const state = { data: { count: 0 } }; + + const result: any = await run(plan, state); t.deepEqual(result, { b: { data: { @@ -220,29 +238,33 @@ test('run a workflow with state and parallel branching', async (t) => { }); }); +// TODO this test sort of shows why input state on the plan object is a bit funky +// running the same plan with two inputs is pretty clunky test('run a workflow 
with state and conditional branching', async (t) => { - const plan: ExecutionPlan = { - jobs: [ - { - expression: 'export default [(s) => { s.data.a = true; return s}]', - next: { - b: { - condition: 'state.data.count > 0', - }, - c: { - condition: 'state.data.count == 0', + const plan: ExecutionPlanNoOptions = { + workflow: { + steps: [ + { + expression: 'export default [(s) => { s.data.a = true; return s}]', + next: { + b: { + condition: 'state.data.count > 0', + }, + c: { + condition: 'state.data.count == 0', + }, }, }, - }, - { - id: 'b', - expression: 'export default [(s) => { s.data.b = true; return s}]', - }, - { - id: 'c', - expression: 'export default [(s) => { s.data.c = true; return s}]', - }, - ], + { + id: 'b', + expression: 'export default [(s) => { s.data.b = true; return s}]', + }, + { + id: 'c', + expression: 'export default [(s) => { s.data.c = true; return s}]', + }, + ], + }, }; const result1: any = await run(plan, { data: { count: 10 } }); @@ -260,40 +282,48 @@ test('run a workflow with state and conditional branching', async (t) => { test('run a workflow with initial state (data key) and optional start', async (t) => { const plan: ExecutionPlan = { - jobs: [ - { - // won't run - id: 'a', - expression: 'export default [(s) => { s.data.count +=1 ; return s}]', - next: { b: true }, - }, - { - id: 'b', - expression: 'export default [(s) => { s.data.count +=1 ; return s}]', - next: { c: true }, - }, - { - id: 'c', - expression: 'export default [(s) => { s.data.count +=1 ; return s}]', - }, - ], + workflow: { + steps: [ + { + // won't run + id: 'a', + expression: 'export default [(s) => { s.data.count +=1 ; return s}]', + next: { b: true }, + }, + { + id: 'b', + expression: 'export default [(s) => { s.data.count +=1 ; return s}]', + next: { c: true }, + }, + { + id: 'c', + expression: 'export default [(s) => { s.data.count +=1 ; return s}]', + }, + ], + }, + options: { + start: 'b', + }, }; - const result: any = await run(plan, { data: { count: 10 } 
}, { start: 'b' }); + const result: any = await run(plan, { data: { count: 10 } }); t.is(result.data.count, 12); }); test('run a workflow with a trigger node', async (t) => { - const plan: ExecutionPlan = { - jobs: [ - { - next: { b: { condition: 'state.data.age > 18 ' } }, - }, - { - id: 'b', - expression: 'export default [(s) => { s.data.done = true ; return s}]', - }, - ], + const plan: ExecutionPlanNoOptions = { + workflow: { + steps: [ + { + next: { b: { condition: 'state.data.age > 18 ' } }, + }, + { + id: 'b', + expression: + 'export default [(s) => { s.data.done = true ; return s}]', + }, + ], + }, }; const result: any = await run(plan, { data: { age: 28 } }); @@ -301,18 +331,20 @@ test('run a workflow with a trigger node', async (t) => { }); test('prefer initial state to inline state', async (t) => { - const plan: ExecutionPlan = { - jobs: [ - { - state: { - data: { - x: 20, // this will be overriden by the incoming state - y: 20, // This will be untouched + const plan: ExecutionPlanNoOptions = { + workflow: { + steps: [ + { + state: { + data: { + x: 20, // this will be overridden by the incoming state + y: 20, // This will be untouched + }, }, + expression: 'export default [(s) => s]', }, - expression: 'export default [(s) => s]', - }, - ], + ], + }, }; const result: any = await run(plan, { data: { x: 40 } }); @@ -320,40 +352,11 @@ t.is(result.data.y, 20); }); -test('do not pass extraneous state in strict mode', async (t) => { - const plan: ExecutionPlan = { - jobs: [ - { - expression: 'export default [() => ({ x: 1, data: {}} )]', - }, - ], - }; - - const result: any = await run(plan, {}, { strict: true }); - t.deepEqual(result, { - data: {}, - }); -}); - -test('do pass extraneous state in non-strict mode', async (t) => { - const plan: ExecutionPlan = { - jobs: [ - { - expression: 'export default [() => ({ x: 1, data: {}} )]', - }, - ], - }; - - const result: any = await run(plan, {}, { strict:
false }); - t.deepEqual(result, { - x: 1, - data: {}, - }); -}); - test('Allow a job to return undefined', async (t) => { - const plan: ExecutionPlan = { - jobs: [{ expression: 'export default [() => {}]' }], + const plan: ExecutionPlanNoOptions = { + workflow: { + steps: [{ expression: 'export default [() => {}]' }], + }, }; const result: any = await run(plan); @@ -361,39 +364,43 @@ test('Allow a job to return undefined', async (t) => { }); test('log errors, write to state, and continue', async (t) => { - const plan: ExecutionPlan = { - jobs: [ - { - id: 'a', - expression: 'export default [() => { throw new Error("test") }]', - next: { b: true }, - }, - { - id: 'b', - expression: 'export default [(s) => { s.x = 1; return s; }]', - }, - ], + const plan: ExecutionPlanNoOptions = { + workflow: { + steps: [ + { + id: 'a', + expression: 'export default [() => { throw new Error("test") }]', + next: { b: true }, + }, + { + id: 'b', + expression: 'export default [(s) => { s.x = 1; return s; }]', + }, + ], + }, }; const logger = createMockLogger(); - const result: any = await run(plan, {}, { strict: false, logger }); + const result: any = await run(plan, {}, { logger }); t.is(result.x, 1); t.truthy(result.errors); t.is(result.errors.a.message, 'test'); t.is(result.errors.a.type, 'JobError'); - t.truthy(logger._find('error', /failed job a/i)); + t.truthy(logger._find('error', /failed step a/i)); }); test('log job code to the job logger', async (t) => { - const plan: ExecutionPlan = { - jobs: [ - { - id: 'a', - expression: 'export default [(s) => { console.log("hi"); return s;}]', - }, - ], + const plan: ExecutionPlanNoOptions = { + workflow: { + steps: [ + { + id: 'a', + expression: 'export default [(s) => { console.log("hi"); return s;}]', + }, + ], + }, }; const jobLogger = createMockLogger('JOB', { level: 'debug', json: true }); @@ -407,14 +414,16 @@ test('log job code to the job logger', async (t) => { }); test('log and serialize an error to the job logger', async (t) 
=> { - const plan: ExecutionPlan = { - jobs: [ - { - id: 'a', - expression: - 'export default [(s) => { console.log(new Error("hi")); return s;}]', - }, - ], + const plan: ExecutionPlanNoOptions = { + workflow: { + steps: [ + { + id: 'a', + expression: + 'export default [(s) => { console.log(new Error("hi")); return s;}]', + }, + ], + }, }; const jobLogger = createMockLogger('JOB', { level: 'debug', json: true }); @@ -432,41 +441,45 @@ test('log and serialize an error to the job logger', async (t) => { }); test('error reports can be overwritten', async (t) => { - const plan: ExecutionPlan = { - jobs: [ - { - id: 'a', - expression: 'export default [() => { throw new Error("test") }]', - next: { b: true }, - }, - { - id: 'b', - expression: 'export default [(s) => ({ errors: 22 })]', - }, - ], + const plan: ExecutionPlanNoOptions = { + workflow: { + steps: [ + { + id: 'a', + expression: 'export default [() => { throw new Error("test") }]', + next: { b: true }, + }, + { + id: 'b', + expression: 'export default [(s) => ({ errors: 22 })]', + }, + ], + }, }; const logger = createMockLogger(); - const result: any = await run(plan, {}, { strict: false, logger }); + const result: any = await run(plan, {}, { logger }); t.is(result.errors, 22); }); // This tracks current behaviour but I don't know if it's right test('stuff written to state before an error is preserved', async (t) => { - const plan: ExecutionPlan = { - jobs: [ - { - id: 'a', - data: { x: 0 }, - expression: - 'export default [(s) => { s.x = 1; throw new Error("test") }]', - }, - ], + const plan: ExecutionPlanNoOptions = { + workflow: { + steps: [ + { + id: 'a', + data: { x: 0 }, + expression: + 'export default [(s) => { s.x = 1; throw new Error("test") }]', + }, + ], + }, }; const logger = createMockLogger(); - const result: any = await run(plan, {}, { strict: false, logger }); + const result: any = await run(plan, {}, { logger }); t.is(result.x, 1); }); @@ -474,26 +487,28 @@ test('stuff written to state before 
an error is preserved', async (t) => { test('data can be an array (expression)', async (t) => { const expression = 'export default [() => ({ data: [1,2,3] })]'; - const result: any = await run(expression, {}, { strict: false }); + const result: any = await run(expression, {}, {}); t.deepEqual(result.data, [1, 2, 3]); }); test('data can be an array (workflow)', async (t) => { - const plan: ExecutionPlan = { - jobs: [ - { - id: 'a', - expression: 'export default [() => ({ data: [1,2,3] })]', - next: 'b', - }, - { - id: 'b', - expression: 'export default [(s) => s]', - }, - ], + const plan: ExecutionPlanNoOptions = { + workflow: { + steps: [ + { + id: 'a', + expression: 'export default [() => ({ data: [1,2,3] })]', + next: 'b', + }, + { + id: 'b', + expression: 'export default [(s) => s]', + }, + ], + }, }; - const result: any = await run(plan, {}, { strict: false }); + const result: any = await run(plan, {}, {}); t.deepEqual(result.data, [1, 2, 3]); }); diff --git a/packages/runtime/test/security.test.ts b/packages/runtime/test/security.test.ts index caa8f1dd0..57b9b6836 100644 --- a/packages/runtime/test/security.test.ts +++ b/packages/runtime/test/security.test.ts @@ -1,13 +1,9 @@ // a suite of tests with various security concerns in mind import test from 'ava'; -import doRun from '../src/runtime'; - import { createMockLogger } from '@openfn/logger'; -import { ExecutionPlan } from '../src/types'; +import type { ExecutionPlan, State } from '@openfn/lexicon'; -// Disable strict mode for all these tests -const run = (job: any, state?: any, options: any = {}) => - doRun(job, state, { ...options, strict: false }); +import run from '../src/runtime'; const logger = createMockLogger(undefined, { level: 'default' }); @@ -21,50 +17,32 @@ test.serial( const src = 'export default [(s) => s]'; const state = { - data: true, + data: {}, configuration: { password: 'secret', }, }; + const result: any = await run(src, state); - t.is(result.data, true); + t.deepEqual(result.data, 
{}); t.is(result.configuration, undefined); } ); -test.serial( - 'config should be scrubbed from the result state in strict mode', - async (t) => { - const src = 'export default [(s) => s]'; +test.serial('config should be scrubbed from the result state', async (t) => { + const src = 'export default [(s) => s]'; - const state = { - data: true, - configuration: { - password: 'secret', - }, - }; - const result: any = await run(src, state, { strict: true }); - t.is(result.data, true); - t.is(result.configuration, undefined); - } -); + const state = { + data: {}, + configuration: { + password: 'secret', + }, + }; -test.serial( - 'config should be scrubbed from the result state in non-strict mode', - async (t) => { - const src = 'export default [(s) => s]'; - - const state = { - data: true, - configuration: { - password: 'secret', - }, - }; - const result: any = await run(src, state, { strict: false }); - t.is(result.data, true); - t.is(result.configuration, undefined); - } -); + const result: any = await run(src, state, {}); + t.deepEqual(result.data, {}); + t.is(result.configuration, undefined); +}); test.serial( 'config should be scrubbed from the result state after error', @@ -72,14 +50,15 @@ test.serial( const src = 'export default [(s) => { throw "err" }]'; const state = { - data: true, + data: {}, configuration: { password: 'secret', }, }; - const result: any = await run(src, state, { strict: false }); + + const result: any = await run(src, state, {}); t.truthy(result.errors); - t.is(result.data, true); + t.deepEqual(result.data, {}); t.is(result.configuration, undefined); } ); @@ -99,14 +78,14 @@ test.serial('jobs should not have access to global scope', async (t) => { test.serial('jobs should be able to read global state', async (t) => { const src = 'export default [() => state.data.x]'; - const result: any = await run(src, { data: { x: 42 } }); // typings are a bit tricky + const result: any = await run(src, { data: { x: 42 } }); t.is(result, 42); }); 
test.serial('jobs should be able to mutate global state', async (t) => { const src = 'export default [() => { state.x = 22; return state.x; }]'; - const result: any = await run(src, { data: { x: 42 } }); // typings are a bit tricky + const result: any = await run(src, { data: { x: 42 } }); t.is(result, 22); }); @@ -198,20 +177,22 @@ test.serial( 'jobs in workflow cannot share data through globals (issue #213)', async (t) => { const plan: ExecutionPlan = { - jobs: [ - { - id: 'a', - expression: 'export default [s => { console.x = 10; return s; }]', - next: { - b: true, + workflow: { + steps: [ + { + id: 'a', + expression: 'export default [s => { console.x = 10; return s; }]', + next: { + b: true, + }, + }, + { + id: 'b', + expression: + 'export default [s => { s.data.x = console.x; return s; }]', }, - }, - { - id: 'b', - expression: - 'export default [s => { s.data.x = console.x; return s; }]', - }, - ], + ], + }, }; const result = await run(plan); diff --git a/packages/runtime/test/util/assemble-state.test.ts b/packages/runtime/test/util/assemble-state.test.ts index 8cb87fd8d..eac478b93 100644 --- a/packages/runtime/test/util/assemble-state.test.ts +++ b/packages/runtime/test/util/assemble-state.test.ts @@ -1,13 +1,11 @@ import test from 'ava'; import assembleState from '../../src/util/assemble-state'; -// TODO: what if iniitial state or data is not an object? -// Is this an error? Maybe just in strict mode? 
- test('with no arguments, returns a basic state object', (t) => { const initial = undefined; const defaultState = undefined; const config = undefined; + const result = assembleState(initial, config, defaultState); t.deepEqual(result, { configuration: {}, @@ -15,34 +13,12 @@ test('with no arguments, returns a basic state object', (t) => { }); }); -test('strict: ignores initial state', (t) => { +test('includes initial state', (t) => { const initial = { x: 22 }; const defaultState = undefined; const config = undefined; - const result = assembleState(initial, config, defaultState, true); - t.deepEqual(result, { - configuration: {}, - data: {}, - }); -}); -test('strict: ignores initial state except references', (t) => { - const initial = { references: [] }; - const defaultState = undefined; - const config = undefined; - const result = assembleState(initial, config, defaultState, true); - t.deepEqual(result, { - references: [], - configuration: {}, - data: {}, - }); -}); - -test('non-strict: includes initial state', (t) => { - const initial = { x: 22 }; - const defaultState = undefined; - const config = undefined; - const result = assembleState(initial, config, defaultState, false); + const result = assembleState(initial, config, defaultState); t.deepEqual(result, { x: 22, configuration: {}, @@ -55,18 +31,14 @@ test('merges default and initial data objects', (t) => { const defaultState = { data: { y: 1 } }; const config = undefined; - const strict = assembleState(initial, config, defaultState, true); - t.deepEqual(strict, { + const result = assembleState(initial, config, defaultState); + t.deepEqual(result, { configuration: {}, data: { x: 1, y: 1, }, }); - - // Ensure the same behaviour in non-strict mode - const nonStrict = assembleState(initial, config, defaultState, false); - t.deepEqual(strict, nonStrict); }); test('Initial data is prioritised over default data', (t) => { @@ -74,16 +46,13 @@ test('Initial data is prioritised over default data', (t) => { const 
defaultState = { data: { x: 2 } }; const config = undefined; - const strict = assembleState(initial, config, defaultState, true); - t.deepEqual(strict, { + const result = assembleState(initial, config, defaultState); + t.deepEqual(result, { configuration: {}, data: { x: 1, }, }); - - const nonStrict = assembleState(initial, config, defaultState, false); - t.deepEqual(strict, nonStrict); }); test('Initial data does not have to be an object', (t) => { @@ -91,16 +60,11 @@ test('Initial data does not have to be an object', (t) => { const defaultState = { data: {} }; const config = undefined; - const strict = assembleState(initial, config, defaultState, true); - t.deepEqual(strict, { + const result = assembleState(initial, config, defaultState); + t.deepEqual(result, { configuration: {}, data: [1], }); - - // At this point I don't want any special handling for strict mode, - // see https://github.com/OpenFn/kit/issues/233 - const nonStrict = assembleState(initial, config, defaultState, false); - t.deepEqual(strict, nonStrict); }); test('merges default and initial config objects', (t) => { @@ -108,18 +72,14 @@ test('merges default and initial config objects', (t) => { const defaultState = undefined; const config = { y: 1 }; - const strict = assembleState(initial, config, defaultState, true); - t.deepEqual(strict, { + const result = assembleState(initial, config, defaultState); + t.deepEqual(result, { configuration: { x: 1, y: 1, }, data: {}, }); - - // Ensure the same behaviour in non-strict mode - const nonStrict = assembleState(initial, config, defaultState, false); - t.deepEqual(strict, nonStrict); }); test('configuration overrides initialState.configuration', (t) => { @@ -127,15 +87,11 @@ test('configuration overrides initialState.configuration', (t) => { const defaultState = undefined; const config = { x: 2 }; - const strict = assembleState(initial, config, defaultState, true); - t.deepEqual(strict, { + const result = assembleState(initial, config, defaultState); + 
t.deepEqual(result, { configuration: { x: 2, }, data: {}, }); - - // Ensure the same behaviour in non-strict mode - const nonStrict = assembleState(initial, config, defaultState, false); - t.deepEqual(strict, nonStrict); }); diff --git a/packages/runtime/test/util/regex.ts b/packages/runtime/test/util/regex.test.ts similarity index 100% rename from packages/runtime/test/util/regex.ts rename to packages/runtime/test/util/regex.test.ts diff --git a/packages/runtime/test/util/validate-plan.test.ts b/packages/runtime/test/util/validate-plan.test.ts index 451940703..1f0858d06 100644 --- a/packages/runtime/test/util/validate-plan.test.ts +++ b/packages/runtime/test/util/validate-plan.test.ts @@ -1,19 +1,21 @@ import test from 'ava'; -import { ExecutionPlan } from '../../src'; +import type { ExecutionPlan, Job } from '@openfn/lexicon'; import validate, { buildModel } from '../../src/util/validate-plan'; +const job = (id: string, next?: Record<string, boolean>) => + ({ + id, + next, + expression: '.', + } as Job); + test('builds a simple model', (t) => { const plan: ExecutionPlan = { - jobs: [ - { - id: 'a', - next: { b: true }, - }, - { - id: 'b', - }, - ], + options: {}, + workflow: { + steps: [job('a', { b: true }), job('b')], + }, }; const model = buildModel(plan); @@ -31,17 +33,10 @@ test('builds a more complex model', (t) => { const plan: ExecutionPlan = { - jobs: [ - { - id: 'a', - next: { b: true }, - }, - { - id: 'b', - next: { c: true, a: true }, - }, - { id: 'c' }, - ], + options: {}, + workflow: { + steps: [job('a', { b: true }), job('b', { c: true, a: true }), job('c')], + }, }; const model = buildModel(plan); @@ -63,16 +58,10 @@ test('throws for a circular dependency', (t) => { const plan: ExecutionPlan = { - jobs: [ - { - id: 'a', - next: { b: true }, - }, - { - id: 'b', - next: { a: true }, - }, - ], + options: {}, + workflow: { + steps: [job('a', { b: true }), job('b', { a: true })], + },
}; t.throws(() => validate(plan), { @@ -82,20 +71,14 @@ test('throws for a circular dependency', (t) => { test('throws for an indirect circular dependency', (t) => { const plan: ExecutionPlan = { - jobs: [ - { - id: 'a', - next: { b: true }, - }, - { - id: 'b', - next: { c: true }, - }, - { - id: 'c', - next: { a: true }, - }, - ], + options: {}, + workflow: { + steps: [ + job('a', { b: true }), + job('b', { c: true }), + job('c', { a: true }), + ], + }, }; t.throws(() => validate(plan), { @@ -105,22 +88,17 @@ test('throws for an indirect circular dependency', (t) => { test('throws for a multiple inputs', (t) => { const plan: ExecutionPlan = { - jobs: [ - { - id: 'a', - next: { b: true, c: true }, - }, - { - id: 'b', - next: { z: true }, - }, - { - id: 'c', - next: { z: true }, - }, - { id: 'z' }, - ], + options: {}, + workflow: { + steps: [ + job('a', { b: true, c: true }), + job('b', { z: true }), + job('c', { z: true }), + job('z'), + ], + }, }; + t.throws(() => validate(plan), { message: 'Multiple dependencies detected for: z', }); @@ -128,12 +106,12 @@ test('throws for a multiple inputs', (t) => { test('throws for a an unknown job', (t) => { const plan: ExecutionPlan = { - jobs: [ - { - next: { z: true }, - }, - ], + options: {}, + workflow: { + steps: [job('next', { z: true })], + }, }; + t.throws(() => validate(plan), { message: 'Cannot find job: z', }); @@ -141,11 +119,15 @@ test('throws for a an unknown job', (t) => { test('throws for a an unknown job with shorthand syntax', (t) => { const plan: ExecutionPlan = { - jobs: [ - { - next: 'z', - }, - ], + options: {}, + workflow: { + steps: [ + { + next: 'z', + expression: '.', + }, + ], + }, }; t.throws(() => validate(plan), { message: 'Cannot find job: z', @@ -154,9 +136,14 @@ test('throws for a an unknown job with shorthand syntax', (t) => { test('throws for invalid string start', (t) => { const plan: ExecutionPlan = { - start: 'z', - jobs: [{ id: 'a' }], + options: { + start: 'z', + }, + workflow: { + 
steps: [job('a')], + }, }; + t.throws(() => validate(plan), { message: 'Could not find start job: z', }); diff --git a/packages/ws-worker/CHANGELOG.md b/packages/ws-worker/CHANGELOG.md index 9fda2dddf..d717a45cd 100644 --- a/packages/ws-worker/CHANGELOG.md +++ b/packages/ws-worker/CHANGELOG.md @@ -1,5 +1,25 @@ # ws-worker +## 1.0.0 + +### Major Changes + +- 86dd668: The 1.0 release updates the language and input of the Worker to match the nomenclature of Lightning. + +### Minor Changes + +- 29bff41: Validate the run token + +### Patch Changes + +- a97eb26: Better error handling for invalid dataclips +- 823b471: Update handling of logs to accept stringified messages +- Updated dependencies + - @openfn/engine-multi@1.0.0 + - @openfn/logger@1.0.0 + - @openfn/runtime@1.0.0 + - @openfn/lexicon@1.0.0 + ## 0.8.1 ### Patch Changes diff --git a/packages/ws-worker/package.json b/packages/ws-worker/package.json index 0872384f1..82eb2be59 100644 --- a/packages/ws-worker/package.json +++ b/packages/ws-worker/package.json @@ -1,6 +1,6 @@ { "name": "@openfn/ws-worker", - "version": "0.8.1", + "version": "1.0.0", "description": "A Websocket Worker to connect Lightning to a Runtime Engine", "main": "dist/index.js", "type": "module", @@ -22,6 +22,7 @@ "dependencies": { "@koa/router": "^12.0.0", "@openfn/engine-multi": "workspace:*", + "@openfn/lexicon": "workspace:^", "@openfn/logger": "workspace:*", "@openfn/runtime": "workspace:*", "@types/koa-logger": "^3.1.2", @@ -33,7 +34,7 @@ "koa": "^2.13.4", "koa-bodyparser": "^4.4.0", "koa-logger": "^3.2.1", - "phoenix": "^1.7.7", + "phoenix": "1.7.10", "ws": "^8.14.1" }, "devDependencies": { diff --git a/packages/ws-worker/src/api/claim.ts b/packages/ws-worker/src/api/claim.ts index 18d2e68d0..db4912984 100644 --- a/packages/ws-worker/src/api/claim.ts +++ b/packages/ws-worker/src/api/claim.ts @@ -1,13 +1,38 @@ +import crypto from 'node:crypto'; +import * as jose from 'jose'; import { Logger, createMockLogger } from '@openfn/logger'; 
-import { CLAIM, ClaimPayload, ClaimReply } from '../events'; +import { ClaimPayload, ClaimReply } from '@openfn/lexicon/lightning'; + +import { CLAIM } from '../events'; import type { ServerApp } from '../server'; const mockLogger = createMockLogger(); -// TODO: this needs standalone unit tests now that it's bene moved -const claim = (app: ServerApp, logger: Logger = mockLogger, maxWorkers = 5) => { +const verifyToken = async (token: string, publicKey: string) => { + const key = crypto.createPublicKey(publicKey); + + const { payload } = await jose.jwtVerify(token, key, { + issuer: 'Lightning', + }); + + if (payload) { + return true; + } +}; + +type ClaimOptions = { + maxWorkers?: number; +}; + +const claim = ( + app: ServerApp, + logger: Logger = mockLogger, + options: ClaimOptions = {} +) => { return new Promise((resolve, reject) => { + const { maxWorkers = 5 } = options; + const activeWorkers = Object.keys(app.workflows).length; if (activeWorkers >= maxWorkers) { return reject(new Error('Server at capacity')); @@ -30,7 +55,22 @@ const claim = (app: ServerApp, logger: Logger = mockLogger, maxWorkers = 5) => { return reject(new Error('No runs returned')); } - runs.forEach((run) => { + runs.forEach(async (run) => { + if (app.options?.runPublicKey) { + try { + await verifyToken(run.token, app.options.runPublicKey); + logger.debug('verified run token for', run.id); + } catch (e) { + logger.error('Error validating run token'); + logger.error(e); + reject(); + app.destroy(); + return; + } + } else { + logger.debug('skipping run token validation for', run.id); + } + logger.debug('starting run', run.id); app.execute(run); resolve(); diff --git a/packages/ws-worker/src/api/destroy.ts b/packages/ws-worker/src/api/destroy.ts index 1b9b3b1d9..f5102bedf 100644 --- a/packages/ws-worker/src/api/destroy.ts +++ b/packages/ws-worker/src/api/destroy.ts @@ -30,6 +30,8 @@ const destroy = async (app: ServerApp, logger: Logger) => { await app.engine.destroy(); app.socket?.disconnect(); 
+ logger.info('Server closed....'); + resolve(); }), ]); @@ -41,9 +43,7 @@ const waitForRuns = (app: ServerApp, logger: Logger) => new Promise((resolve) => { const log = () => { logger.debug( - `Waiting for ${ - Object.keys(app.workflows).length - } runs to complete...` + `Waiting for ${Object.keys(app.workflows).length} runs to complete...` ); }; @@ -61,6 +61,7 @@ const waitForRuns = (app: ServerApp, logger: Logger) => log(); app.events.on(INTERNAL_RUN_COMPLETE, onRunComplete); } else { + logger.debug('No active rns detected'); resolve(); } }); diff --git a/packages/ws-worker/src/api/execute.ts b/packages/ws-worker/src/api/execute.ts index 9ce817bf4..c35ea2ca9 100644 --- a/packages/ws-worker/src/api/execute.ts +++ b/packages/ws-worker/src/api/execute.ts @@ -1,32 +1,33 @@ +import type { ExecutionPlan, Lazy, State } from '@openfn/lexicon'; +import type { RunLogPayload, RunStartPayload } from '@openfn/lexicon/lightning'; +import type { Logger } from '@openfn/logger'; +import type { + RuntimeEngine, + Resolvers, + WorkflowStartPayload, +} from '@openfn/engine-multi'; + +import { + getWithReply, + createRunState, + throttle as createThrottle, +} from '../util'; import { RUN_COMPLETE, RUN_LOG, - RunLogPayload, RUN_START, - RunStartPayload, - GET_CREDENTIAL, GET_DATACLIP, STEP_COMPLETE, STEP_START, + GET_CREDENTIAL, } from '../events'; -import { - getWithReply, - createRunState, - throttle as createThrottle, -} from '../util'; import handleStepComplete from '../events/step-complete'; import handleStepStart from '../events/step-start'; import handleRunComplete from '../events/run-complete'; import handleRunError from '../events/run-error'; -import type { RunOptions, Channel, RunState, JSONLog } from '../types'; -import type { Logger } from '@openfn/logger'; -import type { - RuntimeEngine, - Resolvers, - WorkflowStartPayload, -} from '@openfn/engine-multi'; -import type { ExecutionPlan } from '@openfn/runtime'; +import type { Channel, RunState, JSONLog } from '../types'; 
+import { WorkerRunOptions } from '../util/convert-lightning-plan'; const enc = new TextDecoder('utf-8'); @@ -37,6 +38,7 @@ export type Context = { state: RunState; logger: Logger; engine: RuntimeEngine; + options: WorkerRunOptions; onFinish: (result: any) => void; // maybe its better for version numbers to be scribbled here as we go? @@ -58,14 +60,22 @@ export function execute( engine: RuntimeEngine, logger: Logger, plan: ExecutionPlan, - options: RunOptions = {}, + input: Lazy, + options: WorkerRunOptions = {}, onFinish = (_result: any) => {} ) { logger.info('executing ', plan.id); - const state = createRunState(plan, options); + const state = createRunState(plan, input); - const context: Context = { channel, state, logger, engine, onFinish }; + const context: Context = { + channel, + state, + logger, + engine, + options, + onFinish, + }; const throttle = createThrottle(); @@ -125,33 +135,43 @@ export function execute( // dataclip: (id: string) => loadDataclip(channel, id), } as Resolvers; - Promise.resolve() + setTimeout(async () => { + let loadedInput = input; + // Optionally resolve initial state - .then(async () => { - // TODO we need to remove this from here and let the runtime take care of it through - // the resolver. See https://github.com/OpenFn/kit/issues/403 - if (typeof plan.initialState === 'string') { - logger.debug('loading dataclip', plan.initialState); - plan.initialState = await loadDataclip(channel, plan.initialState); - logger.success('dataclip loaded'); - logger.debug(plan.initialState); - } - return plan; - }) - // Execute (which we have to wrap in a promise chain to handle initial state) - .then(() => { + // TODO we need to remove this from here and let the runtime take care of it through + // the resolver. 
See https://github.com/OpenFn/kit/issues/403 + // TODO come back and work out how initial state will work + if (typeof input === 'string') { + logger.debug('loading dataclip', input); + try { - engine.execute(plan, { resolvers, ...options }); + loadedInput = await loadDataclip(channel, input); + logger.success('dataclip loaded'); } catch (e: any) { - // TODO what if there's an error? - handleRunError(context, { + // abort with error + return handleRunError(context, { workflowId: plan.id!, - message: e.message, - type: e.type, - severity: e.severity, + message: `Failed to load dataclip ${input}${ + e.message ? `: ${e.message}` : '' + }`, + type: 'DataClipError', + severity: 'exception', }); } - }); + } + + try { + engine.execute(plan, loadedInput as State, { resolvers, ...options }); + } catch (e: any) { + handleRunError(context, { + workflowId: plan.id!, + message: e.message, + type: e.type, + severity: e.severity, + }); + } + }); return context; } diff --git a/packages/ws-worker/src/api/reasons.ts b/packages/ws-worker/src/api/reasons.ts index 73fbd0661..37411a020 100644 --- a/packages/ws-worker/src/api/reasons.ts +++ b/packages/ws-worker/src/api/reasons.ts @@ -1,11 +1,6 @@ -import type { - ExitReason, - ExitReasonStrings, - State, - RunState, -} from '../types'; - -import type { JobNode } from '@openfn/runtime'; +import { State, Step } from '@openfn/lexicon'; +import { ExitReason, ExitReasonStrings } from '@openfn/lexicon/lightning'; +import type { RunState } from '../types'; // This takes the result state and error from the job const calculateJobExitReason = ( @@ -30,7 +25,7 @@ const calculateJobExitReason = ( }; // It has next jobs, but they weren't executed -const isLeafNode = (state: RunState, job: JobNode) => { +const isLeafNode = (state: RunState, job: Step) => { // A node is a leaf if: // It has no `next` jobs at all if (!job.next || Object.keys(job.next).length == 0) { @@ -47,11 +42,11 @@ const calculateRunExitReason = (state: RunState): ExitReason => { // 
basically becomes the exit reason // So If we get here, we basically just need to look to see if there's a fail on a leaf node // (we ignore fails on non-leaf nodes) - const leafJobReasons: ExitReason[] = state.plan.jobs - .filter((job: JobNode) => isLeafNode(state, job)) + const leafJobReasons: ExitReason[] = state.plan.workflow.steps + .filter((job) => isLeafNode(state, job)) // TODO what if somehow there is no exit reason for a job? // This implies some kind of exception error, no? - .map(({ id }: JobNode) => state.reasons[id!]); + .map(({ id }) => state.reasons[id!]); const fail = leafJobReasons.find((r) => r && r.reason === 'fail'); if (fail) { diff --git a/packages/ws-worker/src/api/workloop.ts b/packages/ws-worker/src/api/workloop.ts index aadf0469d..ea7465826 100644 --- a/packages/ws-worker/src/api/workloop.ts +++ b/packages/ws-worker/src/api/workloop.ts @@ -17,10 +17,16 @@ const startWorkloop = ( const workLoop = () => { if (!cancelled) { - promise = tryWithBackoff(() => claim(app, logger, maxWorkers), { - min: minBackoff, - max: maxBackoff, - }); + promise = tryWithBackoff( + () => + claim(app, logger, { + maxWorkers, + }), + { + min: minBackoff, + max: maxBackoff, + } + ); // TODO this needs more unit tests I think promise.then(() => { if (!cancelled) { diff --git a/packages/ws-worker/src/channels/run.ts b/packages/ws-worker/src/channels/run.ts index f7a1dffdd..8a505cf8a 100644 --- a/packages/ws-worker/src/channels/run.ts +++ b/packages/ws-worker/src/channels/run.ts @@ -1,11 +1,12 @@ -import convertRun from '../util/convert-run'; -import { getWithReply } from '../util'; -import { Run, RunOptions, Channel, Socket } from '../types'; -import { ExecutionPlan } from '@openfn/runtime'; -import { GET_PLAN, GetPlanReply } from '../events'; - +import type { ExecutionPlan, Lazy, State } from '@openfn/lexicon'; +import type { GetPlanReply, LightningPlan } from '@openfn/lexicon/lightning'; import type { Logger } from '@openfn/logger'; +import { getWithReply } from 
'../util'; +import convertRun, { WorkerRunOptions } from '../util/convert-lightning-plan'; +import { GET_PLAN } from '../events'; +import type { Channel, Socket } from '../types'; + // TODO what happens if this channel join fails? // Lightning could vanish, channel could error on its side, or auth could be wrong // We don't have a good feedback mechanism yet - worker:queue is the only channel @@ -20,7 +21,8 @@ const joinRunChannel = ( return new Promise<{ channel: Channel; plan: ExecutionPlan; - options: RunOptions; + options: WorkerRunOptions; + input: Lazy; }>((resolve, reject) => { // TMP - lightning seems to be sending two responses to me // just for now, I'm gonna gate the handling here @@ -36,9 +38,9 @@ const joinRunChannel = ( if (!didReceiveOk) { didReceiveOk = true; logger.success(`connected to ${channelName}`, e); - const { plan, options } = await loadRun(channel); + const { plan, options, input } = await loadRun(channel); logger.debug('converted run as execution plan:', plan); - resolve({ channel, plan, options }); + resolve({ channel, plan, options, input }); } }) .receive('error', (err: any) => { @@ -54,5 +56,5 @@ export async function loadRun(channel: Channel) { // first we get the run body through the socket const runBody = await getWithReply(channel, GET_PLAN); // then we generate the execution plan - return convertRun(runBody as Run); + return convertRun(runBody as LightningPlan); } diff --git a/packages/ws-worker/src/channels/worker-queue.ts b/packages/ws-worker/src/channels/worker-queue.ts index c961c0906..9ec76e659 100644 --- a/packages/ws-worker/src/channels/worker-queue.ts +++ b/packages/ws-worker/src/channels/worker-queue.ts @@ -1,7 +1,7 @@ import EventEmitter from 'node:events'; import { Socket as PhxSocket } from 'phoenix'; import { WebSocket } from 'ws'; - +import { API_VERSION } from '@openfn/lexicon/lightning'; import generateWorkerToken from '../util/worker-token'; import type { Logger } from '@openfn/logger'; @@ -16,10 +16,20 @@ const 
connectToWorkerQueue = ( ) => { const events = new EventEmitter(); - generateWorkerToken(secret, serverId, logger).then((token) => { + generateWorkerToken(secret, serverId, logger).then(async (token) => { + const pkg = await import('../../package.json', { + assert: { type: 'json' }, + }); + + const params = { + token, + api_version: API_VERSION, + worker_version: pkg.default.version, + }; + // @ts-ignore ts doesn't like the constructor here at all const socket = new SocketConstructor(endpoint, { - params: { token }, + params, transport: WebSocket, }); diff --git a/packages/ws-worker/src/events.ts b/packages/ws-worker/src/events.ts index 89cdeffca..fc157d5d8 100644 --- a/packages/ws-worker/src/events.ts +++ b/packages/ws-worker/src/events.ts @@ -1,69 +1,43 @@ -import { Run, ExitReason } from './types'; +import * as l from '@openfn/lexicon/lightning'; // These are worker-lightning events, used in the websocket - export const CLAIM = 'claim'; - -export type ClaimPayload = { demand?: number }; -export type ClaimReply = { runs: Array }; -export type ClaimRun = { id: string; token: string }; - export const GET_PLAN = 'fetch:plan'; -export type GetPlanPayload = void; // no payload -export type GetPlanReply = Run; - -export const GET_CREDENTIAL = 'fetch:credential'; -export type GetCredentialPayload = { id: string }; -// credential in-line, no wrapper, arbitrary data -export type GetCredentialReply = {}; - export const GET_DATACLIP = 'fetch:dataclip'; -export type GetDataclipPayload = { id: string }; -export type GetDataClipReply = Uint8Array; // represents a json string Run - -export const RUN_START = 'run:start'; // runId, timestamp -export type RunStartPayload = void; // no payload -export type RunStartReply = {}; // no payload +export const GET_CREDENTIAL = 'fetch:credential'; +export const RUN_START = 'run:start'; +export const RUN_COMPLETE = 'run:complete'; +export const RUN_LOG = 'run:log'; +export const STEP_START = 'step:start'; +export const STEP_COMPLETE = 
'step:complete'; +export const INTERNAL_RUN_COMPLETE = 'server:run-complete'; -export const RUN_COMPLETE = 'run:complete'; // runId, timestamp, result, stats -export type RunCompletePayload = ExitReason & { - final_dataclip_id?: string; // TODO this will be removed soon +export type QueueEvents = { + [CLAIM]: l.ClaimPayload; }; -export type RunCompleteReply = undefined; -export const RUN_LOG = 'run:log'; // level, namespace (job,runtime,adaptor), message, time -export type RunLogPayload = { - message: Array; - timestamp: string; - run_id: string; - level?: string; - source?: string; // namespace - job_id?: string; - step_id?: string; +export type QueueEventReplies = { + [CLAIM]: l.ClaimReply; }; -export type RunLogReply = void; -export const STEP_START = 'step:start'; -export type StepStartPayload = { - job_id: string; - step_id: string; - run_id?: string; - input_dataclip_id?: string; - versions: Record; +export type RunEvents = { + [GET_PLAN]: l.GetPlanPayload; + [GET_CREDENTIAL]: l.GetCredentialPayload; + [GET_DATACLIP]: l.GetDataclipPayload; + [RUN_START]: l.RunStartPayload; + [RUN_COMPLETE]: l.RunCompletePayload; + [RUN_LOG]: l.RunLogPayload; + [STEP_START]: l.StepStartPayload; + [STEP_COMPLETE]: l.StepCompletePayload; }; -export type StepStartReply = void; -export const STEP_COMPLETE = 'step:complete'; -export type StepCompletePayload = ExitReason & { - run_id?: string; - job_id: string; - step_id: string; - output_dataclip?: string; - output_dataclip_id?: string; +export type RunReplies = { + [GET_PLAN]: l.GetPlanReply; + [GET_CREDENTIAL]: l.GetCredentialReply; + [GET_DATACLIP]: l.GetDataClipReply; + [RUN_START]: l.RunStartReply; + [RUN_COMPLETE]: l.RunCompleteReply; + [RUN_LOG]: l.RunLogReply; + [STEP_START]: l.StepStartReply; + [STEP_COMPLETE]: l.StepCompleteReply; }; -export type StepCompleteReply = void; - -// These are internal server events -// Explicitly (and awkwardly) namespaced to avoid confusion - -export const INTERNAL_RUN_COMPLETE = 
'server:run-complete'; diff --git a/packages/ws-worker/src/events/run-complete.ts b/packages/ws-worker/src/events/run-complete.ts index 1554fb569..75c52f351 100644 --- a/packages/ws-worker/src/events/run-complete.ts +++ b/packages/ws-worker/src/events/run-complete.ts @@ -1,6 +1,7 @@ import type { WorkflowCompletePayload } from '@openfn/engine-multi'; +import type { RunCompletePayload } from '@openfn/lexicon/lightning'; -import { RUN_COMPLETE, RunCompletePayload } from '../events'; +import { RUN_COMPLETE } from '../events'; import { calculateRunExitReason } from '../api/reasons'; import { sendEvent, Context } from '../api/execute'; import logFinalReason from '../util/log-final-reason'; diff --git a/packages/ws-worker/src/events/run-error.ts b/packages/ws-worker/src/events/run-error.ts index 7f8375c64..aba0ae5ab 100644 --- a/packages/ws-worker/src/events/run-error.ts +++ b/packages/ws-worker/src/events/run-error.ts @@ -1,8 +1,8 @@ -import { calculateJobExitReason } from '../api/reasons'; - +import type { RunCompletePayload } from '@openfn/lexicon/lightning'; import type { WorkflowErrorPayload } from '@openfn/engine-multi'; -import { RUN_COMPLETE, RunCompletePayload } from '../events'; +import { calculateJobExitReason } from '../api/reasons'; +import { RUN_COMPLETE } from '../events'; import { sendEvent, Context, onJobError } from '../api/execute'; import logFinalReason from '../util/log-final-reason'; @@ -29,7 +29,7 @@ export default async function onRunError( onFinish({ reason }); } catch (e: any) { - logger.error('ERROR in workflow-error handler:', e.message); + logger.error('ERROR in run-error handler:', e.message); logger.error(e); onFinish({}); diff --git a/packages/ws-worker/src/events/step-complete.ts b/packages/ws-worker/src/events/step-complete.ts index 5400dc897..a542a5944 100644 --- a/packages/ws-worker/src/events/step-complete.ts +++ b/packages/ws-worker/src/events/step-complete.ts @@ -1,14 +1,14 @@ import crypto from 'node:crypto'; +import type { 
StepCompletePayload } from '@openfn/lexicon/lightning'; +import type { JobCompletePayload } from '@openfn/engine-multi'; -import { STEP_COMPLETE, StepCompletePayload } from '../events'; +import { STEP_COMPLETE } from '../events'; import { stringify } from '../util'; import { calculateJobExitReason } from '../api/reasons'; import { sendEvent, Context } from '../api/execute'; -import type { JobCompletePayload } from '@openfn/engine-multi'; - export default function onStepComplete( - { channel, state }: Context, + { channel, state, options }: Context, event: JobCompletePayload, // TODO this isn't terribly graceful, but accept an error for crashes error?: any @@ -52,7 +52,6 @@ export default function onStepComplete( step_id, job_id, output_dataclip_id: dataclipId, - output_dataclip: stringify(outputState), reason, error_message, @@ -61,6 +60,11 @@ export default function onStepComplete( mem: event.mem, duration: event.duration, thread_id: event.threadId, - }; + } as StepCompletePayload; + + if (!options || options.outputDataclips !== false) { + evt.output_dataclip = stringify(outputState); + } + return sendEvent(channel, STEP_COMPLETE, evt); } diff --git a/packages/ws-worker/src/events/step-start.ts b/packages/ws-worker/src/events/step-start.ts index 9703fb0e5..561652431 100644 --- a/packages/ws-worker/src/events/step-start.ts +++ b/packages/ws-worker/src/events/step-start.ts @@ -1,9 +1,11 @@ import crypto from 'node:crypto'; -import { JobStartPayload } from '@openfn/engine-multi'; import { timestamp } from '@openfn/logger'; +import { JobStartPayload } from '@openfn/engine-multi'; +import type { Job } from '@openfn/lexicon'; +import type { StepStartPayload } from '@openfn/lexicon/lightning'; import pkg from '../../package.json' assert { type: 'json' }; -import { STEP_START, StepStartPayload } from '../events'; +import { STEP_START } from '../events'; import { sendEvent, Context, onJobLog } from '../api/execute'; import calculateVersionString from '../util/versions'; @@ 
-20,7 +22,9 @@ export default async function onStepStart( state.activeStep = crypto.randomUUID(); state.activeJob = event.jobId; - const job = state.plan.jobs.find(({ id }) => id === event.jobId); + const job = state.plan.workflow.steps.find( + ({ id }) => id === event.jobId + ) as Job; const input_dataclip_id = state.inputDataclips[event.jobId]; diff --git a/packages/ws-worker/src/mock/resolvers.ts b/packages/ws-worker/src/mock/resolvers.ts index 489107e95..25ad81559 100644 --- a/packages/ws-worker/src/mock/resolvers.ts +++ b/packages/ws-worker/src/mock/resolvers.ts @@ -1,4 +1,5 @@ -import type { State, Credential } from '../types'; +import type { State } from '@openfn/lexicon'; +import type { Credential } from '@openfn/lexicon/lightning'; import { Resolvers } from '@openfn/engine-multi'; const mockResolveCredential = (_credId: string) => diff --git a/packages/ws-worker/src/mock/runtime-engine.ts b/packages/ws-worker/src/mock/runtime-engine.ts index b8f2741e5..f96541056 100644 --- a/packages/ws-worker/src/mock/runtime-engine.ts +++ b/packages/ws-worker/src/mock/runtime-engine.ts @@ -1,9 +1,11 @@ import { EventEmitter } from 'node:events'; import crypto from 'node:crypto'; -import run, { ExecutionPlan } from '@openfn/runtime'; +import run from '@openfn/runtime'; import * as engine from '@openfn/engine-multi'; +import type { ExecutionPlan, Job, State } from '@openfn/lexicon'; import mockResolvers from './resolvers'; +import { RuntimeEngine } from '@openfn/engine-multi'; export type EngineEvent = | typeof engine.JOB_COMPLETE @@ -13,23 +15,6 @@ export type EngineEvent = | typeof engine.WORKFLOW_LOG | typeof engine.WORKFLOW_START; -export type WorkflowStartEvent = { - workflowId: string; - threadId: string; -}; - -export type WorkflowCompleteEvent = { - workflowId: string; - error?: any; // hmm maybe not - threadId: string; -}; - -export type WorkflowErrorEvent = { - workflowId: string; - threadId: string; - message: string; -}; - // this is basically a fake adaptor // 
these functions will be injected into scope const helpers = { @@ -75,16 +60,19 @@ async function createMock() { const execute = async ( xplan: ExecutionPlan, + input: State, options: { resolvers?: engine.Resolvers; throw?: boolean } = { resolvers: mockResolvers, } ) => { - const { id, jobs } = xplan; + const { id } = xplan; + const { steps } = xplan.workflow; activeWorkflows[id!] = true; const threadId = crypto.randomUUID(); - for (const job of jobs) { + for (const step of steps) { + const job = step as Job; if (typeof job.configuration === 'string') { // Call the crendtial callback, but don't do anything with it job.configuration = await options.resolvers?.credential?.( @@ -134,7 +122,7 @@ async function createMock() { dispatch('workflow-start', { workflowId: id, threadId: threadId }); try { - await run(xplan, undefined, opts as any); + await run(xplan, input, opts as any); dispatch('workflow-complete', { workflowId: id, threadId: threadId }); } catch (e: any) { dispatch('workflow-error', { @@ -168,7 +156,7 @@ async function createMock() { getStatus, listen, destroy, - }; + } as unknown as RuntimeEngine; } export default createMock; diff --git a/packages/ws-worker/src/mock/sockets.ts b/packages/ws-worker/src/mock/sockets.ts index 942e7c7f7..172d0b6a5 100644 --- a/packages/ws-worker/src/mock/sockets.ts +++ b/packages/ws-worker/src/mock/sockets.ts @@ -1,7 +1,9 @@ type EventHandler = (evt?: any) => void; // Mock websocket implementations -export const mockChannel = (callbacks: Record = {}) => { +export const mockChannel = ( + callbacks: Record = {} +): any => { const c = { on: (event: string, fn: EventHandler) => { // TODO support multiple callbacks @@ -61,6 +63,7 @@ export const mockChannel = (callbacks: Record = {}) => { }; return receive; }, + leave: () => {}, }; return c; }; diff --git a/packages/ws-worker/src/server.ts b/packages/ws-worker/src/server.ts index 6616dd4ef..2de810ab3 100644 --- a/packages/ws-worker/src/server.ts +++ 
b/packages/ws-worker/src/server.ts @@ -5,8 +5,8 @@ import koaLogger from 'koa-logger'; import Router from '@koa/router'; import { humanId } from 'human-id'; import { createMockLogger, Logger } from '@openfn/logger'; - -import { INTERNAL_RUN_COMPLETE, ClaimRun } from './events'; +import { ClaimRun } from '@openfn/lexicon/lightning'; +import { INTERNAL_RUN_COMPLETE } from './events'; import destroy from './api/destroy'; import startWorkloop from './api/workloop'; import claim from './api/claim'; @@ -19,7 +19,7 @@ import type { Server } from 'http'; import type { RuntimeEngine } from '@openfn/engine-multi'; import type { Socket, Channel } from './types'; -type ServerOptions = { +export type ServerOptions = { maxWorkflows?: number; port?: number; lightning?: string; // url to lightning instance @@ -27,6 +27,7 @@ type ServerOptions = { noLoop?: boolean; // disable the worker loop secret?: string; // worker secret + runPublicKey?: string; // base64 encoded run public key backoff?: { min?: number; @@ -44,6 +45,7 @@ export interface ServerApp extends Koa { events: EventEmitter; server: Server; engine: RuntimeEngine; + options: ServerOptions; execute: ({ id, token }: ClaimRun) => Promise; destroy: () => void; @@ -65,6 +67,12 @@ function connect(app: ServerApp, logger: Logger, options: ServerOptions = {}) { // A new connection made to the queue const onConnect = ({ socket, channel }: SocketAndChannel) => { + if (app.destroyed) { + // Fix an edge case where a server can be destroyed before it is + // even connected + // If this has happened, we do NOT want to go and start the workloop!
+ return; + } logger.success('Connected to Lightning at', options.lightning); // save the channel and socket @@ -109,6 +117,10 @@ function connect(app: ServerApp, logger: Logger, options: ServerOptions = {}) { // We failed to connect to the queue const onError = (e: any) => { + if (app.destroyed) { + return; + } + logger.error( 'CRITICAL ERROR: could not connect to lightning at', options.lightning @@ -152,16 +164,18 @@ function createServer(engine: RuntimeEngine, options: ServerOptions = {}) { router.get('/', healthcheck); + app.options = options || {}; + // TODO this probably needs to move into ./api/ somewhere app.execute = async ({ id, token }: ClaimRun) => { if (app.socket) { app.workflows[id] = true; - // TODO need to verify the token against LIGHTNING_PUBLIC_KEY const { channel: runChannel, plan, options, + input, } = await joinRunChannel(app.socket, token, id, logger); // Callback to be triggered when the work is done (including errors) @@ -176,6 +190,7 @@ function createServer(engine: RuntimeEngine, options: ServerOptions = {}) { engine, logger, plan, + input, options, onFinish ); @@ -190,7 +205,9 @@ function createServer(engine: RuntimeEngine, options: ServerOptions = {}) { // Debug API to manually trigger a claim router.post('/claim', async (ctx) => { logger.info('triggering claim from POST request'); - return claim(app, logger, options.maxWorkflows) + return claim(app, logger, { + maxWorkers: options.maxWorkflows, + }) .then(() => { logger.info('claim complete: 1 run claimed'); ctx.body = 'complete'; diff --git a/packages/ws-worker/src/start.ts b/packages/ws-worker/src/start.ts index 62b710493..9d6e38d63 100644 --- a/packages/ws-worker/src/start.ts +++ b/packages/ws-worker/src/start.ts @@ -5,7 +5,7 @@ import createLogger, { LogLevel } from '@openfn/logger'; import createRTE from '@openfn/engine-multi'; import createMockRTE from './mock/runtime-engine'; -import createWorker from './server'; +import createWorker, { ServerOptions } from './server'; type 
Args = { _: string[]; @@ -15,6 +15,7 @@ type Args = { secret?: string; loop?: boolean; log: LogLevel; + lightningPublicKey?: string; mock: boolean; backoff: string; capacity?: number; @@ -26,6 +27,7 @@ type Args = { const { WORKER_BACKOFF, WORKER_CAPACITY, + WORKER_LIGHTNING_PUBLIC_KEY, WORKER_LIGHTNING_SERVICE_URL, WORKER_LOG_LEVEL, WORKER_MAX_RUN_DURATION_SECONDS, @@ -64,6 +66,11 @@ const args = yargs(hideBin(process.argv)) 'Worker secret. (comes from WORKER_SECRET by default). Env: WORKER_SECRET', default: WORKER_SECRET, }) + .option('lightning-public-key', { + description: + 'Base64-encoded public key. Used to verify run tokens. Env: WORKER_LIGHTNING_PUBLIC_KEY', + default: WORKER_LIGHTNING_PUBLIC_KEY, + }) .option('log', { description: 'Set the log level for stdout (default to info, set to debug for verbose output). Env: WORKER_LOG_LEVEL', @@ -133,7 +140,7 @@ const [minBackoff, maxBackoff] = args.backoff function engineReady(engine: any) { logger.debug('Creating worker server...'); - const workerOptions = { + const workerOptions: ServerOptions = { port: args.port, lightning: args.lightning, logger, @@ -146,7 +153,23 @@ function engineReady(engine: any) { }, maxWorkflows: args.capacity, }; - const { logger: _l, secret: _s, ...humanOptions } = workerOptions; + + if (args.lightningPublicKey) { + logger.info( + 'Lightning public key found: run tokens from Lightning will be verified by this worker' + ); + workerOptions.runPublicKey = Buffer.from( + args.lightningPublicKey, + 'base64' + ).toString(); + } + + const { + logger: _l, + secret: _s, + runPublicKey, + ...humanOptions + } = workerOptions; logger.debug('Worker options:', humanOptions); createWorker(engine, workerOptions); diff --git a/packages/ws-worker/src/types.d.ts b/packages/ws-worker/src/types.d.ts index 8cc0709dd..a9d09fbeb 100644 --- a/packages/ws-worker/src/types.d.ts +++ b/packages/ws-worker/src/types.d.ts @@ -1,90 +1,15 @@ import { SanitizePolicies } from '@openfn/logger'; +import type { 
ExecutionPlan, Lazy, State } from '@openfn/lexicon'; import type { Channel as PhxChannel } from 'phoenix'; -import type { ExecutionPlan } from '@openfn/runtime'; export { Socket }; -export type Credential = Record; - -export type State = { - data: { - [key: string]: any; - }; - configuration?: { - [key: string]: any; - }; - errors?: { - [jobId: string]: { - type: string; - message: string; - }; - }; - - // technically there should be nothing here - [key: string]: any; -}; - -export type ExitReasonStrings = - | 'success' - | 'fail' - | 'crash' - | 'kill' - | 'cancel' - | 'exception'; - -export type ExitReason = { - reason: ExitReasonStrings; - error_message: string | null; - error_type: string | null; -}; - -export type Node = { - id: string; - body?: string; - adaptor?: string; - credential?: object; - credential_id?: string; - type?: 'webhook' | 'cron'; // trigger only - state?: any; // Initial state / defaults -}; - -export interface Edge { - id: string; - source_job_id?: string; - source_trigger_id?: string; - target_job_id: string; - name?: string; - condition?: string; - error_path?: boolean; - errors?: any; - enabled?: boolean; -} - -// An run object returned by Lightning -export type Run = { - id: string; - dataclip_id: string; - starting_node_id: string; - - triggers: Node[]; - jobs: Node[]; - edges: Edge[]; - - options?: RunOptions; -}; - -export type RunOptions = { - runTimeoutMs?: number; - - sanitize?: SanitizePolicies; -}; - // Internal server state for each run export type RunState = { activeStep?: string; activeJob?: string; plan: ExecutionPlan; - options: RunOptions; + input: Lazy; dataclips: Record; // For each run, map the input ids // TODO better name maybe? 
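The `conditions` table in the new `convert-lightning-plan.ts` (the next file in this patch) compiles Lightning's symbolic edge conditions, such as `on_job_success`, into JavaScript expression strings that are later evaluated against the run `state`. As a rough, self-contained sketch of how those generated strings behave, assuming a hypothetical `evalCondition` stand-in (the runtime's real condition evaluator is more involved):

```typescript
// The same expression template used for on_job_success in this patch.
const onJobSuccess = (upstreamId: string) =>
  `Boolean(!state?.errors?.["${upstreamId}"] ?? true)`;

// Hypothetical evaluator for illustration only; the runtime compiles
// edge conditions through its own sandboxed mechanism.
const evalCondition = (expr: string, state: unknown): boolean =>
  new Function('state', `return ${expr};`)(state);

const cleanState = { data: {} };
const failedState = { errors: { a: { type: 'JobError', message: 'boom' } } };

evalCondition(onJobSuccess('a'), cleanState); // true: no error recorded for step "a"
evalCondition(onJobSuccess('a'), failedState); // false: step "a" errored
```

Note the `?? true` fallback: when `state.errors` is absent entirely, the edge is treated as a success path, which is why a fresh run with no errors follows `on_job_success` edges.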
diff --git a/packages/ws-worker/src/util/convert-lightning-plan.ts b/packages/ws-worker/src/util/convert-lightning-plan.ts new file mode 100644 index 000000000..d6892a3ea --- /dev/null +++ b/packages/ws-worker/src/util/convert-lightning-plan.ts @@ -0,0 +1,172 @@ +import crypto from 'node:crypto'; +import type { + Step, + StepId, + ExecutionPlan, + State, + Job, + Trigger, + StepEdge, + WorkflowOptions, + Lazy, +} from '@openfn/lexicon'; +import { LightningPlan, Edge } from '@openfn/lexicon/lightning'; +import { ExecuteOptions } from '@openfn/engine-multi'; + +export const conditions: Record<string, (upstreamId: string) => string | null> = + { + on_job_success: (upstreamId: string) => + `Boolean(!state?.errors?.["${upstreamId}"] ?? true)`, + on_job_failure: (upstreamId: string) => + `Boolean(state?.errors && state.errors["${upstreamId}"])`, + always: (_upstreamId: string) => null, + }; + +const mapEdgeCondition = (edge: Edge) => { + const { condition } = edge; + if (condition && condition in conditions) { + const upstream = (edge.source_job_id || edge.source_trigger_id) as string; + return conditions[condition](upstream); + } + return condition; +}; + +const mapTriggerEdgeCondition = (edge: Edge) => { + const { condition } = edge; + // This handles cron triggers with undefined conditions and the 'always' string. + if (condition === undefined || condition === 'always') return true; + // Otherwise, we will return the condition and assume it's a valid JS expression.
+ return condition; +}; + +// Options which relate to this execution but are not part of the plan +export type WorkerRunOptions = ExecuteOptions & { + // Defaults to true - must be explicitly false to stop dataclips being sent + outputDataclips?: boolean; +}; + +export default ( + run: LightningPlan +): { plan: ExecutionPlan; options: WorkerRunOptions; input: Lazy } => { + // Some options get mapped straight through to the runtime's workflow options + const runtimeOpts: Omit = {}; + + // But some need to get passed down into the engine's options + const engineOpts: WorkerRunOptions = {}; + + if (run.options) { + if (run.options.runTimeoutMs) { + engineOpts.runTimeoutMs = run.options.runTimeoutMs; + } + if (run.options.sanitize) { + engineOpts.sanitize = run.options.sanitize; + } + if (run.options.hasOwnProperty('output_dataclips')) { + engineOpts.outputDataclips = run.options.output_dataclips; + } + } + + const plan: Partial<ExecutionPlan> = { + id: run.id, + options: runtimeOpts, + }; + + let initialState; + if (run.dataclip_id) { + initialState = run.dataclip_id; + } + + if (run.starting_node_id) { + runtimeOpts.start = run.starting_node_id; + } + + const nodes: Record = {}; + + const edges: Edge[] = run.edges ?? []; + + // We don't really care about triggers, it's mostly just an empty node + if (run.triggers?.length) { + run.triggers.forEach((trigger: Trigger) => { + const id = trigger.id || 'trigger'; + + nodes[id] = { + id, + }; + + // TODO do we need to support multiple edges here? Likely + const connectedEdges = edges.filter((e) => e.source_trigger_id === id); + if (connectedEdges.length) { + nodes[id].next = connectedEdges.reduce( + (obj: Partial, edge) => { + if (edge.enabled !== false) { + // @ts-ignore + obj[edge.target_job_id] = mapTriggerEdgeCondition(edge); + } + return obj; + }, + {} + ); + } else { + // TODO what if the edge isn't found?
+ } + }); + } + + if (run.jobs?.length) { + run.jobs.forEach((step) => { + const id = step.id || crypto.randomUUID(); + const job: Job = { + id, + configuration: step.credential || step.credential_id, + expression: step.body!, + adaptor: step.adaptor, + }; + + if (step.name) { + job.name = step.name; + } + + if (step.state) { + job.state = step.state; + } + + const next = edges + .filter((e) => e.source_job_id === id) + .reduce((obj, edge) => { + const newEdge: StepEdge = {}; + + const condition = mapEdgeCondition(edge); + if (condition) { + newEdge.condition = condition; + } + if (edge.enabled === false) { + newEdge.disabled = true; + } + obj[edge.target_job_id] = Object.keys(newEdge).length + ? newEdge + : true; + return obj; + }, {} as Record); + + if (Object.keys(next).length) { + job.next = next; + } + + nodes[id] = job; + }); + } + + plan.workflow = { + steps: Object.values(nodes), + }; + + if (run.name) { + plan.workflow.name = run.name; + } + + return { + plan: plan as ExecutionPlan, + options: engineOpts, + input: initialState || {}, + }; +}; diff --git a/packages/ws-worker/src/util/convert-run.ts b/packages/ws-worker/src/util/convert-run.ts deleted file mode 100644 index 35b200de2..000000000 --- a/packages/ws-worker/src/util/convert-run.ts +++ /dev/null @@ -1,134 +0,0 @@ -import crypto from 'node:crypto'; -import type { - JobNode, - JobNodeID, - JobEdge, - ExecutionPlan, -} from '@openfn/runtime'; -import { Run, RunOptions, Edge } from '../types'; - -export const conditions: Record string | null> = - { - on_job_success: (upstreamId: string) => - `Boolean(!state?.errors?.["${upstreamId}"] ?? 
true)`, - on_job_failure: (upstreamId: string) => - `Boolean(state?.errors && state.errors["${upstreamId}"])`, - always: (_upstreamId: string) => null, - }; - -const mapEdgeCondition = (edge: Edge) => { - const { condition } = edge; - if (condition && condition in conditions) { - const upstream = (edge.source_job_id || edge.source_trigger_id) as string; - return conditions[condition](upstream); - } - return condition; -}; - -const mapTriggerEdgeCondition = (edge: Edge) => { - const { condition } = edge; - // This handles cron triggers with undefined conditions and the 'always' string. - if (condition === undefined || condition === 'always') return true; - // Otherwise, we will return the condition and assume it's a valid JS expression. - return condition; -}; - -const mapOptions = (options: RunOptions): RunOptions => { - return options; -}; - -export default ( - run: Run -): { plan: ExecutionPlan; options: RunOptions } => { - const options = run.options || {}; - const plan: Partial = { - id: run.id, - }; - - if (run.dataclip_id) { - // This is tricky - we're assining a string to the XPlan - // which is fine becuase it'll be handled later - // I guess we need a new type for now? Like a lazy XPlan - // @ts-ignore - plan.initialState = run.dataclip_id; - } - if (run.starting_node_id) { - plan.start = run.starting_node_id; - } - - const nodes: Record = {}; - - const edges = run.edges ?? []; - - // We don't really care about triggers, it's mostly just a empty node - if (run.triggers?.length) { - run.triggers.forEach((trigger) => { - const id = trigger.id || 'trigger'; - - nodes[id] = { - id, - }; - - // TODO do we need to support multiple edges here? 
Likely - const connectedEdges = edges.filter((e) => e.source_trigger_id === id); - if (connectedEdges.length) { - nodes[id].next = connectedEdges.reduce((obj, edge) => { - if (edge.enabled !== false) { - // @ts-ignore - obj[edge.target_job_id] = mapTriggerEdgeCondition(edge); - } - return obj; - }, {}); - } else { - // TODO what if the edge isn't found? - } - }); - } - - if (run.jobs?.length) { - run.jobs.forEach((job) => { - const id = job.id || crypto.randomUUID(); - - nodes[id] = { - id, - configuration: job.credential || job.credential_id, - expression: job.body, - adaptor: job.adaptor, - }; - - if (job.state) { - // TODO this is likely to change - nodes[id].state = job.state; - } - - const next = edges - .filter((e) => e.source_job_id === id) - .reduce((obj, edge) => { - const newEdge: JobEdge = {}; - - const condition = mapEdgeCondition(edge); - if (condition) { - newEdge.condition = condition; - } - if (edge.enabled === false) { - newEdge.disabled = true; - } - obj[edge.target_job_id] = Object.keys(newEdge).length - ? 
newEdge - : true; - return obj; - }, {} as Record); - - if (Object.keys(next).length) { - nodes[id].next = next; - } - }); - } - - plan.jobs = Object.values(nodes); - - return { - plan: plan as ExecutionPlan, - options: mapOptions(options), - }; -}; diff --git a/packages/ws-worker/src/util/create-run-state.ts b/packages/ws-worker/src/util/create-run-state.ts index b9134e73a..7227da833 100644 --- a/packages/ws-worker/src/util/create-run-state.ts +++ b/packages/ws-worker/src/util/create-run-state.ts @@ -1,20 +1,17 @@ -import type { ExecutionPlan } from '@openfn/runtime'; -import type { RunOptions, RunState } from '../types'; +import type { ExecutionPlan, Job, Lazy, State } from '@openfn/lexicon'; +import type { RunState } from '../types'; -export default ( - plan: ExecutionPlan, - options: RunOptions = {} -): RunState => { +export default (plan: ExecutionPlan, input?: Lazy): RunState => { const state = { - plan, lastDataclipId: '', dataclips: {}, inputDataclips: {}, reasons: {}, - options, + plan, + input, } as RunState; - if (typeof plan.initialState === 'string') { + if (typeof input === 'string') { // We need to initialise inputDataclips so that the first run // has its inputDataclip set properly // Difficulty: the starting node is a trigger and NOT a run @@ -22,9 +19,10 @@ export default ( // and set the input state on THAT // find the first job - let startNode = plan.jobs[0]; - if (plan.start) { - startNode = plan.jobs.find(({ id }) => id === plan.start)!; + const jobs = plan.workflow.steps as Job[]; + let startNode = jobs[0]; + if (plan.options.start) { + startNode = jobs.find(({ id }) => id === plan.options.start)!; } // TODO throw with validation error of some kind if this node could not be found @@ -40,7 +38,7 @@ export default ( // For any runs downstream of the initial state, // Set up the input dataclip initialRuns.forEach((id) => { - state.inputDataclips[id] = plan.initialState as string; + state.inputDataclips[id] = input; }); } else { // what if 
initial state is an object? diff --git a/packages/ws-worker/src/util/get-with-reply.ts b/packages/ws-worker/src/util/get-with-reply.ts index ac6b0c778..5da1ad7e5 100644 --- a/packages/ws-worker/src/util/get-with-reply.ts +++ b/packages/ws-worker/src/util/get-with-reply.ts @@ -1,9 +1,16 @@ import { Channel } from '../types'; export default (channel: Channel, event: string, payload?: any) => - new Promise((resolve) => { - channel.push(event, payload).receive('ok', (evt: any) => { - resolve(evt); - }); - // TODO handle errors and timeouts too + new Promise((resolve, reject) => { + channel + .push(event, payload) + .receive('ok', (evt: any) => { + resolve(evt); + }) + .receive('error', (e: any) => { + reject(e); + }) + .receive('timeout', (e: any) => { + reject(e); + }); }); diff --git a/packages/ws-worker/src/util/index.ts b/packages/ws-worker/src/util/index.ts index 6c9b2b0e3..776d274e5 100644 --- a/packages/ws-worker/src/util/index.ts +++ b/packages/ws-worker/src/util/index.ts @@ -1,4 +1,4 @@ -import convertRun from './convert-run'; +import convertRun from './convert-lightning-plan'; import tryWithBackoff from './try-with-backoff'; import getWithReply from './get-with-reply'; import stringify from './stringify'; diff --git a/packages/ws-worker/src/util/log-final-reason.ts b/packages/ws-worker/src/util/log-final-reason.ts index aaa37c492..4a2f83981 100644 --- a/packages/ws-worker/src/util/log-final-reason.ts +++ b/packages/ws-worker/src/util/log-final-reason.ts @@ -1,6 +1,6 @@ import { timestamp } from '@openfn/logger'; +import { ExitReason } from '@openfn/lexicon/lightning'; import { Context, onJobLog } from '../api/execute'; -import { ExitReason } from '../types'; export default async (context: Context, reason: ExitReason) => { const time = (timestamp() - BigInt(10e6)).toString(); diff --git a/packages/ws-worker/src/util/versions.ts b/packages/ws-worker/src/util/versions.ts index 35e7d1dbf..8c3ba4383 100644 --- a/packages/ws-worker/src/util/versions.ts +++ 
b/packages/ws-worker/src/util/versions.ts @@ -5,29 +5,23 @@ const { triangleRightSmall: t } = mainSymbols; export type Versions = { node: string; worker: string; - engine: string; [adaptor: string]: string; }; export default (stepId: string, versions: Versions, adaptor?: string) => { - let longest = 'compiler'.length; // Bit wierdly defensive but ensure padding is reasonable even if version has no props + let longest = 'worker'.length; // Bit weirdly defensive, but ensures padding is reasonable even if versions has no props for (const v in versions) { longest = Math.max(v.length, longest); } - const { node, compiler, engine, worker, runtime, ...adaptors } = versions; + const { node, worker, ...adaptors } = versions; // Prefix and pad version numbers const prefix = (str: string) => ` ${t} ${str.padEnd(longest + 4, ' ')}`; let str = `Versions for step ${stepId}: ${prefix('node.js')}${versions.node || 'unknown'} -${prefix('worker')}${versions.worker || 'unknown'} -${prefix('engine')}${versions.engine || 'unknown'}`; - - // Unfortunately the runtime and compiler versions get reported as workspace:* in prod right now - // ${prefix('runtime')}${versions.runtime || 'unknown'} - // ${prefix('compiler')}${versions.compiler || 'unknown'}`; +${prefix('worker')}${versions.worker || 'unknown'}`; if (Object.keys(adaptors).length) { let allAdaptors = Object.keys(adaptors); diff --git a/packages/ws-worker/src/util/worker-token.ts b/packages/ws-worker/src/util/worker-token.ts index 9438fb6a3..fe11b04af 100644 --- a/packages/ws-worker/src/util/worker-token.ts +++ b/packages/ws-worker/src/util/worker-token.ts @@ -26,8 +26,7 @@ const generateWorkerToken = async ( const jwt = await new jose.SignJWT(claims) .setProtectedHeader({ alg }) .setIssuedAt() - .setIssuer('urn:example:issuer') - .setAudience('urn:example:audience') + .setIssuer('urn:openfn:worker') .sign(encodedSecret); // .setExpirationTime('2h') // ??
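The get-with-reply change earlier in this patch wraps a Phoenix-style channel push in a Promise that resolves on an `ok` reply and rejects on `error` or `timeout`. A minimal standalone sketch of that pattern (the `ChannelLike` interface and `mockChannel` helper here are hypothetical stand-ins for the worker's socket channel, not part of the patch):

```typescript
// Minimal stand-in for a Phoenix-style push: reply handlers are
// registered per status and invoked when the reply arrives.
type Receiver = { receive(status: string, cb: (evt: any) => void): Receiver };

interface ChannelLike {
  push(event: string, payload?: any): Receiver;
}

// Wrap push() in a Promise that resolves on 'ok' and rejects on
// 'error' or 'timeout' — the same shape as the patched get-with-reply.
const getWithReply = (channel: ChannelLike, event: string, payload?: any) =>
  new Promise((resolve, reject) => {
    channel
      .push(event, payload)
      .receive('ok', resolve)
      .receive('error', reject)
      .receive('timeout', reject);
  });

// Tiny mock channel: replies 'ok' for known events, 'error' otherwise,
// so the wrapper can be exercised without a real socket.
const mockChannel = (
  handlers: Record<string, (p?: any) => any>
): ChannelLike => ({
  push(event, payload) {
    const callbacks: Record<string, (evt: any) => void> = {};
    const receiver: Receiver = {
      receive(status, cb) {
        callbacks[status] = cb;
        return receiver;
      },
    };
    // Deliver the reply asynchronously, after handlers are registered
    setTimeout(() => {
      if (handlers[event]) callbacks['ok']?.(handlers[event](payload));
      else callbacks['error']?.(new Error(`no handler for ${event}`));
    }, 0);
    return receiver;
  },
});

const channel = mockChannel({ claim: () => ({ runs: [] }) });
getWithReply(channel, 'claim').then((reply) => console.log(reply));
```

The design point of the patch is simply that `error` and `timeout` replies now reject instead of leaving the Promise pending forever.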
diff --git a/packages/ws-worker/test/api/destroy.test.ts b/packages/ws-worker/test/api/destroy.test.ts index 8fa2c26ea..2b47105c1 100644 --- a/packages/ws-worker/test/api/destroy.test.ts +++ b/packages/ws-worker/test/api/destroy.test.ts @@ -1,23 +1,23 @@ import test from 'ava'; import crypto from 'node:crypto'; - import createLightningServer from '@openfn/lightning-mock'; +import { createMockLogger } from '@openfn/logger'; +import { LightningPlan } from '@openfn/lexicon/lightning'; + import createWorker from '../../src/server'; import createMockRTE from '../../src/mock/runtime-engine'; - import destroy from '../../src/api/destroy'; -import { createMockLogger } from '@openfn/logger'; -import { Run } from '../../src/types'; const workerPort = 9876; const lightningPort = workerPort + 1; const logger = createMockLogger(); const lightning = createLightningServer({ port: lightningPort }); -let worker; + +let worker: any; test.beforeEach(async () => { - const engine = await createMockRTE(); + const engine: any = await createMockRTE(); worker = createWorker(engine, { logger, @@ -40,7 +40,7 @@ const createRun = () => body: `wait(${500 + Math.random() * 1000})`, }, ], - } as Run); + } as LightningPlan); const waitForClaim = (timeout: number = 1000) => new Promise((resolve) => { @@ -120,46 +120,43 @@ test.serial('destroy a worker while one run is active', async (t) => { }); }); -test.serial( - 'destroy a worker while multiple runs are active', - async (t) => { - return new Promise((done) => { - let completeCount = 0; - let startCount = 0; - - const doDestroy = async () => { - await destroy(worker, logger); +test.serial('destroy a worker while multiple runs are active', async (t) => { + return new Promise((done) => { + let completeCount = 0; + let startCount = 0; - // Ensure all three runs completed - t.is(completeCount, 3); + const doDestroy = async () => { + await destroy(worker, logger); - // should not respond to get - t.false(await ping()); - // should not be claiming - 
t.false(await waitForClaim()); + // Ensure all three runs completed + t.is(completeCount, 3); - done(); - }; + // should not respond to get + t.false(await ping()); + // should not be claiming + t.false(await waitForClaim()); - lightning.on('run:start', () => { - startCount++; + done(); + }; - // Let all three workflows start before we kill the server - if (startCount === 3) { - doDestroy(); - } - }); + lightning.on('run:start', () => { + startCount++; - lightning.on('run:complete', () => { - completeCount++; - }); + // Let all three workflows start before we kill the server + if (startCount === 3) { + doDestroy(); + } + }); - lightning.enqueueRun(createRun()); - lightning.enqueueRun(createRun()); - lightning.enqueueRun(createRun()); + lightning.on('run:complete', () => { + completeCount++; }); - } -); + + lightning.enqueueRun(createRun()); + lightning.enqueueRun(createRun()); + lightning.enqueueRun(createRun()); + }); +}); test("don't claim after destroy", (t) => { return new Promise((done) => { diff --git a/packages/ws-worker/test/api/execute.test.ts b/packages/ws-worker/test/api/execute.test.ts index 3d23375ba..4c74bfb65 100644 --- a/packages/ws-worker/test/api/execute.test.ts +++ b/packages/ws-worker/test/api/execute.test.ts @@ -1,5 +1,6 @@ import test from 'ava'; import { createMockLogger } from '@openfn/logger'; +import type { ExecutionPlan } from '@openfn/lexicon'; import { STEP_START, @@ -23,8 +24,7 @@ import createMockRTE from '../../src/mock/runtime-engine'; import { mockChannel } from '../../src/mock/sockets'; import { stringify, createRunState } from '../../src/util'; -import type { ExecutionPlan } from '@openfn/runtime'; -import type { Run, RunState, JSONLog } from '../../src/types'; +import type { RunState, JSONLog } from '../../src/types'; const enc = new TextEncoder(); @@ -54,7 +54,7 @@ test('send event should resolve when the event is acknowledged', async (t) => { test('send event should throw if an event errors', async (t) => { const channel = 
mockChannel({ - throw: (x) => { + throw: () => { throw new Error('err'); }, }); @@ -98,7 +98,7 @@ test('jobLog should should send a log event outside a run', async (t) => { }, }); - await onJobLog({ channel, state }, log); + await onJobLog({ channel, state } as any, log); }); test('jobLog should should send a log event inside a run', async (t) => { @@ -131,13 +131,13 @@ test('jobLog should should send a log event inside a run', async (t) => { }, }); - await onJobLog({ channel, state }, log); + await onJobLog({ channel, state } as any, log); }); test('jobError should trigger step:complete with a reason', async (t) => { - let stepCompleteEvent; + let stepCompleteEvent: any; - const state = createRunState({ id: 'run-23' } as Run); + const state = createRunState({ id: 'run-23' } as ExecutionPlan); state.activeJob = 'job-1'; state.activeStep = 'b'; @@ -153,7 +153,7 @@ test('jobError should trigger step:complete with a reason', async (t) => { error: { message: 'nope', severity: 'kill', type: 'TEST' }, state: exitState, }; - await onJobError({ channel, state }, event); + await onJobError({ channel, state } as any, event); t.is(stepCompleteEvent.reason, 'kill'); t.is(stepCompleteEvent.error_message, 'nope'); @@ -162,9 +162,9 @@ test('jobError should trigger step:complete with a reason', async (t) => { }); test('jobError should trigger step:complete with a reason and default state', async (t) => { - let stepCompleteEvent; + let stepCompleteEvent: any; - const state = createRunState({ id: 'run-23' } as Run); + const state = createRunState({ id: 'run-23' } as ExecutionPlan); const channel = mockChannel({ [STEP_COMPLETE]: (evt) => { @@ -176,7 +176,7 @@ test('jobError should trigger step:complete with a reason and default state', as const event = { error: { message: 'nope', severity: 'kill', type: 'TEST' }, }; - await onJobError({ channel, state }, event); + await onJobError({ channel, state } as any, event); t.deepEqual(stepCompleteEvent.output_dataclip, '{}'); }); @@ -188,6 
+188,7 @@ test('workflowStart should send an empty run:start event', async (t) => { }, }); + // @ts-ignore await onWorkflowStart({ channel }); }); @@ -275,17 +276,20 @@ test('execute should pass the final result to onFinish', async (t) => { const plan = { id: 'a', - jobs: [ - { - expression: 'fn(() => ({ done: true }))', - }, - ], - }; + workflow: { + steps: [ + { + expression: 'fn(() => ({ done: true }))', + }, + ], + }, + } as ExecutionPlan; const options = {}; + const input = {}; return new Promise((done) => { - execute(channel, engine, logger, plan, options, (result) => { + execute(channel, engine, logger, plan, input, options, (result) => { t.deepEqual(result.state, { done: true }); done(); }); @@ -299,14 +303,17 @@ test('execute should return a context object', async (t) => { const plan = { id: 'a', - jobs: [ - { - expression: 'fn(() => ({ done: true }))', - }, - ], - }; + workflow: { + steps: [ + { + expression: 'fn(() => ({ done: true }))', + }, + ], + }, + } as ExecutionPlan; const options = {}; + const input = {}; return new Promise((done) => { const context = execute( @@ -314,13 +321,13 @@ test('execute should return a context object', async (t) => { engine, logger, plan, + input, options, - (result) => { + () => { done(); } ); t.truthy(context.state); - t.deepEqual(context.state.options, options); t.deepEqual(context.channel, channel); t.deepEqual(context.logger, logger); }); @@ -343,18 +350,21 @@ test('execute should lazy-load a credential', async (t) => { const plan = { id: 'a', - jobs: [ - { - configuration: 'abc', - expression: 'fn(() => ({ done: true }))', - }, - ], - }; + workflow: { + steps: [ + { + configuration: 'abc', + expression: 'fn(() => ({ done: true }))', + }, + ], + }, + } as ExecutionPlan; const options = {}; + const input = {}; return new Promise((done) => { - execute(channel, engine, logger, plan, options, (result) => { + execute(channel, engine, logger, plan, input, options, () => { t.true(didCallCredentials); done(); }); @@ -363,34 
+373,37 @@ test('execute should lazy-load a credential', async (t) => { test('execute should lazy-load initial state', async (t) => { const logger = createMockLogger(); - let didCallState = false; + let didLoadState = false; const channel = mockChannel({ ...mockEventHandlers, [GET_DATACLIP]: (id) => { + console.log('> GET DATACLIP'); t.truthy(id); - didCallState = true; + didLoadState = true; return toArrayBuffer({}); }, }); const engine = await createMockRTE(); - const plan: Partial = { + const plan = { id: 'a', - // @ts-ignore - initialState: 'abc', - jobs: [ - { - expression: 'fn(() => ({ done: true }))', - }, - ], - }; + workflow: { + steps: [ + { + expression: 'fn(() => ({ done: true }))', + }, + ], + }, + options: {}, + } as ExecutionPlan; const options = {}; + const input = 'abc'; return new Promise((done) => { - execute(channel, engine, logger, plan, options, (result) => { - t.true(didCallState); + execute(channel, engine, logger, plan, input, options, () => { + t.true(didLoadState); done(); }); }); @@ -400,10 +413,10 @@ test('execute should call all events on the socket', async (t) => { const logger = createMockLogger(); const engine = await createMockRTE(); - const events = {}; + const events: Record = {}; - const toEventMap = (obj, evt: string) => { - obj[evt] = (e) => { + const toEventMap = (obj: any, evt: string) => { + obj[evt] = (e: any) => { events[evt] = e || true; }; return obj; @@ -424,20 +437,23 @@ test('execute should call all events on the socket', async (t) => { const plan = { id: 'run-1', - jobs: [ - { - id: 'trigger', - configuration: 'a', - adaptor: '@openfn/language-common@1.0.0', - expression: 'fn(() => console.log("x"))', - }, - ], - }; + workflow: { + steps: [ + { + id: 'trigger', + configuration: 'a', + adaptor: '@openfn/language-common@1.0.0', + expression: 'fn(() => console.log("x"))', + }, + ], + }, + } as ExecutionPlan; const options = {}; + const input = {}; return new Promise((done) => { - execute(channel, engine, logger, plan, 
options, (result) => { + execute(channel, engine, logger, plan, input, options, () => { // Check that events were passed to the socket // This is deliberately crude t.assert(allEvents.every((e) => events[e])); diff --git a/packages/ws-worker/test/api/reasons.test.ts b/packages/ws-worker/test/api/reasons.test.ts index d2d81ef2c..1e753c88c 100644 --- a/packages/ws-worker/test/api/reasons.test.ts +++ b/packages/ws-worker/test/api/reasons.test.ts @@ -4,7 +4,7 @@ import { calculateJobExitReason } from '../../src/api/reasons'; test('success', (t) => { const jobId = 'a'; - const state = {}; + const state: any = {}; const error = undefined; const r = calculateJobExitReason(jobId, state, error); @@ -15,7 +15,7 @@ test('success', (t) => { test('still success if a prior job has errors', (t) => { const jobId = 'a'; - const state = { + const state: any = { errors: { b: { type: 'RuntimeError', @@ -34,7 +34,7 @@ test('still success if a prior job has errors', (t) => { test('fail', (t) => { const jobId = 'a'; - const state = { + const state: any = { errors: { a: { type: 'RuntimeError', @@ -52,7 +52,7 @@ test('fail', (t) => { test('crash', (t) => { const jobId = 'a'; - const state = {}; + const state: any = {}; const error = new RuntimeCrash(new ReferenceError('x is not defined')); const r = calculateJobExitReason(jobId, state, error); @@ -63,7 +63,7 @@ test('crash', (t) => { test('crash has priority over fail', (t) => { const jobId = 'a'; - const state = { + const state: any = { errors: { b: { type: 'RuntimeError', @@ -83,7 +83,7 @@ test('crash has priority over fail', (t) => { // But it should not stop us calculating a reason test('success if no state is passed', (t) => { const jobId = 'a'; - const state = undefined; + const state: any = undefined; const error = undefined; const r = calculateJobExitReason(jobId, state, error); @@ -94,7 +94,7 @@ test('success if no state is passed', (t) => { test('success if boolean state is passed', (t) => { const jobId = 'a'; - const state = 
true; + const state: any = true; const error = undefined; const r = calculateJobExitReason(jobId, state, error); diff --git a/packages/ws-worker/test/api/workloop.test.ts b/packages/ws-worker/test/api/workloop.test.ts index a1772ee01..a1ce6df1c 100644 --- a/packages/ws-worker/test/api/workloop.test.ts +++ b/packages/ws-worker/test/api/workloop.test.ts @@ -1,13 +1,12 @@ import test from 'ava'; +import { createMockLogger } from '@openfn/logger'; import { sleep } from '../util'; - import { mockChannel } from '../../src/mock/sockets'; import startWorkloop from '../../src/api/workloop'; import { CLAIM } from '../../src/events'; -import { createMockLogger } from '@openfn/logger'; -let cancel; +let cancel: any; const logger = createMockLogger(); @@ -17,7 +16,6 @@ test.afterEach(() => { test('workloop can be cancelled', async (t) => { let count = 0; - let cancel; const app = { queueChannel: mockChannel({ [CLAIM]: () => { @@ -29,7 +27,7 @@ test('workloop can be cancelled', async (t) => { execute: () => {}, }; - cancel = startWorkloop(app, logger, 1, 1); + cancel = startWorkloop(app as any, logger, 1, 1); await sleep(100); // A quirk of how cancel works is that the loop will be called a few times @@ -38,8 +36,6 @@ test('workloop can be cancelled', async (t) => { test('workloop sends the runs:claim event', (t) => { return new Promise((done) => { - let cancel; - const app = { workflows: {}, queueChannel: mockChannel({ @@ -51,13 +47,12 @@ test('workloop sends the runs:claim event', (t) => { }), execute: () => {}, }; - cancel = startWorkloop(app, logger, 1, 1); + cancel = startWorkloop(app as any, logger, 1, 1); }); }); test('workloop sends the runs:claim event several times ', (t) => { return new Promise((done) => { - let cancel; let count = 0; const app = { workflows: {}, @@ -73,14 +68,12 @@ test('workloop sends the runs:claim event several times ', (t) => { }), execute: () => {}, }; - cancel = startWorkloop(app, logger, 1, 1); + cancel = startWorkloop(app as any, logger, 1, 
1); }); }); test('workloop calls execute if runs:claim returns runs', (t) => { return new Promise((done) => { - let cancel; - const app = { workflows: {}, queueChannel: mockChannel({ @@ -88,13 +81,13 @@ test('workloop calls execute if runs:claim returns runs', (t) => { runs: [{ id: 'a', token: 'x.y.z' }], }), }), - execute: (run) => { + execute: (run: any) => { t.deepEqual(run, { id: 'a', token: 'x.y.z' }); t.pass(); done(); }, }; - cancel = startWorkloop(app, logger, 1, 1); + cancel = startWorkloop(app as any, logger, 1, 1); }); }); diff --git a/packages/ws-worker/test/channels/run.test.ts b/packages/ws-worker/test/channels/run.test.ts index e5e580bb7..f66dc7a3f 100644 --- a/packages/ws-worker/test/channels/run.test.ts +++ b/packages/ws-worker/test/channels/run.test.ts @@ -36,14 +36,16 @@ test('loadRun should return an execution plan and options', async (t) => { const { plan, options } = await loadRun(channel); t.like(plan, { id: 'run-1', - jobs: [ - { - id: 'job-1', - configuration: 'a', - expression: 'fn(a => a)', - adaptor: '@openfn/language-common@1.0.0', - }, - ], + workflow: { + steps: [ + { + id: 'job-1', + configuration: 'a', + expression: 'fn(a => a)', + adaptor: '@openfn/language-common@1.0.0', + }, + ], + }, }); t.is(options.sanitize, 'obfuscate'); t.is(options.runTimeoutMs, 10); @@ -70,7 +72,7 @@ test('should join an run channel with a token', async (t) => { ); t.truthy(channel); - t.deepEqual(plan, { id: 'a', jobs: [] }); + t.deepEqual(plan, { id: 'a', workflow: { steps: [] }, options: {} }); t.deepEqual(options, { runTimeoutMs: 10 }); }); diff --git a/packages/ws-worker/test/channels/worker-queue.test.ts b/packages/ws-worker/test/channels/worker-queue.test.ts index 53997e5d5..eef728cba 100644 --- a/packages/ws-worker/test/channels/worker-queue.test.ts +++ b/packages/ws-worker/test/channels/worker-queue.test.ts @@ -1,14 +1,17 @@ import test from 'ava'; import * as jose from 'jose'; +import { createMockLogger } from '@openfn/logger'; +import { 
API_VERSION } from '@openfn/lexicon/lightning'; +import pkg from '../../package.json' assert { type: 'json' }; + import connectToWorkerQueue from '../../src/channels/worker-queue'; import { mockSocket } from '../../src/mock/sockets'; -import { createMockLogger } from '@openfn/logger'; const logger = createMockLogger(); test('should connect', async (t) => { return new Promise((done) => { - connectToWorkerQueue('www', 'a', 'secret', logger, mockSocket).on( + connectToWorkerQueue('www', 'a', 'secret', logger, mockSocket as any).on( 'connect', ({ socket, channel }) => { t.truthy(socket); @@ -28,7 +31,7 @@ test('should connect with an auth token', async (t) => { const secret = 'xyz'; const encodedSecret = new TextEncoder().encode(secret); - function createSocket(endpoint, options) { + function createSocket(endpoint: string, options: any) { const socket = mockSocket(endpoint, {}, async () => { const { token } = options.params; @@ -38,16 +41,42 @@ test('should connect with an auth token', async (t) => { return socket; } - connectToWorkerQueue('www', workerId, secret, logger, createSocket).on( + connectToWorkerQueue( + 'www', + workerId, + secret, + logger, + createSocket as any + ).on('connect', ({ socket, channel }) => { + t.truthy(socket); + t.truthy(socket.connect); + t.truthy(channel); + t.truthy(channel.join); + t.pass('connected'); + done(); + }); + }); +}); + +test('should connect with api and worker versions', async (t) => { + return new Promise((done) => { + function createSocket(endpoint: string, options: any) { + const socket = mockSocket(endpoint, {}, async () => { + const { worker_version, api_version } = options.params; + + t.is(worker_version, pkg.version); + t.truthy(worker_version); + + t.is(api_version, API_VERSION); + t.truthy(api_version); + }); + + return socket; + } + + connectToWorkerQueue('www', 'a', 'secret', logger, createSocket as any).on( 'connect', - ({ socket, channel }) => { - t.truthy(socket); - t.truthy(socket.connect); - 
t.truthy(channel); - t.truthy(channel.join); - t.pass('connected'); - done(); - } + done ); }); }); @@ -58,7 +87,7 @@ test('should fail to connect with an invalid auth token', async (t) => { const secret = 'xyz'; const encodedSecret = new TextEncoder().encode(secret); - function createSocket(endpoint, options) { + function createSocket(endpoint: string, options: any) { const socket = mockSocket(endpoint, {}, async () => { const { token } = options.params; @@ -77,7 +106,7 @@ test('should fail to connect with an invalid auth token', async (t) => { workerId, 'wrong-secret!', logger, - createSocket + createSocket as any ).on('error', (e) => { t.is(e, 'auth_fail'); t.pass('error thrown'); diff --git a/packages/ws-worker/test/events/run-complete.test.ts b/packages/ws-worker/test/events/run-complete.test.ts index 8ca730082..9220ee28a 100644 --- a/packages/ws-worker/test/events/run-complete.test.ts +++ b/packages/ws-worker/test/events/run-complete.test.ts @@ -4,10 +4,11 @@ import handleRunComplete from '../../src/events/run-complete'; import { mockChannel } from '../../src/mock/sockets'; import { RUN_COMPLETE, RUN_LOG } from '../../src/events'; import { createRunState } from '../../src/util'; +import { createPlan } from '../util'; test('should send an run:complete event', async (t) => { const result = { answer: 42 }; - const plan = { id: 'run-1', jobs: [] }; + const plan = createPlan(); const state = createRunState(plan); state.dataclips = { @@ -22,15 +23,15 @@ test('should send an run:complete event', async (t) => { }, }); - const event = {}; + const event: any = {}; - const context = { channel, state, onFinish: () => {} }; + const context: any = { channel, state, onFinish: () => {} }; await handleRunComplete(context, event); }); test('should call onFinish with final dataclip', async (t) => { const result = { answer: 42 }; - const plan = { id: 'run-1', jobs: [] }; + const plan = createPlan(); const state = createRunState(plan); state.dataclips = { @@ -43,22 +44,22 @@ 
test('should call onFinish with final dataclip', async (t) => { [RUN_COMPLETE]: () => true, }); - const context = { + const context: any = { channel, state, - onFinish: ({ state: finalState }) => { + onFinish: ({ state: finalState }: any) => { t.deepEqual(result, finalState); }, }; - const event = { state: result }; + const event: any = { state: result }; await handleRunComplete(context, event); }); test('should send a reason log and return reason for success', async (t) => { const result = { answer: 42 }; - const plan = { id: 'run-1', jobs: [] }; + const plan = createPlan(); const state = createRunState(plan); state.dataclips = { @@ -66,8 +67,8 @@ test('should send a reason log and return reason for success', async (t) => { }; state.lastDataclipId = 'x'; - let logEvent; - let completeEvent; + let logEvent: any; + let completeEvent: any; const channel = mockChannel({ [RUN_LOG]: (e) => { @@ -78,15 +79,15 @@ test('should send a reason log and return reason for success', async (t) => { }, }); - const context = { + const context: any = { channel, state, - onFinish: ({ state: finalState }) => { + onFinish: ({ state: finalState }: any) => { t.deepEqual(result, finalState); }, }; - const event = { state: result }; + const event: any = { state: result }; await handleRunComplete(context, event); @@ -98,7 +99,7 @@ test('should send a reason log and return reason for success', async (t) => { test('should send a reason log and return reason for fail', async (t) => { const result = { answer: 42 }; - const plan = { id: 'run-1', jobs: [{ id: 'x' }] }; + const plan = createPlan({ id: 'x', expression: '.' 
}); const state = createRunState(plan); state.dataclips = { @@ -113,8 +114,8 @@ test('should send a reason log and return reason for fail', async (t) => { }, }; - let logEvent; - let completeEvent; + let logEvent: any; + let completeEvent: any; const channel = mockChannel({ [RUN_LOG]: (e) => { @@ -125,15 +126,15 @@ test('should send a reason log and return reason for fail', async (t) => { }, }); - const context = { + const context: any = { channel, state, - onFinish: ({ state: finalState }) => { + onFinish: ({ state: finalState }: any) => { t.deepEqual(result, finalState); }, }; - const event = { state: result }; + const event: any = { state: result }; await handleRunComplete(context, event); diff --git a/packages/ws-worker/test/events/run-error.test.ts b/packages/ws-worker/test/events/run-error.test.ts index 2583d2257..34404327c 100644 --- a/packages/ws-worker/test/events/run-error.test.ts +++ b/packages/ws-worker/test/events/run-error.test.ts @@ -5,7 +5,7 @@ import { mockChannel } from '../../src/mock/sockets'; import { RUN_COMPLETE, RUN_LOG, STEP_COMPLETE } from '../../src/events'; import { createRunState } from '../../src/util'; -const plan = { id: 'run-1', jobs: [] }; +const plan = { id: 'run-1', workflow: { steps: [] }, options: {} }; test('runError should trigger runComplete with a reason', async (t) => { const jobId = 'job-1'; @@ -25,7 +25,7 @@ test('runError should trigger runComplete with a reason', async (t) => { [RUN_COMPLETE]: () => true, }); - const event = { + const event: any = { severity: 'crash', type: 'Err', message: 'it crashed', @@ -33,7 +33,7 @@ test('runError should trigger runComplete with a reason', async (t) => { const context = { channel, state, onFinish: () => {} }; - await onRunError(context, event); + await onRunError(context as any, event); }); test('workflow error should send reason to onFinish', async (t) => { @@ -46,11 +46,11 @@ test('workflow error should send reason to onFinish', async (t) => { const channel = mockChannel({ 
[RUN_LOG]: () => true, - [STEP_COMPLETE]: (evt) => true, + [STEP_COMPLETE]: () => true, [RUN_COMPLETE]: () => true, }); - const event = { + const event: any = { error: { severity: 'crash', type: 'Err', @@ -62,12 +62,12 @@ test('workflow error should send reason to onFinish', async (t) => { const context = { channel, state, - onFinish: (evt) => { + onFinish: (evt: any) => { t.is(evt.reason.reason, 'crash'); }, }; - await onRunError(context, event); + await onRunError(context as any, event); }); test('runError should not call job complete if the job is not active', async (t) => { @@ -76,14 +76,14 @@ test('runError should not call job complete if the job is not active', async (t) const channel = mockChannel({ [RUN_LOG]: () => true, - [STEP_COMPLETE]: (evt) => { + [STEP_COMPLETE]: () => { t.fail('should not call!'); return true; }, [RUN_COMPLETE]: () => true, }); - const event = { + const event: any = { error: { severity: 'crash', type: 'Err', @@ -100,7 +100,7 @@ test('runError should not call job complete if the job is not active', async (t) }, }; - await onRunError(context, event); + await onRunError(context as any, event); }); test('runError should log the reason', async (t) => { @@ -108,31 +108,34 @@ test('runError should log the reason', async (t) => { const state = createRunState({ id: 'run-1', - jobs: [{ id: 'job-1' }], + workflow: { + steps: [{ id: 'job-1' }], + }, + options: {}, }); state.lastDataclipId = 'x'; state.activeStep = 'b'; state.activeJob = jobId; - const event = { + const event: any = { severity: 'crash', type: 'Err', message: 'it crashed', }; state.reasons['x'] = event; - let logEvent; + let logEvent: any; const channel = mockChannel({ [RUN_LOG]: (e) => { logEvent = e; }, - [STEP_COMPLETE]: (evt) => true, + [STEP_COMPLETE]: () => true, [RUN_COMPLETE]: () => true, }); const context = { channel, state, onFinish: () => {} }; - await onRunError(context, event); + await onRunError(context as any, event); t.is(logEvent.message[0], 'Run complete with 
status: crash\nErr: it crashed'); }); diff --git a/packages/ws-worker/test/events/step-complete.test.ts b/packages/ws-worker/test/events/step-complete.test.ts index f12d1a47a..4972eb0f7 100644 --- a/packages/ws-worker/test/events/step-complete.test.ts +++ b/packages/ws-worker/test/events/step-complete.test.ts @@ -1,14 +1,15 @@ import test from 'ava'; -import handleStepStart from '../../src/events/step-complete'; +import type { StepCompletePayload } from '@openfn/lexicon/lightning'; +import handleStepComplete from '../../src/events/step-complete'; import { mockChannel } from '../../src/mock/sockets'; import { createRunState } from '../../src/util'; import { STEP_COMPLETE } from '../../src/events'; - -import type { ExecutionPlan } from '@openfn/runtime'; +import { createPlan } from '../util'; +import { JobCompletePayload } from '@openfn/engine-multi'; test('clear the step id and active job on state', async (t) => { - const plan = { id: 'run-1' }; + const plan = createPlan(); const jobId = 'job-1'; const state = createRunState(plan); @@ -19,16 +20,16 @@ test('clear the step id and active job on state', async (t) => { [STEP_COMPLETE]: () => true, }); - const event = { state: { x: 10 } }; - await handleStepStart({ channel, state }, event); + const event = { state: { x: 10 } } as any; + await handleStepComplete({ channel, state } as any, event); t.falsy(state.activeJob); t.falsy(state.activeStep); }); test('setup input mappings on on state', async (t) => { - let lightningEvent; - const plan = { id: 'run-1' }; + let lightningEvent: any; + const plan = createPlan(); const jobId = 'job-1'; const state = createRunState(plan); @@ -41,8 +42,8 @@ test('setup input mappings on on state', async (t) => { }, }); - const engineEvent = { state: { x: 10 }, next: ['job-2'] }; - await handleStepStart({ channel, state }, engineEvent); + const engineEvent = { state: { x: 10 }, next: ['job-2'] } as any; + await handleStepComplete({ channel, state } as any, engineEvent); 
t.deepEqual(state.inputDataclips, { ['job-2']: lightningEvent.output_dataclip_id, @@ -50,7 +51,7 @@ test('setup input mappings on on state', async (t) => { }); test('save the dataclip to state', async (t) => { - const plan = { id: 'run-1' } as ExecutionPlan; + const plan = createPlan(); const jobId = 'job-1'; const state = createRunState(plan); @@ -61,8 +62,8 @@ test('save the dataclip to state', async (t) => { [STEP_COMPLETE]: () => true, }); - const event = { state: { x: 10 } }; - await handleStepStart({ channel, state }, event); + const event = { state: { x: 10 } } as any; + await handleStepComplete({ channel, state } as any, event); t.is(Object.keys(state.dataclips).length, 1); const [dataclip] = Object.values(state.dataclips); @@ -70,7 +71,7 @@ test('save the dataclip to state', async (t) => { }); test('write a reason to state', async (t) => { - const plan = { id: 'run-1' } as ExecutionPlan; + const plan = createPlan(); const jobId = 'job-1'; const state = createRunState(plan); @@ -83,8 +84,8 @@ test('write a reason to state', async (t) => { [STEP_COMPLETE]: () => true, }); - const event = { state: { x: 10 } }; - await handleStepStart({ channel, state }, event); + const event = { state: { x: 10 } } as any; + await handleStepComplete({ channel, state } as any, event); t.is(Object.keys(state.reasons).length, 1); t.deepEqual(state.reasons[jobId], { @@ -95,14 +96,14 @@ test('write a reason to state', async (t) => { }); test('generate an exit reason: success', async (t) => { - const plan = { id: 'run-1' } as ExecutionPlan; + const plan = createPlan(); const jobId = 'job-1'; const state = createRunState(plan); state.activeJob = jobId; state.activeStep = 'b'; - let event; + let event: any; const channel = mockChannel({ [STEP_COMPLETE]: (e) => { @@ -110,7 +111,10 @@ test('generate an exit reason: success', async (t) => { }, }); - await handleStepStart({ channel, state }, { state: { x: 10 } }); + await handleStepComplete( + { channel, state } as any, + { state: { x: 10 
} } as any + ); t.truthy(event); t.is(event.reason, 'success'); @@ -119,7 +123,7 @@ test('generate an exit reason: success', async (t) => { }); test('send a step:complete event', async (t) => { - const plan = { id: 'run-1' }; + const plan = createPlan(); const jobId = 'job-1'; const result = { x: 10 }; @@ -128,7 +132,7 @@ test('send a step:complete event', async (t) => { state.activeStep = 'b'; const channel = mockChannel({ - [STEP_COMPLETE]: (evt) => { + [STEP_COMPLETE]: (evt: StepCompletePayload) => { t.is(evt.job_id, jobId); t.truthy(evt.step_id); t.truthy(evt.output_dataclip_id); @@ -140,11 +144,45 @@ test('send a step:complete event', async (t) => { }); const event = { + jobId, + workflowId: plan.id, state: result, next: ['a'], mem: { job: 1, system: 10 }, duration: 61, - threadId: 'abc', + thread_id: 'abc', + } as JobCompletePayload; + await handleStepComplete({ channel, state } as any, event); +}); + +test('do not include dataclips in step:complete if output_dataclip is false', async (t) => { + const plan = createPlan(); + const jobId = 'job-1'; + const result = { x: 10 }; + + const state = createRunState(plan); + state.activeJob = jobId; + state.activeStep = 'b'; + + const options = { + outputDataclips: false, }; - await handleStepStart({ channel, state }, event); + + const channel = mockChannel({ + [STEP_COMPLETE]: (evt: StepCompletePayload) => { + t.truthy(evt.output_dataclip_id); + t.falsy(evt.output_dataclip); + }, + }); + + const event = { + jobId, + workflowId: plan.id, + state: result, + next: ['a'], + mem: { job: 1, system: 10 }, + duration: 61, + thread_id: 'abc', + } as JobCompletePayload; + await handleStepComplete({ channel, state, options } as any, event); }); diff --git a/packages/ws-worker/test/events/step-start.test.ts b/packages/ws-worker/test/events/step-start.test.ts index 9cebbdc94..e97b69a61 100644 --- a/packages/ws-worker/test/events/step-start.test.ts +++ b/packages/ws-worker/test/events/step-start.test.ts @@ -10,7 +10,11 @@ import { 
RUN_LOG, STEP_START } from '../../src/events'; import pkg from '../../package.json' assert { type: 'json' }; test('set a step id and active job on state', async (t) => { - const plan = { id: 'run-1', jobs: [{ id: 'job-1' }] }; + const plan = { + id: 'run-1', + workflow: { steps: [{ id: 'job-1' }] }, + options: {}, + }; const jobId = 'job-1'; const state = createRunState(plan); @@ -20,7 +24,7 @@ test('set a step id and active job on state', async (t) => { [RUN_LOG]: (x) => x, }); - await handleStepStart({ channel, state }, { jobId }); + await handleStepStart({ channel, state } as any, { jobId } as any); t.is(state.activeJob, jobId); t.truthy(state.activeStep); @@ -29,37 +33,43 @@ test('set a step id and active job on state', async (t) => { test('send a step:start event', async (t) => { const plan = { id: 'run-1', - initialState: 'abc', - jobs: [ - { id: 'job-1', expression: '.' }, - { id: 'job-2', expression: '.' }, - ], + workflow: { + steps: [ + { id: 'job-1', expression: '.' }, + { id: 'job-2', expression: '.' }, + ], + }, + options: {}, }; + const input = 'abc'; const jobId = 'job-1'; - const state = createRunState(plan); + const state = createRunState(plan, input); state.activeJob = jobId; state.activeStep = 'b'; const channel = mockChannel({ [STEP_START]: (evt) => { t.is(evt.job_id, jobId); - t.is(evt.input_dataclip_id, plan.initialState); + t.is(evt.input_dataclip_id, input); t.truthy(evt.step_id); return true; }, [RUN_LOG]: () => true, }); - await handleStepStart({ channel, state }, { jobId }); + await handleStepStart({ channel, state } as any, { jobId } as any); }); test('step:start event should include versions', async (t) => { const plan = { id: 'run-1', - initialState: 'abc', - jobs: [{ id: 'job-1', expression: '.' }], + workflow: { + steps: [{ id: 'job-1', expression: '.' 
}], + }, + options: {}, }; + const input = 'abc'; const jobId = 'job-1'; const versions = { @@ -76,7 +86,7 @@ test('step:start event should include versions', async (t) => { versions, }; - const state = createRunState(plan); + const state = createRunState(plan, input); state.activeJob = jobId; state.activeStep = 'b'; @@ -91,16 +101,19 @@ test('step:start event should include versions', async (t) => { [RUN_LOG]: () => true, }); - await handleStepStart({ channel, state }, event); + await handleStepStart({ channel, state } as any, event); }); test('also logs the version number', async (t) => { - let logEvent; + let logEvent: any; const plan = { id: 'run-1', - initialState: 'abc', - jobs: [{ id: 'job-1', expression: '.' }], + workflow: { + steps: [{ id: 'job-1', expression: '.' }], + }, + options: {}, }; + const input = 'abc'; const jobId = 'job-1'; const versions = { @@ -117,12 +130,12 @@ test('also logs the version number', async (t) => { versions, }; - const state = createRunState(plan); + const state = createRunState(plan, input); state.activeJob = jobId; state.activeStep = 'b'; const channel = mockChannel({ - [STEP_START]: (evt) => true, + [STEP_START]: () => true, [RUN_LOG]: (evt) => { if (evt.source === 'VER') { logEvent = evt; @@ -131,7 +144,7 @@ test('also logs the version number', async (t) => { }, }); - await handleStepStart({ channel, state }, event); + await handleStepStart({ channel, state } as any, event); t.truthy(logEvent); t.is(logEvent.level, 'info'); diff --git a/packages/ws-worker/test/lightning.test.ts b/packages/ws-worker/test/lightning.test.ts index 337e1847a..1fe55c90a 100644 --- a/packages/ws-worker/test/lightning.test.ts +++ b/packages/ws-worker/test/lightning.test.ts @@ -3,16 +3,24 @@ */ import test from 'ava'; -import createLightningServer from '@openfn/lightning-mock'; +import type { + LightningPlan, + RunCompletePayload, +} from '@openfn/lexicon/lightning'; +import createLightningServer, { + generateKeys, + toBase64, +} from 
'@openfn/lightning-mock'; import { createRun, createEdge, createJob } from './util'; - import createWorkerServer from '../src/server'; -import createMockRTE from '../src/mock/runtime-engine'; import * as e from '../src/events'; +import createMockRTE from '../src/mock/runtime-engine'; -let lng; -let worker; +let lng: any; +let worker: any; + +let keys = { private: '.', public: '.' }; const urls = { worker: 'http://localhost:4567', @@ -20,14 +28,23 @@ const urls = { }; test.before(async () => { + keys = await generateKeys(); + const engine = await createMockRTE(); - // TODO give lightning the same secret and do some validation - lng = createLightningServer({ port: 7654 }); + lng = createLightningServer({ + port: 7654, + runPrivateKey: toBase64(keys.private), + }); + worker = createWorkerServer(engine, { port: 4567, lightning: urls.lng, secret: 'abc', maxWorkflows: 1, + + // Note that if this is not passed, + // JWT verification will be skipped + runPublicKey: keys.public, }); }); @@ -37,21 +54,22 @@ test.afterEach(() => { let rollingRunId = 0; -const getRun = (ext = {}, jobs?: any) => ({ - id: `a${++rollingRunId}`, - jobs: jobs || [ - { - id: 'j', - adaptor: '@openfn/language-common@1.0.0', - body: 'fn(() => ({ answer: 42 }))', - }, - ], - ...ext, -}); +const getRun = (ext = {}, jobs?: any[]): LightningPlan => + ({ + id: `a${++rollingRunId}`, + jobs: jobs || [ + { + id: 'j', + adaptor: '@openfn/language-common@1.0.0', + body: 'fn(() => ({ answer: 42 }))', + }, + ], + ...ext, + } as LightningPlan); test.serial(`events: lightning should respond to a ${e.CLAIM} event`, (t) => { return new Promise((done) => { - lng.on(e.CLAIM, (evt) => { + lng.on(e.CLAIM, (evt: any) => { const response = evt.payload; t.deepEqual(response, []); done(); @@ -64,9 +82,9 @@ test.serial( (t) => { return new Promise((done) => { const run = getRun(); - let response; + let response: any; - lng.on(e.CLAIM, ({ payload }) => { + lng.on(e.CLAIM, ({ payload }: any) => { if (payload.length) { response 
= payload[0]; } @@ -87,8 +105,15 @@ test.serial( } ); +test.todo('worker should log when a run token is verified'); + +// Perhaps a workflow exception is the most responsible thing right now +test.todo( + 'worker should throw or otherwise blow up when token verification fails' +); + test.serial( - 'should run an run which returns an expression as JSON', + 'should run a run which returns an expression as JSON', async (t) => { return new Promise((done) => { const run = { @@ -100,7 +125,7 @@ test.serial( ], }; - lng.waitForResult(run.id).then((result) => { + lng.waitForResult(run.id).then((result: any) => { t.deepEqual(result, { count: 122 }); done(); }); @@ -110,7 +135,7 @@ test.serial( } ); -test.serial('should run an run which returns intial state', async (t) => { +test.serial('should run a run which returns initial state', async (t) => { return new Promise((done) => { lng.addDataclip('x', { data: 66, @@ -126,7 +151,7 @@ test.serial('should run an run which returns intial state', async (t) => { ], }; - lng.waitForResult(run.id).then((result) => { + lng.waitForResult(run.id).then((result: any) => { t.deepEqual(result, { data: 66 }); done(); }); @@ -142,7 +167,7 @@ test.serial( (t) => { return new Promise((done) => { const run = getRun(); - lng.onSocketEvent(e.RUN_COMPLETE, run.id, (evt) => { + lng.onSocketEvent(e.RUN_COMPLETE, run.id, (evt: any) => { const { final_dataclip_id } = evt.payload; t.assert(typeof final_dataclip_id === 'string'); t.pass('run complete event received'); @@ -160,34 +185,33 @@ test.todo(`events: lightning should receive a ${e.RUN_START} event`); // for each event we can see a copy of the server state // (if that helps anything?) 
-test.serial( - `events: lightning should receive a ${e.GET_PLAN} event`, - (t) => { - return new Promise((done) => { - const run = getRun(); - - let didCallEvent = false; - lng.onSocketEvent(e.GET_PLAN, run.id, ({ payload }) => { - // This doesn't test that the correct run gets sent back - // We'd have to add an event to the engine for that - // (not a bad idea) - didCallEvent = true; - }); +test.serial(`events: lightning should receive a ${e.GET_PLAN} event`, (t) => { + return new Promise((done) => { + const run = getRun(); - lng.onSocketEvent(e.RUN_COMPLETE, run.id, (evt) => { - t.true(didCallEvent); - done(); - }); + let didCallEvent = false; + lng.onSocketEvent(e.GET_PLAN, run.id, () => { + // This doesn't test that the correct run gets sent back + // We'd have to add an event to the engine for that + // (not a bad idea) + didCallEvent = true; + }); - lng.enqueueRun(run); + lng.onSocketEvent(e.RUN_COMPLETE, run.id, () => { + t.true(didCallEvent); + done(); }); - } -); + + lng.enqueueRun(run); + }); +}); test.serial( `events: lightning should receive a ${e.GET_CREDENTIAL} event`, (t) => { return new Promise((done) => { + lng.addCredential('a', {}); + const run = getRun({}, [ { id: 'some-job', @@ -203,7 +227,8 @@ test.serial( didCallEvent = true; }); - lng.onSocketEvent(e.RUN_COMPLETE, run.id, () => { + lng.onSocketEvent(e.RUN_COMPLETE, run.id, ({ payload }: any) => { + t.is(payload.reason, 'success'); t.true(didCallEvent); done(); }); @@ -224,7 +249,7 @@ test.serial( }); let didCallEvent = false; - lng.onSocketEvent(e.GET_DATACLIP, run.id, ({ payload }) => { + lng.onSocketEvent(e.GET_DATACLIP, run.id, ({ payload }: any) => { // payload is the incoming/request payload - this tells us which dataclip // the worker is asking for // Note that it doesn't tell us much about what is returned @@ -243,17 +268,48 @@ test.serial( } ); +test.serial( + `events: worker should send an error if ${e.GET_DATACLIP} references a non-existent dataclip`, + (t) => { + return new 
Promise((done) => { + const run = getRun({ + dataclip_id: 'xyz', + }); + // Do not load the dataclip into lightning + + let didCallEvent = false; + lng.onSocketEvent(e.GET_DATACLIP, run.id, () => { + didCallEvent = true; + }); + + lng.onSocketEvent( + e.RUN_COMPLETE, + run.id, + ({ payload }: { payload: RunCompletePayload }) => { + t.true(didCallEvent); + t.is(payload.reason, 'exception'); + t.is(payload.error_type, 'DataClipError'); + t.regex(payload.error_message!, /Failed to load dataclip xyz/); + done(); + } + ); + + lng.enqueueRun(run); + }); + } +); + test.serial(`events: lightning should receive a ${e.STEP_START} event`, (t) => { return new Promise((done) => { const run = getRun(); - lng.onSocketEvent(e.STEP_START, run.id, ({ payload }) => { + lng.onSocketEvent(e.STEP_START, run.id, ({ payload }: any) => { t.is(payload.job_id, 'j'); t.truthy(payload.step_id); t.pass('called run start'); }); - lng.onSocketEvent(e.RUN_COMPLETE, run.id, (evt) => { + lng.onSocketEvent(e.RUN_COMPLETE, run.id, () => { done(); }); @@ -267,7 +323,7 @@ test.serial( return new Promise((done) => { const run = getRun(); - lng.onSocketEvent(e.STEP_COMPLETE, run.id, ({ payload }) => { + lng.onSocketEvent(e.STEP_COMPLETE, run.id, ({ payload }: any) => { t.is(payload.job_id, 'j'); t.truthy(payload.step_id); t.truthy(payload.output_dataclip); @@ -278,7 +334,7 @@ test.serial( t.pass('called run complete'); }); - lng.onSocketEvent(e.RUN_COMPLETE, run.id, (evt) => { + lng.onSocketEvent(e.RUN_COMPLETE, run.id, () => { done(); }); @@ -299,12 +355,12 @@ test.serial( }, ]); - lng.onSocketEvent(e.STEP_COMPLETE, run.id, ({ payload }) => { + lng.onSocketEvent(e.STEP_COMPLETE, run.id, ({ payload }: any) => { t.is(payload.reason, 'fail'); t.pass('called step complete'); }); - lng.onSocketEvent(e.RUN_COMPLETE, run.id, ({ payload }) => { + lng.onSocketEvent(e.RUN_COMPLETE, run.id, () => { done(); }); @@ -313,37 +369,56 @@ test.serial( } ); -test.serial( - `events: lightning should receive a ${e.RUN_LOG} 
event`, - (t) => { - return new Promise((done) => { - const run = { - id: 'run-1', - jobs: [ - { - body: 'fn((s) => { console.log("x"); return s })', - }, - ], - }; +test.serial(`events: ${e.STEP_COMPLETE} should not return dataclips`, (t) => { + return new Promise((done) => { + const run = getRun(); + run.options = { + output_dataclips: false, + }; - lng.onSocketEvent(e.RUN_LOG, run.id, ({ payload }) => { - const log = payload; + lng.onSocketEvent(e.STEP_COMPLETE, run.id, ({ payload }: any) => { + t.is(payload.job_id, 'j'); + t.falsy(payload.output_dataclip); + t.truthy(payload.output_dataclip_id); + t.pass(); + }); - t.is(log.level, 'info'); - t.truthy(log.run_id); - t.truthy(log.step_id); - t.truthy(log.message); - t.deepEqual(log.message, ['x']); - }); + lng.onSocketEvent(e.RUN_COMPLETE, run.id, () => { + done(); + }); - lng.onSocketEvent(e.RUN_COMPLETE, run.id, (evt) => { - done(); - }); + lng.enqueueRun(run); + }); +}); - lng.enqueueRun(run); +test.serial(`events: lightning should receive a ${e.RUN_LOG} event`, (t) => { + return new Promise((done) => { + const run = { + id: 'run-1', + jobs: [ + { + body: 'fn((s) => { console.log("x"); return s })', + }, + ], + }; + + lng.onSocketEvent(e.RUN_LOG, run.id, ({ payload }: any) => { + const log = payload; + + t.is(log.level, 'info'); + t.truthy(log.run_id); + t.truthy(log.step_id); + t.truthy(log.message); + t.deepEqual(log.message, ['x']); }); - } -); + + lng.onSocketEvent(e.RUN_COMPLETE, run.id, () => { + done(); + }); + + lng.enqueueRun(run); + }); +}); // Skipping because this is flaky at microsecond resolution // See branch hrtime-send-nanoseconds-to-lightning where this should be more robust @@ -366,13 +441,13 @@ test.serial.skip(`events: logs should have increasing timestamps`, (t) => { lng.onSocketEvent( e.RUN_LOG, run.id, - ({ payload }) => { + ({ payload }: any) => { history.push(BigInt(payload.timestamp)); }, false ); - lng.onSocketEvent(e.RUN_COMPLETE, run.id, (evt) => { + 
lng.onSocketEvent(e.RUN_COMPLETE, run.id, () => { t.log(history); let last = BigInt(0); @@ -407,7 +482,7 @@ test.serial( return new Promise((done) => { const run = getRun(); - lng.onSocketEvent(e.RUN_COMPLETE, run.id, (evt) => { + lng.onSocketEvent(e.RUN_COMPLETE, run.id, () => { t.pass('called run:complete'); done(); }); @@ -417,37 +492,34 @@ test.serial( } ); -test.serial( - 'should register and de-register runs to the server', - async (t) => { - return new Promise((done) => { - const run = { - id: 'run-1', - jobs: [ - { - body: 'fn(() => ({ count: 122 }))', - }, - ], - }; - - worker.on(e.RUN_START, () => { - t.truthy(worker.workflows[run.id]); - }); +test.serial('should register and de-register runs to the server', async (t) => { + return new Promise((done) => { + const run = { + id: 'run-1', + jobs: [ + { + body: 'fn(() => ({ count: 122 }))', + }, + ], + }; - lng.onSocketEvent(e.RUN_COMPLETE, run.id, (evt) => { - t.truthy(worker.workflows[run.id]); - // Tidyup is done AFTER lightning receives the event - // This timeout is crude but should work - setTimeout(() => { - t.falsy(worker.workflows[run.id]); - done(); - }, 10); - }); + worker.on(e.RUN_START, () => { + t.truthy(worker.workflows[run.id]); + }); - lng.enqueueRun(run); + lng.onSocketEvent(e.RUN_COMPLETE, run.id, () => { + t.truthy(worker.workflows[run.id]); + // Tidyup is done AFTER lightning receives the event + // This timeout is crude but should work + setTimeout(() => { + t.falsy(worker.workflows[run.id]); + done(); + }, 10); }); - } -); + + lng.enqueueRun(run); + }); +}); // TODO this is a server test // What I am testing here is that the first job completes @@ -469,10 +541,10 @@ test.skip('should not claim while at capacity', async (t) => { id: 'run-2', }; - let run1Start; + let run1Start: any; // When the first run starts, we should only have run 1 in progress - lng.onSocketEvent(e.RUN_START, run1.id, (evt) => { + lng.onSocketEvent(e.RUN_START, run1.id, () => { run1Start = Date.now(); 
t.truthy(worker.workflows[run1.id]); @@ -480,7 +552,7 @@ test.skip('should not claim while at capacity', async (t) => { }); // When the second run starts, we should only have run 2 in progress - lng.onSocketEvent(e.RUN_START, run2.id, (evt) => { + lng.onSocketEvent(e.RUN_START, run2.id, () => { const duration = Date.now() - run1Start; t.true(duration > 490); @@ -490,7 +562,7 @@ test.skip('should not claim while at capacity', async (t) => { // also, the now date should be around 500 ms after the first start }); - lng.onSocketEvent(e.RUN_COMPLETE, run2.id, (evt) => { + lng.onSocketEvent(e.RUN_COMPLETE, run2.id, () => { done(); }); @@ -506,9 +578,9 @@ test.serial('should pass the right dataclip when running in parallel', (t) => { body: `fn((s) => { s.data.${id} = true; return s; })`, }); - const outputDataclipIds = {}; - const inputDataclipIds = {}; - const outputs = {}; + const outputDataclipIds: any = {}; + const inputDataclipIds: any = {}; + const outputs: any = {}; const a = { id: 'a', body: 'fn(() => ({ data: { a: true } }))', @@ -535,7 +607,7 @@ test.serial('should pass the right dataclip when running in parallel', (t) => { const unsub2 = lng.onSocketEvent( e.STEP_START, run.id, - ({ payload }) => { + ({ payload }: any) => { inputDataclipIds[payload.job_id] = payload.input_dataclip_id; }, false @@ -545,14 +617,14 @@ test.serial('should pass the right dataclip when running in parallel', (t) => { const unsub1 = lng.onSocketEvent( e.STEP_COMPLETE, run.id, - ({ payload }) => { + ({ payload }: any) => { outputDataclipIds[payload.job_id] = payload.output_dataclip_id; outputs[payload.job_id] = JSON.parse(payload.output_dataclip); }, false ); - lng.onSocketEvent(e.RUN_COMPLETE, run.id, (evt) => { + lng.onSocketEvent(e.RUN_COMPLETE, run.id, () => { unsub1(); unsub2(); @@ -595,7 +667,7 @@ test.serial( const bc = createEdge('b', 'c'); bc.condition = 'on_job_success'; - const run = createRun([a, b, c], [ab, bc]); + const run = createRun([a, b, c] as any, [ab, bc] as any); 
const results: Record = {}; @@ -603,13 +675,13 @@ test.serial( const unsub = lng.onSocketEvent( e.STEP_COMPLETE, run.id, - (evt) => { + (evt: any) => { results[evt.payload.job_id] = JSON.parse(evt.payload.output_dataclip); }, false ); - lng.onSocketEvent(e.RUN_COMPLETE, run.id, (evt) => { + lng.onSocketEvent(e.RUN_COMPLETE, run.id, (evt: any) => { t.is(evt.payload.reason, 'success'); // What we REALLY care about is that the b-c edge condition @@ -629,7 +701,7 @@ test.serial( test.serial(`worker should send a success reason in the logs`, (t) => { return new Promise((done) => { - let log; + let log: any; const run = { id: 'run-1', @@ -640,7 +712,7 @@ test.serial(`worker should send a success reason in the logs`, (t) => { ], }; - lng.onSocketEvent(e.RUN_LOG, run.id, ({ payload }) => { + lng.onSocketEvent(e.RUN_LOG, run.id, ({ payload }: any) => { if (payload.message[0].match(/Run complete with status: success/)) { log = payload.message[0]; } @@ -657,7 +729,7 @@ test.serial(`worker should send a success reason in the logs`, (t) => { test.serial(`worker should send a fail reason in the logs`, (t) => { return new Promise((done) => { - let log; + let log: any; const run = { id: 'run-1', @@ -668,7 +740,7 @@ test.serial(`worker should send a fail reason in the logs`, (t) => { ], }; - lng.onSocketEvent(e.RUN_LOG, run.id, ({ payload }) => { + lng.onSocketEvent(e.RUN_LOG, run.id, ({ payload }: any) => { if (payload.message[0].match(/Run complete with status: fail/)) { log = payload.message[0]; } diff --git a/packages/ws-worker/test/mock/runtime-engine.test.ts b/packages/ws-worker/test/mock/runtime-engine.test.ts index bfb9eba63..aa66c2aa6 100644 --- a/packages/ws-worker/test/mock/runtime-engine.test.ts +++ b/packages/ws-worker/test/mock/runtime-engine.test.ts @@ -1,44 +1,52 @@ import test from 'ava'; -import create, { - JobCompleteEvent, - JobStartEvent, - WorkflowCompleteEvent, - WorkflowStartEvent, -} from '../../src/mock/runtime-engine'; -import type { ExecutionPlan } from 
'@openfn/runtime'; -import { waitForEvent, clone } from '../util'; +import type { ExecutionPlan } from '@openfn/lexicon'; + +import type { + JobCompletePayload, + JobStartPayload, + WorkflowCompletePayload, + WorkflowStartPayload, +} from '@openfn/engine-multi'; +import create from '../../src/mock/runtime-engine'; +import { waitForEvent, clone, createPlan } from '../util'; +import { WorkflowErrorPayload } from '@openfn/engine-multi'; const sampleWorkflow = { id: 'w1', - jobs: [ - { - id: 'j1', - adaptor: 'common@1.0.0', - expression: 'fn(() => ({ data: { x: 10 } }))', - }, - ], + workflow: { + steps: [ + { + id: 'j1', + adaptor: 'common@1.0.0', + expression: 'fn(() => ({ data: { x: 10 } }))', + }, + ], + }, } as ExecutionPlan; -let engine; +let engine: any; test.before(async () => { engine = await create(); }); -test('getStatus() should should have no active workflows', async (t) => { +test.serial('getStatus() should have no active workflows', async (t) => { const { active } = engine.getStatus(); t.is(active, 0); }); -test('Dispatch start events for a new workflow', async (t) => { +test.serial('Dispatch start events for a new workflow', async (t) => { engine.execute(sampleWorkflow); - const evt = await waitForEvent(engine, 'workflow-start'); + const evt = await waitForEvent( + engine, + 'workflow-start' + ); t.truthy(evt); t.is(evt.workflowId, 'w1'); }); -test('getStatus should report one active workflow', async (t) => { +test.serial('getStatus should report one active workflow', async (t) => { engine.execute(sampleWorkflow); const { active } = engine.getStatus(); @@ -46,9 +54,9 @@ test('getStatus should report one active workflow', async (t) => { t.is(active, 1); }); -test('Dispatch complete events when a workflow completes', async (t) => { +test.serial('Dispatch complete events when a workflow completes', async (t) => { engine.execute(sampleWorkflow); - const evt = await waitForEvent( + const evt = await waitForEvent( engine, 'workflow-complete' ); @@ 
-57,80 +65,77 @@ test('Dispatch complete events when a workflow completes', async (t) => { t.truthy(evt.threadId); }); -test('Dispatch start events for a job', async (t) => { +test.serial('Dispatch start events for a job', async (t) => { engine.execute(sampleWorkflow); - const evt = await waitForEvent(engine, 'job-start'); + const evt = await waitForEvent(engine, 'job-start'); t.truthy(evt); t.is(evt.workflowId, 'w1'); t.is(evt.jobId, 'j1'); }); -test('Dispatch complete events for a job', async (t) => { +test.serial('Dispatch complete events for a job', async (t) => { engine.execute(sampleWorkflow); - const evt = await waitForEvent(engine, 'job-complete'); + const evt = await waitForEvent(engine, 'job-complete'); t.truthy(evt); t.is(evt.workflowId, 'w1'); t.is(evt.jobId, 'j1'); t.deepEqual(evt.state, { data: { x: 10 } }); }); -test('Dispatch error event for a crash', async (t) => { - const wf = { - id: 'xyz', - jobs: [ - { - id: 'j1', - adaptor: 'common@1.0.0', - expression: 'fn(() => ( @~!"@£!4 )', - }, - ], - } as ExecutionPlan; +test.serial('Dispatch error event for a crash', async (t) => { + const wf = createPlan({ + id: 'j1', + adaptor: 'common@1.0.0', + expression: 'fn(() => ( @~!"@£!4 )', + }); engine.execute(wf); - const evt = await waitForEvent(engine, 'workflow-error'); + const evt = await waitForEvent( + engine, + 'workflow-error' + ); - t.is(evt.workflowId, 'xyz'); + t.is(evt.workflowId, wf.id!); t.is(evt.type, 'RuntimeCrash'); t.regex(evt.message, /invalid or unexpected token/i); }); -test('wait function', async (t) => { - const wf = { - id: 'w1', - jobs: [ - { - id: 'j1', - expression: 'wait(100)', - }, - ], - } as ExecutionPlan; +test.serial('wait function', async (t) => { + const wf = createPlan({ + id: 'j1', + expression: 'wait(100)', + }); engine.execute(wf); const start = Date.now(); - await waitForEvent(engine, 'workflow-complete'); + await waitForEvent(engine, 'workflow-complete'); const end = Date.now() - start; t.true(end > 90); }); 
-test('resolve credential before job-start if credential is a string', async (t) => { - const wf = clone(sampleWorkflow); - wf.jobs[0].configuration = 'x'; +test.serial( + 'resolve credential before job-start if credential is a string', + async (t) => { + const wf = clone(sampleWorkflow); + wf.id = t.title; + wf.workflow.steps[0].configuration = 'x'; - let didCallCredentials; - const credential = async (_id) => { - didCallCredentials = true; - return {}; - }; + let didCallCredentials; + const credential = async () => { + didCallCredentials = true; + return {}; + }; - // @ts-ignore - engine.execute(wf, { resolvers: { credential } }); + // @ts-ignore + engine.execute(wf, {}, { resolvers: { credential } }); - await waitForEvent(engine, 'job-start'); - t.true(didCallCredentials); -}); + await waitForEvent(engine, 'job-start'); + t.true(didCallCredentials); + } +); -test('listen to events', async (t) => { +test.serial('listen to events', async (t) => { const called = { 'job-start': false, 'job-complete': false, @@ -139,68 +144,64 @@ test('listen to events', async (t) => { 'workflow-complete': false, }; - const wf = { - id: 'wibble', - jobs: [ - { - id: 'j1', - adaptor: 'common@1.0.0', - expression: 'export default [() => { console.log("x"); }]', - }, - ], - } as ExecutionPlan; + const wf = createPlan({ + id: 'j1', + adaptor: 'common@1.0.0', + expression: 'export default [() => { console.log("x"); }]', + }); engine.listen(wf.id, { - 'job-start': ({ workflowId, jobId }) => { + 'job-start': ({ workflowId, jobId }: any) => { called['job-start'] = true; t.is(workflowId, wf.id); - t.is(jobId, wf.jobs[0].id); + t.is(jobId, wf.workflow.steps[0].id); }, - 'job-complete': ({ workflowId, jobId }) => { + 'job-complete': ({ workflowId, jobId }: any) => { called['job-complete'] = true; t.is(workflowId, wf.id); - t.is(jobId, wf.jobs[0].id); + t.is(jobId, wf.workflow.steps[0].id); // TODO includes state? 
}, - 'workflow-log': ({ workflowId, message }) => { + 'workflow-log': ({ workflowId, message }: any) => { called['workflow-log'] = true; t.is(workflowId, wf.id); t.truthy(message); }, - 'workflow-start': ({ workflowId }) => { + 'workflow-start': ({ workflowId }: any) => { called['workflow-start'] = true; t.is(workflowId, wf.id); }, - 'workflow-complete': ({ workflowId }) => { + 'workflow-complete': ({ workflowId }: any) => { called['workflow-complete'] = true; t.is(workflowId, wf.id); }, }); engine.execute(wf); - await waitForEvent(engine, 'workflow-complete'); + await waitForEvent(engine, 'workflow-complete'); t.assert(Object.values(called).every((v) => v === true)); }); -test('only listen to events for the correct workflow', async (t) => { +test.serial('only listen to events for the correct workflow', async (t) => { engine.listen('bobby mcgee', { - 'workflow-start': ({ workflowId }) => { + 'workflow-start': () => { throw new Error('should not have called this!!'); }, }); engine.execute(sampleWorkflow); - await waitForEvent(engine, 'workflow-complete'); + await waitForEvent(engine, 'workflow-complete'); t.pass(); }); test('log events should stringify a string message', async (t) => { const wf = clone(sampleWorkflow); - wf.jobs[0].expression = + wf.id = t.title; + wf.workflow.steps[0].expression = 'fn((s) => {console.log("haul away joe"); return s; })'; engine.listen(wf.id, { - 'workflow-log': ({ message }) => { + 'workflow-log': ({ message }: any) => { t.is(typeof message, 'string'); const result = JSON.parse(message); t.deepEqual(result, ['haul away joe']); @@ -208,15 +209,17 @@ test('log events should stringify a string message', async (t) => { }); engine.execute(wf); - await waitForEvent(engine, 'workflow-complete'); + await waitForEvent(engine, 'workflow-complete'); }); -test('log events should stringify an object message', async (t) => { +test.serial('log events should stringify an object message', async (t) => { const wf = clone(sampleWorkflow); - 
wf.jobs[0].expression = 'fn((s) => {console.log({ x: 22 }); return s; })'; + wf.id = t.title; + wf.workflow.steps[0].expression = + 'fn((s) => {console.log({ x: 22 }); return s; })'; engine.listen(wf.id, { - 'workflow-log': ({ message }) => { + 'workflow-log': ({ message }: any) => { t.is(typeof message, 'string'); const result = JSON.parse(message); t.deepEqual(result, [{ x: 22 }]); @@ -224,55 +227,53 @@ test('log events should stringify an object message', async (t) => { }); engine.execute(wf); - await waitForEvent(engine, 'workflow-complete'); + await waitForEvent(engine, 'workflow-complete'); }); -test('do nothing for a job if no expression and adaptor (trigger node)', async (t) => { - const workflow = { - id: 'w1', - jobs: [ - { - id: 'j1', - adaptor: '@openfn/language-common@1.0.0', - }, - ], - } as ExecutionPlan; +test.serial( + 'do nothing for a job if no expression and adaptor (trigger node)', + async (t) => { + // @ts-ignore + const workflow = createPlan({ + id: 'j1', + adaptor: '@openfn/language-common@1.0.0', + }); - let didCallEvent = false; + let didCallEvent = false; - engine.listen(workflow.id, { - 'job-start': () => { - didCallEvent = true; - }, - 'job-complete': () => { - didCallEvent = true; - }, - 'workflow-log': () => { - // this can be called - }, - 'workflow-start': () => { - // ditto - }, - 'workflow-complete': () => { - // ditto - }, - }); + engine.listen(workflow.id, { + 'job-start': () => { + didCallEvent = true; + }, + 'job-complete': () => { + didCallEvent = true; + }, + 'workflow-log': () => { + // this can be called + }, + 'workflow-start': () => { + // ditto + }, + 'workflow-complete': () => { + // ditto + }, + }); - engine.execute(workflow); - await waitForEvent(engine, 'workflow-complete'); + engine.execute(workflow); + await waitForEvent(engine, 'workflow-complete'); - t.false(didCallEvent); -}); + t.false(didCallEvent); + } +); -test('timeout', async (t) => { +test.skip('timeout', async (t) => { const wf = clone(sampleWorkflow); 
- wf.jobs[0].expression = 'wait(1000)'; - // wf.options = { timeout: 10 }; + wf.workflow.steps[0].expression = 'wait(1000)'; // @ts-ignore - engine.execute(wf, { timeout: 10 }); + engine.execute(wf, {}, { timeout: 10 }); - const evt = await waitForEvent( + const evt = await waitForEvent( engine, 'workflow-error' ); diff --git a/packages/ws-worker/test/mock/sockets.test.ts b/packages/ws-worker/test/mock/sockets.test.ts index 46ad6aa4c..2312d9582 100644 --- a/packages/ws-worker/test/mock/sockets.test.ts +++ b/packages/ws-worker/test/mock/sockets.test.ts @@ -68,7 +68,7 @@ test('mock channel: invoke the ok handler with the callback result', (t) => { }, }); - channel.push('ping', 'abc').receive('ok', (evt) => { + channel.push('ping', 'abc').receive('ok', (evt: any) => { t.is(evt, 'pong!'); t.pass(); done(); diff --git a/packages/ws-worker/test/reasons.test.ts b/packages/ws-worker/test/reasons.test.ts index 3ceecfb1a..514dbd6b8 100644 --- a/packages/ws-worker/test/reasons.test.ts +++ b/packages/ws-worker/test/reasons.test.ts @@ -1,22 +1,23 @@ import test from 'ava'; import createRTE from '@openfn/engine-multi'; import { createMockLogger } from '@openfn/logger'; +import type { ExitReason } from '@openfn/lexicon/lightning'; import { createPlan } from './util'; import { execute as doExecute } from '../src/api/execute'; import { mockChannel } from '../src/mock/sockets'; - import { STEP_START, STEP_COMPLETE, RUN_LOG, RUN_START, RUN_COMPLETE, + GET_CREDENTIAL, } from '../src/events'; -import { ExitReason } from '../src/types'; +import { ExecutionPlan } from '@openfn/lexicon'; -let engine; -let logger; +let engine: any; +let logger: any; test.before(async () => { logger = createMockLogger(); @@ -39,7 +40,7 @@ test.before(async () => { test.after(async () => engine.destroy()); // Wrap up an execute call, capture the on complete state -const execute = async (plan, options = {}) => +const execute = async (plan: ExecutionPlan, input = {}, options = {}) => new Promise<{ reason: 
ExitReason; state: any }>((done) => { // Ignore all channel events // In these tests we assume that the correct messages are sent to the channel @@ -49,14 +50,16 @@ const execute = async (plan, options = {}) => [RUN_LOG]: async () => true, [STEP_COMPLETE]: async () => true, [RUN_COMPLETE]: async () => true, + [GET_CREDENTIAL]: async () => { + throw new Error('err'); + }, }); - const onFinish = (result) => { + const onFinish = (result: any) => { done(result); }; - // @ts-ignore - doExecute(channel, engine, logger, plan, options, onFinish); + doExecute(channel, engine, logger, plan, input, options, onFinish); }); test('success', async (t) => { @@ -65,9 +68,9 @@ test('success', async (t) => { expression: '(s) => s', }); - plan.initialState = { data: { result: 42 } }; + const input = { data: { result: 42 } }; - const { reason } = await execute(plan); + const { reason } = await execute(plan, input); t.is(reason.reason, 'success'); }); @@ -165,10 +168,11 @@ test('fail: error in the first job, with downstream job that is not run', async { id: 'a', expression: 'export default [(s) => {throw "abort!"}]', - next: { b: true }, + next: { b: '!state.errors' }, }, { id: 'b', + expression: 'export default [(s) => s]', } ); @@ -221,6 +225,20 @@ test('exception: autoinstall error', async (t) => { ); }); +test('exception: failed to load credential', async (t) => { + const plan = createPlan({ + id: 'aa', + expression: 'export default [() => s]', + configuration: 'zzz', + }); + + const { reason } = await execute(plan); + + t.is(reason.reason, 'exception'); + t.is(reason.error_type, 'CredentialLoadError'); + t.is(reason.error_message, 'Failed to load credential zzz for step aa'); +}); + test('kill: timeout', async (t) => { const plan = createPlan({ id: 'x', @@ -231,7 +249,7 @@ test('kill: timeout', async (t) => { runTimeoutMs: 100, }; - const { reason } = await execute(plan, options); + const { reason } = await execute(plan, {}, options); t.is(reason.reason, 'kill');
t.is(reason.error_type, 'TimeoutError'); t.is(reason.error_message, 'Workflow failed to return within 100ms'); diff --git a/packages/ws-worker/test/server.test.ts b/packages/ws-worker/test/server.test.ts index c6eb919b1..1ac45a6cc 100644 --- a/packages/ws-worker/test/server.test.ts +++ b/packages/ws-worker/test/server.test.ts @@ -4,7 +4,7 @@ import createWorkerServer from '../src/server'; test.before(async () => { const engine = await createMockRTE(); - createWorkerServer(engine, { + createWorkerServer(engine as any, { port: 2323, secret: 'abc', maxWorkflows: 1, diff --git a/packages/ws-worker/test/util.ts b/packages/ws-worker/test/util.ts index fe663d009..df70a3c99 100644 --- a/packages/ws-worker/test/util.ts +++ b/packages/ws-worker/test/util.ts @@ -1,7 +1,8 @@ -import { ExecutionPlan } from '@openfn/runtime'; +import { ExecutionPlan, Job } from '@openfn/lexicon'; +import { Edge, Node } from '@openfn/lexicon/lightning'; import crypto from 'node:crypto'; -export const wait = (fn, maxRuns = 100) => +export const wait = (fn: () => any, maxRuns = 100) => new Promise((resolve) => { let count = 0; let ival = setInterval(() => { @@ -19,11 +20,11 @@ export const wait = (fn, maxRuns = 100) => }, 100); }); -export const clone = (obj) => JSON.parse(JSON.stringify(obj)); +export const clone = (obj: any) => JSON.parse(JSON.stringify(obj)); -export const waitForEvent = (engine, eventName) => +export const waitForEvent = (engine: any, eventName: string) => new Promise((resolve) => { - engine.once(eventName, (e) => { + engine.once(eventName, (e: any) => { resolve(e); }); }); @@ -33,22 +34,27 @@ export const sleep = (delay = 100) => setTimeout(resolve, delay); }); -export const createPlan = (...jobs) => +export const createPlan = (...steps: Job[]) => ({ id: crypto.randomUUID(), - jobs: [...jobs], + workflow: { + steps, + }, + options: {}, } as ExecutionPlan); -export const createEdge = (from: string, to: string) => ({ - id: `${from}-${to}`, - source_job_id: from, - target_job_id: 
to, -}); +export const createEdge = (from: string, to: string) => + ({ + id: `${from}-${to}`, + source_job_id: from, + target_job_id: to, + } as Edge); -export const createJob = (body?: string, id?: string) => ({ - id: id || crypto.randomUUID(), - body: body || `fn((s) => s)`, -}); +export const createJob = (body?: string, id?: string) => + ({ + id: id || crypto.randomUUID(), + body: body || `fn((s) => s)`, + } as Node); export const createRun = (jobs = [], edges = [], triggers = []) => ({ id: crypto.randomUUID(), diff --git a/packages/ws-worker/test/util/convert-run.test.ts b/packages/ws-worker/test/util/convert-lightning-plan.test.ts similarity index 55% rename from packages/ws-worker/test/util/convert-run.test.ts rename to packages/ws-worker/test/util/convert-lightning-plan.test.ts index 3cfd58ec3..c0ffa73a4 100644 --- a/packages/ws-worker/test/util/convert-run.test.ts +++ b/packages/ws-worker/test/util/convert-lightning-plan.test.ts @@ -1,6 +1,7 @@ import test from 'ava'; -import convertRun, { conditions } from '../../src/util/convert-run'; -import { Run, Node } from '../../src/types'; +import type { LightningPlan, Node } from '@openfn/lexicon/lightning'; +import convertPlan, { conditions } from '../../src/util/convert-lightning-plan'; +import { ConditionalStepEdge, Job } from '@openfn/lexicon'; // Creates a lightning node (job or trigger) const createNode = (props = {}) => @@ -12,7 +13,7 @@ const createNode = (props = {}) => ...props, } as Node); -const createEdge = (from, to, props = {}) => ({ +const createEdge = (from: string, to: string, props = {}) => ({ id: `${from}-${to}`, source_job_id: from, target_job_id: to, @@ -36,28 +37,51 @@ const createJob = (props = {}) => ({ ...props, }); -const testEdgeCondition = (expr, state) => { +const testEdgeCondition = (expr: string, state: any) => { const fn = new Function('state', 'return ' + expr); return fn(state); }; test('convert a single job', (t) => { - const run: Partial = { + const run: Partial = { id: 'w', 
jobs: [createNode()], triggers: [], edges: [], }; - const { plan } = convertRun(run as Run); + const { plan } = convertPlan(run as LightningPlan); t.deepEqual(plan, { id: 'w', - jobs: [createJob()], + options: {}, + workflow: { + steps: [createJob()], + }, + }); +}); + +test('convert a single job with names', (t) => { + const run: Partial<LightningPlan> = { + id: 'w', + name: 'my-workflow', + jobs: [createNode({ name: 'my-job' })], + triggers: [], + edges: [], + }; + const { plan } = convertPlan(run as LightningPlan); + + t.deepEqual(plan, { + id: 'w', + options: {}, + workflow: { + name: 'my-workflow', + steps: [createJob({ name: 'my-job' })], + }, }); }); test('convert a single job with options', (t) => { - const run: Partial<Run> = { + const run: Partial<LightningPlan> = { id: 'w', jobs: [createNode()], triggers: [], @@ -67,111 +91,136 @@ test('convert a single job with options', (t) => { runTimeoutMs: 10, }, }; - const { plan, options } = convertRun(run as Run); + const { plan, options } = convertPlan(run as LightningPlan); t.deepEqual(plan, { id: 'w', - jobs: [createJob()], + options: {}, + workflow: { + steps: [createJob()], + }, + }); + t.deepEqual(options, { + runTimeoutMs: 10, + sanitize: 'obfuscate', }); - t.deepEqual(options, run.options); }); // Note idk how lightning will handle state/defaults on a job // but this is what we'll do right now test('convert a single job with data', (t) => { - const run: Partial<Run> = { + const run: Partial<LightningPlan> = { id: 'w', jobs: [createNode({ state: { data: { x: 22 } } })], triggers: [], edges: [], }; - const { plan, options } = convertRun(run as Run); + const { plan, options } = convertPlan(run as LightningPlan); t.deepEqual(plan, { id: 'w', - jobs: [createJob({ state: { data: { x: 22 } } })], + options: {}, + workflow: { + steps: [createJob({ state: { data: { x: 22 } } })], + }, }); t.deepEqual(options, {}); }); test('Accept a partial run object', (t) => { - const run: Partial<Run> = { + const run: Partial<LightningPlan> = { id: 'w', }; - const { plan, options } = convertRun(run
as Run); + const { plan, options } = convertPlan(run as LightningPlan); t.deepEqual(plan, { id: 'w', - jobs: [], + options: {}, + workflow: { + steps: [], + }, }); t.deepEqual(options, {}); }); -test('handle dataclip_id', (t) => { - const run: Partial = { +test('handle dataclip_id as input', (t) => { + const run: Partial = { id: 'w', dataclip_id: 'xyz', }; - const { plan } = convertRun(run as Run); + const { input } = convertPlan(run as LightningPlan); - t.deepEqual(plan, { - id: 'w', - initialState: 'xyz', - jobs: [], - }); + t.deepEqual(input, 'xyz'); }); -test('handle starting_node_id', (t) => { - const run: Partial = { +test('handle starting_node_id as options', (t) => { + const run: Partial = { id: 'w', starting_node_id: 'j1', }; - const { plan } = convertRun(run as Run); + const { plan } = convertPlan(run as LightningPlan); - t.deepEqual(plan, { - id: 'w', + t.deepEqual(plan.options, { start: 'j1', - jobs: [], + }); +}); + +test('handle output_dataclip as options', (t) => { + const run: Partial = { + id: 'w', + options: { + output_dataclips: false, + }, + }; + const { options } = convertPlan(run as LightningPlan); + t.deepEqual(options, { + outputDataclips: false, }); }); test('convert a single trigger', (t) => { - const run: Partial = { + const run: Partial = { id: 'w', triggers: [createTrigger()], jobs: [], edges: [], }; - const { plan } = convertRun(run as Run); + const { plan } = convertPlan(run as LightningPlan); t.deepEqual(plan, { id: 'w', - jobs: [ - { - id: 't', - }, - ], + options: {}, + workflow: { + steps: [ + { + id: 't', + }, + ], + }, }); }); // This exhibits current behaviour. 
This should never happen though test('ignore a single edge', (t) => { - const run: Partial = { + const run: Partial = { id: 'w', jobs: [], triggers: [], edges: [createEdge('a', 'b')], }; - const { plan } = convertRun(run as Run); + const { plan } = convertPlan(run as LightningPlan); t.deepEqual(plan, { id: 'w', - jobs: [], + options: {}, + workflow: { + steps: [], + }, }); }); test('convert a single trigger with an edge', (t) => { - const run: Partial = { + const run: Partial = { id: 'w', triggers: [createTrigger()], jobs: [createNode()], @@ -183,24 +232,27 @@ test('convert a single trigger with an edge', (t) => { }, ], }; - const { plan } = convertRun(run as Run); + const { plan } = convertPlan(run as LightningPlan); t.deepEqual(plan, { id: 'w', - jobs: [ - { - id: 't', - next: { - a: true, + options: {}, + workflow: { + steps: [ + { + id: 't', + next: { + a: true, + }, }, - }, - createJob(), - ], + createJob(), + ], + }, }); }); test('convert a single trigger with two edges', (t) => { - const run: Partial = { + const run: Partial = { id: 'w', triggers: [createTrigger()], jobs: [createNode({ id: 'a' }), createNode({ id: 'b' })], @@ -217,26 +269,29 @@ test('convert a single trigger with two edges', (t) => { }, ], }; - const { plan } = convertRun(run as Run); + const { plan } = convertPlan(run as LightningPlan); t.deepEqual(plan, { id: 'w', - jobs: [ - { - id: 't', - next: { - a: true, - b: true, + options: {}, + workflow: { + steps: [ + { + id: 't', + next: { + a: true, + b: true, + }, }, - }, - createJob({ id: 'a' }), - createJob({ id: 'b' }), - ], + createJob({ id: 'a' }), + createJob({ id: 'b' }), + ], + }, }); }); test('convert a disabled trigger', (t) => { - const run: Partial = { + const run: Partial = { id: 'w', triggers: [createTrigger()], jobs: [createNode({ id: 'a' })], @@ -249,38 +304,47 @@ test('convert a disabled trigger', (t) => { }, ], }; - const { plan } = convertRun(run as Run); + const { plan } = convertPlan(run as LightningPlan); 
t.deepEqual(plan, { id: 'w', - jobs: [ - { - id: 't', - next: {}, - }, - createJob({ id: 'a' }), - ], + options: {}, + workflow: { + steps: [ + { + id: 't', + next: {}, + }, + createJob({ id: 'a' }), + ], + }, }); }); test('convert two linked jobs', (t) => { - const run: Partial = { + const run: Partial = { id: 'w', jobs: [createNode({ id: 'a' }), createNode({ id: 'b' })], triggers: [], edges: [createEdge('a', 'b')], }; - const { plan } = convertRun(run as Run); + const { plan } = convertPlan(run as LightningPlan); t.deepEqual(plan, { id: 'w', - jobs: [createJob({ id: 'a', next: { b: true } }), createJob({ id: 'b' })], + options: {}, + workflow: { + steps: [ + createJob({ id: 'a', next: { b: true } }), + createJob({ id: 'b' }), + ], + }, }); }); // This isn't supported by the runtime, but it'll survive the conversion test('convert a job with two upstream jobs', (t) => { - const run: Partial = { + const run: Partial = { id: 'w', jobs: [ createNode({ id: 'a' }), @@ -290,52 +354,61 @@ test('convert a job with two upstream jobs', (t) => { triggers: [], edges: [createEdge('a', 'x'), createEdge('b', 'x')], }; - const { plan } = convertRun(run as Run); + const { plan } = convertPlan(run as LightningPlan); t.deepEqual(plan, { id: 'w', - jobs: [ - createJob({ id: 'a', next: { x: true } }), - createJob({ id: 'b', next: { x: true } }), - createJob({ id: 'x' }), - ], + options: {}, + workflow: { + steps: [ + createJob({ id: 'a', next: { x: true } }), + createJob({ id: 'b', next: { x: true } }), + createJob({ id: 'x' }), + ], + }, }); }); test('convert two linked jobs with an edge condition', (t) => { const condition = 'state.age > 10'; - const run: Partial = { + const run: Partial = { id: 'w', jobs: [createNode({ id: 'a' }), createNode({ id: 'b' })], triggers: [], edges: [createEdge('a', 'b', { condition })], }; - const { plan } = convertRun(run as Run); + const { plan } = convertPlan(run as LightningPlan); t.deepEqual(plan, { id: 'w', - jobs: [ - createJob({ id: 'a', next: { 
b: { condition } } }), - createJob({ id: 'b' }), - ], + options: {}, + workflow: { + steps: [ + createJob({ id: 'a', next: { b: { condition } } }), + createJob({ id: 'b' }), + ], + }, }); }); test('convert two linked jobs with a disabled edge', (t) => { - const run: Partial = { + const run: Partial = { id: 'w', jobs: [createNode({ id: 'a' }), createNode({ id: 'b' })], triggers: [], edges: [createEdge('a', 'b', { enabled: false })], }; - const { plan } = convertRun(run as Run); + const { plan } = convertPlan(run as LightningPlan); t.deepEqual(plan, { id: 'w', - jobs: [ - createJob({ id: 'a', next: { b: { disabled: true } } }), - createJob({ id: 'b' }), - ], + options: {}, + workflow: { + steps: [ + createJob({ id: 'a', next: { b: { disabled: true } } }), + createJob({ id: 'b' }), + ], + }, }); }); @@ -343,7 +416,7 @@ test('on_job_success condition: return true if no errors', (t) => { const condition = conditions.on_job_success('a'); const state = {}; - const result = testEdgeCondition(condition, state); + const result = testEdgeCondition(condition!, state); t.is(result, true); }); @@ -354,7 +427,7 @@ test('on_job_success condition: return true if state is undefined', (t) => { const condition = conditions.on_job_success('a'); const state = undefined; - const result = testEdgeCondition(condition, state); + const result = testEdgeCondition(condition!, state); t.is(result, true); }); @@ -369,7 +442,7 @@ test('on_job_success condition: return true if unconnected upstream errors', (t) }, }, }; - const result = testEdgeCondition(condition, state); + const result = testEdgeCondition(condition!, state); t.is(result, true); }); @@ -384,7 +457,7 @@ test('on_job_success condition: return false if the upstream job errored', (t) = }, }, }; - const result = testEdgeCondition(condition, state); + const result = testEdgeCondition(condition!, state); t.is(result, false); }); @@ -399,7 +472,7 @@ test('on_job_failure condition: return true if error immediately upstream', (t) }, }, }; - 
const result = testEdgeCondition(condition, state); + const result = testEdgeCondition(condition!, state); t.is(result, true); }); @@ -414,7 +487,7 @@ test('on_job_failure condition: return false if unrelated error upstream', (t) = }, }, }; - const result = testEdgeCondition(condition, state); + const result = testEdgeCondition(condition!, state); t.is(result, false); }); @@ -423,7 +496,7 @@ test('on_job_failure condition: return false if no errors', (t) => { const condition = conditions.on_job_failure('a'); const state = {}; - const result = testEdgeCondition(condition, state); + const result = testEdgeCondition(condition!, state); t.is(result, false); }); @@ -432,45 +505,45 @@ test('on_job_failure condition: return false if state is undefined', (t) => { const condition = conditions.on_job_failure('a'); const state = undefined; - const result = testEdgeCondition(condition, state); + const result = testEdgeCondition(condition!, state); t.is(result, false); }); test('convert edge condition on_job_success', (t) => { - const run: Partial = { + const run: Partial = { id: 'w', jobs: [createNode({ id: 'a' }), createNode({ id: 'b' })], triggers: [], edges: [createEdge('a', 'b', { condition: 'on_job_success' })], }; - const { plan } = convertRun(run as Run); + const { plan } = convertPlan(run as LightningPlan); - const [job] = plan.jobs; + const [job] = plan.workflow.steps as Job[]; + const edge = job.next as Record; - t.truthy(job.next?.b); - t.is(job.next.b.condition, conditions.on_job_success('a')); - - t.true(testEdgeCondition(job.next.b.condition, {})); + t.truthy(edge.b); + t.is(edge.b.condition!, conditions.on_job_success('a')!); + t.true(testEdgeCondition(edge.b.condition!, {})); }); test('convert edge condition on_job_failure', (t) => { - const run: Partial = { + const run: Partial = { id: 'w', jobs: [createNode({ id: 'a' }), createNode({ id: 'b' })], triggers: [], edges: [createEdge('a', 'b', { condition: 'on_job_failure' })], }; - const { plan } = convertRun(run 
as Run); - - const [job] = plan.jobs; + const { plan } = convertPlan(run as LightningPlan); - t.truthy(job.next?.b); - t.is(job.next.b.condition, conditions.on_job_failure('a')); + const [job] = plan.workflow.steps as Job[]; + const edge = job.next as Record; + t.truthy(edge.b); + t.is(edge.b.condition!, conditions.on_job_failure('a')!); // Check that this is valid js t.true( - testEdgeCondition(job.next.b.condition, { + testEdgeCondition(edge.b.condition!, { errors: { a: {} }, }) ); @@ -478,46 +551,32 @@ test('convert edge condition on_job_failure', (t) => { test('convert edge condition on_job_success with a funky id', (t) => { const id_a = 'a-b-c@ # {} !£'; - const run: Partial = { + const run: Partial = { id: 'w', jobs: [createNode({ id: id_a }), createNode({ id: 'b' })], triggers: [], edges: [createEdge(id_a, 'b', { condition: 'on_job_success' })], }; - const { plan } = convertRun(run as Run); - const [job] = plan.jobs; - - t.truthy(job.next?.b); - t.is(job.next.b.condition, conditions.on_job_success(id_a)); + const { plan } = convertPlan(run as LightningPlan); + const [job] = plan.workflow.steps as Job[]; + const edge = job.next as Record; + t.truthy(edge.b); + t.is(edge.b.condition!, conditions.on_job_success(id_a)!); // Check that this is valid js - t.true(testEdgeCondition(job.next.b.condition, {})); + t.true(testEdgeCondition(edge.b.condition!, {})); }); test('convert edge condition always', (t) => { - const run: Partial = { + const run: Partial = { id: 'w', jobs: [createNode({ id: 'a' }), createNode({ id: 'b' })], triggers: [], edges: [createEdge('a', 'b', { condition: 'always' })], }; - const { plan } = convertRun(run as Run); - - const [job] = plan.jobs; - - t.false(job.next.b.hasOwnProperty('condition')); -}); - -test('convert random options', (t) => { - const run: Partial = { - id: 'w', - options: { - a: 1, - b: 2, - c: 3, - }, - }; - const { options } = convertRun(run as Run); + const { plan } = convertPlan(run as LightningPlan); - 
t.deepEqual(options, { a: 1, b: 2, c: 3 }); + const [job] = plan.workflow.steps as Job[]; + const edge = job.next as Record; + t.false(edge.b.hasOwnProperty('condition')); }); diff --git a/packages/ws-worker/test/util/create-run-state.test.ts b/packages/ws-worker/test/util/create-run-state.test.ts index 9bf15e960..7424ee957 100644 --- a/packages/ws-worker/test/util/create-run-state.test.ts +++ b/packages/ws-worker/test/util/create-run-state.test.ts @@ -1,101 +1,100 @@ import test from 'ava'; +import type { ExecutionPlan, Job } from '@openfn/lexicon'; import { createRunState } from '../../src/util'; +const createPlan = (jobs: Partial[]) => + ({ + workflow: { + steps: jobs.map((j) => ({ expression: '.', ...j })), + }, + options: {}, + } as ExecutionPlan); + test('create run', (t) => { - const options = { timeout: 666 }; - const plan = { jobs: [{ id: 'a' }] }; - const run = createRunState(plan, options); + const plan = createPlan([{ id: 'a' }]); + const input = undefined; + + const run = createRunState(plan, input); t.deepEqual(run.plan, plan); t.deepEqual(run.lastDataclipId, ''); t.deepEqual(run.dataclips, {}); t.deepEqual(run.inputDataclips, {}); t.deepEqual(run.reasons, {}); - t.deepEqual(run.options, options); }); test('Set initial input dataclip if no explicit start and first job is a step', (t) => { - const plan = { initialState: 'x', jobs: [{ id: 'a', expression: '.' }] }; - const run = createRunState(plan); + const plan = createPlan([{ id: 'a' }]); + const input = 'x'; + + const run = createRunState(plan, input); t.deepEqual(run.inputDataclips, { a: 'x' }); }); test('Set initial input dataclip if the explicit start is a step', (t) => { - const plan = { - initialState: 'x', - start: 'a', - jobs: [ - { id: 'b', expression: '.' }, - { id: 'a', expression: '.' 
}, - ], - }; - const run = createRunState(plan); + const plan = createPlan([{ id: 'a' }, { id: 'b' }]); + plan.options.start = 'a'; + const input = 'x'; + + const run = createRunState(plan, input); t.deepEqual(run.inputDataclips, { a: 'x' }); }); test('Set initial input dataclip if the start is a trigger (simple)', (t) => { - const plan = { - initialState: 's', - start: 't', - jobs: [ - { id: 't', next: { a: true } }, - { id: 'a', expression: '.' }, - ], - }; - const run = createRunState(plan); + const plan = createPlan([{ id: 't', next: { a: true } }, { id: 'a' }]); + plan.options.start = 'a'; + const input = 's'; + + const run = createRunState(plan, input); t.deepEqual(run.inputDataclips, { a: 's' }); }); test('Set initial input dataclip if the start is a trigger (complex)', (t) => { - const plan = { - initialState: 's', - start: 't', - jobs: [ - { id: 'a', expression: '.' }, - { id: 'b', expression: '.' }, - { id: 'c', expression: '.' }, - { id: 'd', expression: '.' }, - { id: 't', next: { c: true } }, - ], - }; - const run = createRunState(plan); + const plan = createPlan([ + { id: 'a' }, + { id: 'b' }, + { id: 'c' }, + { id: 'd' }, + { id: 't', next: { c: true }, expression: undefined }, + ]); + plan.options.start = 't'; + const input = 's'; + + const run = createRunState(plan, input); t.deepEqual(run.inputDataclips, { c: 's' }); }); test('Set initial input dataclip with a trigger as implicit start', (t) => { - const plan = { - initialState: 's', - jobs: [ - { id: 't', next: { c: true } }, - { id: 'a', expression: '.' }, - { id: 'b', expression: '.' }, - { id: 'c', expression: '.' }, - { id: 'd', expression: '.' }, - ], - }; - const run = createRunState(plan); + const plan = createPlan([ + { id: 't', next: { c: true }, expression: undefined }, + { id: 'a', expression: '.' }, + { id: 'b', expression: '.' }, + { id: 'c', expression: '.' }, + { id: 'd', expression: '.' 
}, + ]); + const input = 's'; + + const run = createRunState(plan, input); t.deepEqual(run.inputDataclips, { c: 's' }); }); test('Set initial input dataclip with a trigger with multiple downstream jobs', (t) => { - const plan = { - initialState: 's', - start: 't', - jobs: [ - { id: 'a', expression: '.' }, - { id: 'b', expression: '.' }, - { id: 't', next: { a: true, b: true, c: true } }, - { id: 'c', expression: '.' }, - { id: 'd', expression: '.' }, - ], - }; - const run = createRunState(plan); + const plan = createPlan([ + { id: 'a' }, + { id: 'b' }, + { id: 't', next: { a: true, b: true, c: true }, expression: undefined }, + { id: 'c' }, + { id: 'd' }, + ]); + plan.options.start = 't'; + const input = 's'; + const run = createRunState(plan, input); t.deepEqual(run.inputDataclips, { a: 's', b: 's', c: 's' }); }); diff --git a/packages/ws-worker/test/util/throttle.test.ts b/packages/ws-worker/test/util/throttle.test.ts index f865f2aca..10260abe8 100644 --- a/packages/ws-worker/test/util/throttle.test.ts +++ b/packages/ws-worker/test/util/throttle.test.ts @@ -88,7 +88,7 @@ test('return in order', async (t) => { const results: string[] = []; - const fn = (name: string, delay: number) => + const fn = (name: string) => new Promise((resolve) => { setTimeout(() => { results.push(name); diff --git a/packages/ws-worker/test/util/versions.test.ts b/packages/ws-worker/test/util/versions.test.ts index fe4c41de6..6aadf00a3 100644 --- a/packages/ws-worker/test/util/versions.test.ts +++ b/packages/ws-worker/test/util/versions.test.ts @@ -5,10 +5,7 @@ import calculateVersionString from '../../src/util/versions'; // keys in this object are scrambled on purpose const versions = { worker: '2', - // compiler: '5', node: '1', - engine: '3', - // runtime: '4', }; // Util function to parse a version string into something easier to test @@ -33,9 +30,8 @@ test('calculate version string', (t) => { t.is( str, `Versions for step step-1: - ▸ node.js 1
+ ▸ worker 2` ); }); @@ -46,7 +42,6 @@ test('helper should parse a version string and return the correct order', (t) => t.deepEqual(parsed, [ ['node.js', '1'], ['worker', '2'], - ['engine', '3'], ]); }); @@ -58,7 +53,6 @@ test("show unknown if a version isn't passed", (t) => { t.deepEqual(parsed, [ ['node.js', 'unknown'], ['worker', 'unknown'], - ['engine', 'unknown'], ]); }); @@ -67,9 +61,8 @@ test('show adaptors last', (t) => { '@openfn/language-common': '1.0.0', ...versions, }); - const parsed = parse(str); - const common = parsed[3]; + const common = parsed[2]; t.deepEqual(common, ['@openfn/language-common', '1.0.0']); }); @@ -83,9 +76,9 @@ test('sort and list multiple adaptors', (t) => { const parsed = parse(str); - const a = parsed[3]; - const j = parsed[4]; - const z = parsed[5]; + const a = parsed[2]; + const j = parsed[3]; + const z = parsed[4]; t.deepEqual(a, ['a', '1']); t.deepEqual(j, ['j', '2']); diff --git a/packages/ws-worker/tsconfig.json b/packages/ws-worker/tsconfig.json index 3be5c53e0..834d5af09 100644 --- a/packages/ws-worker/tsconfig.json +++ b/packages/ws-worker/tsconfig.json @@ -1,6 +1,6 @@ { "extends": "../../tsconfig.common", - "include": ["src/**/*.ts", "test/mock/data.ts", "src/channels/runs"], + "include": ["src/**/*.ts", "test/**/*.ts", "src/channels/runs"], "compilerOptions": { "module": "ESNext" } diff --git a/pnpm-lock.yaml b/pnpm-lock.yaml index 17e5bf7f3..7e6cb90f8 100644 --- a/pnpm-lock.yaml +++ b/pnpm-lock.yaml @@ -104,6 +104,12 @@ importers: specifier: ^3.0.2 version: 3.0.2 + integration-tests/cli/repo: + dependencies: + '@openfn/language-common_1.12.0': + specifier: npm:@openfn/language-common@^1.12.0 + version: /@openfn/language-common@1.12.0 + integration-tests/worker: dependencies: '@openfn/engine-multi': @@ -185,6 +191,9 @@ importers: '@openfn/language-common': specifier: 2.0.0-rc3 version: 2.0.0-rc3 + '@openfn/lexicon': + specifier: workspace:^ + version: link:../lexicon '@types/mock-fs': specifier: ^4.13.1 version: 
4.13.1 @@ -377,6 +386,9 @@ importers: '@openfn/language-common': specifier: 2.0.0-rc3 version: 2.0.0-rc3 + '@openfn/lexicon': + specifier: workspace:^ + version: link:../lexicon '@openfn/logger': specifier: workspace:* version: link:../logger @@ -413,6 +425,12 @@ importers: packages/engine-multi/tmp/repo: {} + packages/lexicon: + devDependencies: + '@openfn/logger': + specifier: workspace:^ + version: link:../logger + packages/lightning-mock: dependencies: '@koa/router': @@ -421,6 +439,9 @@ importers: '@openfn/engine-multi': specifier: workspace:* version: link:../engine-multi + '@openfn/lexicon': + specifier: workspace:^ + version: link:../lexicon '@openfn/logger': specifier: workspace:* version: link:../logger @@ -559,6 +580,9 @@ importers: '@openfn/language-common': specifier: 2.0.0-rc3 version: 2.0.0-rc3 + '@openfn/lexicon': + specifier: workspace:^ + version: link:../lexicon '@types/mock-fs': specifier: ^4.13.1 version: 4.13.1 @@ -595,6 +619,9 @@ importers: '@openfn/engine-multi': specifier: workspace:* version: link:../engine-multi + '@openfn/lexicon': + specifier: workspace:^ + version: link:../lexicon '@openfn/logger': specifier: workspace:* version: link:../logger @@ -629,8 +656,8 @@ importers: specifier: ^3.2.1 version: 3.2.1 phoenix: - specifier: ^1.7.7 - version: 1.7.7 + specifier: 1.7.10 + version: 1.7.10 ws: specifier: ^8.14.1 version: 8.14.1 @@ -1329,6 +1356,11 @@ packages: heap: 0.2.7 dev: false + /@fastify/busboy@2.1.0: + resolution: {integrity: sha512-+KpH+QxZU7O4675t3mnkQKcZZg56u+K/Ct2K+N2AZYNVK8kyeo/bI18tI8aPm3tvNNRyTWfj6s5tnGNlcbQRsA==} + engines: {node: '>=14'} + dev: false + /@inquirer/checkbox@1.3.5: resolution: {integrity: sha512-ZznkPU+8XgNICKkqaoYENa0vTw9jeToEHYyG5gUKpGmY+4PqPTsvLpSisOt9sukLkYzPRkpSCHREgJLqbCG3Fw==} engines: {node: '>=14.18.0'} @@ -1585,6 +1617,22 @@ packages: semver: 7.5.4 dev: true + /@openfn/language-common@1.12.0: + resolution: {integrity: 
sha512-JQjJpRNdwG5LMmAIO7P7HLgtHYS0UssoibAhMJOpoHk5/kFLDpH3tywpp40Pai33NMzgofxb5gb0MZTgoEk3fw==} + dependencies: + ajv: 8.12.0 + axios: 1.1.3 + csv-parse: 5.5.3 + csvtojson: 2.0.10 + date-fns: 2.30.0 + http-status-codes: 2.3.0 + jsonpath-plus: 4.0.0 + lodash: 4.17.21 + undici: 5.28.3 + transitivePeerDependencies: + - debug + dev: false + /@openfn/language-common@1.7.5: resolution: {integrity: sha512-QivV3v5Oq5fb4QMopzyqUUh+UGHaFXBdsGr6RCmu6bFnGXdJdcQ7GpGpW5hKNq29CkmE23L/qAna1OLr4rP/0w==} dependencies: @@ -2034,6 +2082,15 @@ packages: clean-stack: 4.2.0 indent-string: 5.0.0 + /ajv@8.12.0: + resolution: {integrity: sha512-sRu1kpcO9yLtYxBKvqfTeh9KzZEwO3STyX1HT+4CaDzC6HpTGYhIhPIzj9XuKU7KYDwnaeh5hcOwjy1QuJzBPA==} + dependencies: + fast-deep-equal: 3.1.3 + json-schema-traverse: 1.0.0 + require-from-string: 2.0.2 + uri-js: 4.4.1 + dev: false + /ansi-colors@4.1.3: resolution: {integrity: sha512-/6w/C21Pm1A7aZitlI5Ni/2J6FFQN8i1Cvz3kHABAAbw93v/NlvKdVOqz7CCWz/3iv/JplRSEEZ83XION15ovw==} engines: {node: '>=6'} @@ -2194,7 +2251,6 @@ packages: /asynckit@0.4.0: resolution: {integrity: sha512-Oei9OH4tRh0YqU3GxhX79dM/mwVgvbZJaSNaRk+bshkj0S5cfHcgYakreBjrHwatXKbz+IoIdYLxrKim2MjW0Q==} - dev: true /atob@2.1.2: resolution: {integrity: sha512-Wm6ukoaOGJi/73p/cl2GvLjTI5JM1k/O14isD73YML8StrH/7/lRFgmg8nICZgD3bZZvjwCGxtMOD3wWNAu8cg==} @@ -2342,7 +2398,6 @@ packages: proxy-from-env: 1.1.0 transitivePeerDependencies: - debug - dev: true /b4a@1.6.1: resolution: {integrity: sha512-AsKjNhz72yxteo/0EtQEiwkMUgk/tGmycXlbG4g3Ard2/ULtNLUykGOkeK0egmN27h0xMAhb76jYccW+XTBExA==} @@ -2423,6 +2478,10 @@ packages: readable-stream: 4.2.0 dev: true + /bluebird@3.7.2: + resolution: {integrity: sha512-XpNj6GDQzdfW+r2Wnn7xiSAd7TM3jzkxGXBGTtWKuSXv1xUV+azxAm8jdWZN06QTQk+2N2XB9jRDkvbmQmcRtg==} + dev: false + /blueimp-md5@2.19.0: resolution: {integrity: sha512-DRQrD6gJyy8FbiE4s+bDoXS9hiW3Vbx5uCdwvcCf3zLHL+Iv7LtGHLpr+GZV8rHG8tK766FGYBwRbu8pELTt+w==} @@ -2815,7 +2874,6 @@ packages: engines: {node: '>= 0.8'} 
dependencies: delayed-stream: 1.0.0 - dev: true /commander@4.1.1: resolution: {integrity: sha512-NOKm8xhkzAjzFx8B2v5OAHT+u5pRQc2UCa2Vq9jYL/31o2wi9mxBA7LIFs3sV5VSC49z6pEhfbMULvShKj26WA==} @@ -2958,6 +3016,10 @@ packages: resolution: {integrity: sha512-cO1I/zmz4w2dcKHVvpCr7JVRu8/FymG5OEpmvsZYlccYolPBLoVGKUHgNoc4ZGkFeFlWGEDmMyBM+TTqRdW/wg==} dev: true + /csv-parse@5.5.3: + resolution: {integrity: sha512-v0KW6C0qlZzoGjk6u5tLmVfyZxNgPGXZsWTXshpAgKVGmGXzaVWGdlCFxNx5iuzcXT/oJN1HHM9DZKwtAtYa+A==} + dev: false + /csv-stringify@5.6.5: resolution: {integrity: sha512-PjiQ659aQ+fUTQqSrd1XEDnOr52jh30RBurfzkscaE2tPaFsDH5wOAHJiw8XAHphRknCwMUE9KRayc4K/NbO8A==} dev: true @@ -2972,6 +3034,16 @@ packages: stream-transform: 2.1.3 dev: true + /csvtojson@2.0.10: + resolution: {integrity: sha512-lUWFxGKyhraKCW8Qghz6Z0f2l/PqB1W3AO0HKJzGIQ5JRSlR651ekJDiGJbBT4sRNNv5ddnSGVEnsxP9XRCVpQ==} + engines: {node: '>=4.0.0'} + hasBin: true + dependencies: + bluebird: 3.7.2 + lodash: 4.17.21 + strip-bom: 2.0.0 + dev: false + /currently-unhandled@0.4.1: resolution: {integrity: sha512-/fITjgjGU50vjQ4FH6eUoYu+iUoUKIXws2hL15JJpIR+BbTxaXQsMuuyjtNh2WqsSBS5nsaZHFsFecyw5CCAng==} engines: {node: '>=0.10.0'} @@ -3112,7 +3184,6 @@ packages: /delayed-stream@1.0.0: resolution: {integrity: sha512-ZySD7Nf91aLB0RxL4KGrKHBXl7Eds1DAmEdcoVawXnLD7SDhpNgtuII2aAkg7a7QS41jxPSZ17p4VdGnMHk3MQ==} engines: {node: '>=0.4.0'} - dev: true /delegates@1.0.0: resolution: {integrity: sha512-bd2L678uiWATM6m5Z1VzNCErI3jiGzt6HGY8OVICs40JQq/HALfbyNJmp0UDakEY4pMMaN0Ly5om/B1VI/+xfQ==} @@ -3938,6 +4009,10 @@ packages: - supports-color dev: true + /fast-deep-equal@3.1.3: + resolution: {integrity: sha512-f3qQ9oQy9j2AhBe/H9VC91wLmKBCCU/gDOnKNAYG5hswO7BLKj09Hc5HYNz9cGI++xlpDCIgDaitVs03ATR84Q==} + dev: false + /fast-diff@1.3.0: resolution: {integrity: sha512-VxPP4NqbUjj6MaAOafWeUn2cXWLcCtljklUtZf0Ind4XQ+QPtmA0b18zZy0jIQx+ExRVCR/ZQpBmik5lXshNsw==} @@ -4084,7 +4159,6 @@ packages: peerDependenciesMeta: debug: optional: true - dev: true /for-in@1.0.2: 
resolution: {integrity: sha512-7EwmXrOjyL+ChxMhmG5lnW9MPt1aIeZEwKhQzoBUdTV0N3zuwWDZYVJatDvZ2OyzPUvdIAZDsCetk3coyMfcnQ==} @@ -4115,7 +4189,6 @@ packages: asynckit: 0.4.0 combined-stream: 1.0.8 mime-types: 2.1.35 - dev: true /fragment-cache@0.2.1: resolution: {integrity: sha512-GMBAbW9antB8iZRHLoGw0b3HANt57diZYFO/HL1JGIC1MjKrdmhxvrJbupnVvpys0zsz7yBApXdQyfepKly2kA==} @@ -4491,6 +4564,10 @@ packages: - supports-color dev: true + /http-status-codes@2.3.0: + resolution: {integrity: sha512-RJ8XvFvpPM/Dmc5SV+dC4y5PCeOhT3x1Hq0NU3rjGeg5a/CqlhZ7uudknPwZFz4aeAXDcbAyaeP7GAo9lvngtA==} + dev: false + /https-proxy-agent@5.0.1: resolution: {integrity: sha512-dFcAjpTQFgoLMzC2VwU+C/CbS7uRL0lWmxDITmqm7C+7F0Odmj6s9l6alZc6AELXhrnggM2CeWSXHGOdX2YtwA==} engines: {node: '>= 6'} @@ -4891,6 +4968,10 @@ packages: resolution: {integrity: sha512-43r2mRvz+8JRIKnWJ+3j8JtjRKZ6GmjzfaE/qiBJnikNnYv/6bagRJ1kUhNk8R5EX/GkobD+r+sfxCPJsiKBLQ==} engines: {node: '>=12'} + /is-utf8@0.2.1: + resolution: {integrity: sha512-rMYPYvCzsXywIsldgLaSoPlw5PfoB/ssr7hY4pLfcodrA5M/eArza1a9VmTiNIBNMjOGr1Ow9mTyU2o69U6U9Q==} + dev: false + /is-weakref@1.0.2: resolution: {integrity: sha512-qctsuLZmIQ0+vSSMfoVvyFe2+GSEvnmZ2ezTup1SBse9+twCCeial6EEi3Nc2KFcf6+qz2FBPnjXsk8xhKSaPQ==} dependencies: @@ -4984,6 +5065,10 @@ packages: resolution: {integrity: sha512-xyFwyhro/JEof6Ghe2iz2NcXoj2sloNsWr/XsERDK/oiPCfaNhl5ONfp+jQdAZRQQ0IJWNzH9zIZF7li91kh2w==} dev: true + /json-schema-traverse@1.0.0: + resolution: {integrity: sha512-NM8/P9n3XjXhIZn1lLhkFaACTOURQXjWhV4BA/RnOv8xvgqtqpAX9IO4mRQxSx1Rlo4tqzeqb0sOlruaOy3dug==} + dev: false + /jsonfile@4.0.0: resolution: {integrity: sha512-m6F1R3z8jjlf2imQHS2Qez5sjKWQzbuuhuJ/FKYFRZvPE3PuHcSMVZzfsLhGVOkfd20obL5SWEBew5ShlquNxg==} optionalDependencies: @@ -4998,7 +5083,6 @@ packages: /jsonpath-plus@4.0.0: resolution: {integrity: sha512-e0Jtg4KAzDJKKwzbLaUtinCn0RZseWBVRTRGihSpvFlM3wTR7ExSp+PTdeTsDrLNJUe7L7JYJe8mblHX5SCT6A==} engines: {node: '>=10.0'} - dev: true /jsonpath@1.1.1: resolution: {integrity: 
sha512-l6Cg7jRpixfbgoWgkrl77dgEj8RPvND0wMH6TwQmi9Qs4TFfS9u5cUFnbeKTwj5ga5Y3BTGGNI28k117LJ009w==} @@ -6081,6 +6165,10 @@ packages: through2: 2.0.5 dev: true + /phoenix@1.7.10: + resolution: {integrity: sha512-akfr/QvLPFRB8sORyc8FQFY/YoGwjWhka/YRcu45sKlBOZHvA80EkLYBUsYlW63UicxgrXABZdrjDkv54LTE+g==} + dev: false + /phoenix@1.7.7: resolution: {integrity: sha512-moAN6e4Z16x/x1nswUpnTR2v5gm7HsI7eluZ2YnYUUsBNzi3cY/5frmiJfXIEi877IQAafzTfp8hd6vEUMme+w==} dev: false @@ -6305,7 +6393,6 @@ packages: /proxy-from-env@1.1.0: resolution: {integrity: sha512-D+zkORCbA9f1tdWRK0RaCR3GPv50cMxcrz4X8k5LTSUD1Dkw47mKJEZQNunItRTkWwgtaUSo1RVFRIG9ZXiFYg==} - dev: true /proxy-middleware@0.15.0: resolution: {integrity: sha512-EGCG8SeoIRVMhsqHQUdDigB2i7qU7fCsWASwn54+nPutYO8n4q6EiwMzyfWlC+dzRFExP+kvcnDFdBDHoZBU7Q==} @@ -6338,7 +6425,6 @@ packages: /punycode@2.3.0: resolution: {integrity: sha512-rRV+zQD8tVFys26lAGR9WUuS4iUAngJScM+ZRSKtvl5tKeZ2t5bvdNFdNHBW9FWR4guGHlgmsZ1G7BSm2wTbuA==} engines: {node: '>=6'} - dev: true /qs@6.11.2: resolution: {integrity: sha512-tDNIz22aBzCDxLtVH++VnTfzxlfeK5CbqohpSqpJgj1Wg/cQbStNAz3NuqCs5vV+pjBsK4x4pN9HlVh7rcYRiA==} @@ -6544,6 +6630,11 @@ packages: resolution: {integrity: sha512-fGxEI7+wsG9xrvdjsrlmL22OMTTiHRwAMroiEeMgq8gzoLC/PQr7RsRDSTLUg/bZAZtF+TVIkHc6/4RIKrui+Q==} engines: {node: '>=0.10.0'} + /require-from-string@2.0.2: + resolution: {integrity: sha512-Xf0nWe6RseziFMu+Ap9biiUbmplq6S9/p+7w7YXP/JBHhrUDDUhwa+vANyubuqfZWTveU//DYVGsDG7RKL/vEw==} + engines: {node: '>=0.10.0'} + dev: false + /require-main-filename@2.0.0: resolution: {integrity: sha512-NKN5kMDylKuldxYLSUfrbo5Tuzh4hd+2E8NPPX02mZtn1VuREQToYe/ZdlJy+J3uCpfaiGF05e7B8W0iXbQHmg==} dev: true @@ -7067,6 +7158,13 @@ packages: dependencies: ansi-regex: 6.0.1 + /strip-bom@2.0.0: + resolution: {integrity: sha512-kwrX1y7czp1E69n2ajbG65mIo9dqvJ+8aBQXOGVxqwvNbsXdFM6Lq37dLAY3mknUwru8CfcCbfOLL/gMo+fi3g==} + engines: {node: '>=0.10.0'} + dependencies: + is-utf8: 0.2.1 + dev: false + /strip-bom@3.0.0: resolution: 
{integrity: sha512-vavAMRXOgBVNF6nyEEmL3DBK19iRpDcoIwW+swQ+CbGiu7lju6t+JklA1MHweoWtadgt4ISVUsXLyDq34ddcwA==} engines: {node: '>=4'} @@ -7620,6 +7718,13 @@ packages: resolution: {integrity: sha512-hEQt0+ZLDVUMhebKxL4x1BTtDY7bavVofhZ9KZ4aI26X9SRaE+Y3m83XUL1UP2jn8ynjndwCCpEHdUG+9pP1Tw==} dev: true + /undici@5.28.3: + resolution: {integrity: sha512-3ItfzbrhDlINjaP0duwnNsKpDQk3acHI3gVJ1z4fmwMK31k5G9OVIAMLSIaP6w4FaGkaAkN6zaQO9LUvZ1t7VA==} + engines: {node: '>=14.0'} + dependencies: + '@fastify/busboy': 2.1.0 + dev: false + /union-value@1.0.1: resolution: {integrity: sha512-tJfXmxMeWYnczCVs7XAEvIV7ieppALdyepWMkHkwciRpZraG/xwT+s2JN8+pr1+8jCRf80FFzvr+MpQeeoF4Xg==} engines: {node: '>=0.10.0'} @@ -7670,6 +7775,12 @@ packages: engines: {node: '>=4'} dev: true + /uri-js@4.4.1: + resolution: {integrity: sha512-7rKUyy33Q1yc98pQ1DAmLtwX109F7TIfWlW1Ydo8Wl1ii1SeHieeh0HHfPeL2fMXK6z0s8ecKs9frCuLJvndBg==} + dependencies: + punycode: 2.3.0 + dev: false + /urix@0.1.0: resolution: {integrity: sha512-Am1ousAhSLBeB9cG/7k7r2R0zj50uDRlZHPGbazid5s9rlF1F/QKYObEKSIunSjIOkJZqwRRLpvewjEkM7pSqg==} deprecated: Please see https://github.com/lydell/urix#deprecated