
Bump cachetools from 5.5.0 to 5.5.1 #647

Merged
1 commit merged into master from dependabot/pip/cachetools-5.5.1 on Jan 23, 2025

Conversation

dependabot[bot]
Contributor

@dependabot dependabot bot commented on behalf of GitHub on Jan 22, 2025

Bumps cachetools from 5.5.0 to 5.5.1.

Changelog

Sourced from cachetools's changelog.

v5.5.1 (2025-01-21)

  • Add documentation regarding caching of exceptions.

  • Officially support Python 3.13.

  • Update CI environment.

Commits
  • b072920 Release v5.5.1.
  • efc3633 Fix #138: Add documentation regarding caching of exceptions.
  • d5c6892 Officially support Python 3.13.
  • a34b9c5 Merge remote-tracking branch 'origin/dependabot/github_actions/actions/setup-...
  • 9c122a2 Merge remote-tracking branch 'origin/dependabot/github_actions/actions/checko...
  • d44c984 Create FUNDING.yml
  • 49bff17 Bump actions/checkout from 4.1.7 to 4.2.0
  • 85c6026 Bump actions/setup-python from 5.1.1 to 5.2.0
  • See full diff in compare view

Dependabot compatibility score

Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


Dependabot commands and options

You can trigger Dependabot actions by commenting on this PR:

  • @dependabot rebase will rebase this PR
  • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
  • @dependabot merge will merge this PR after your CI passes on it
  • @dependabot squash and merge will squash and merge this PR after your CI passes on it
  • @dependabot cancel merge will cancel a previously requested merge and block automerging
  • @dependabot reopen will reopen this PR if it is closed
  • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
  • @dependabot show <dependency name> ignore conditions will show all of the ignore conditions of the specified dependency
  • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
  • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
  • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)

Bumps [cachetools](https://github.com/tkem/cachetools) from 5.5.0 to 5.5.1.
- [Changelog](https://github.com/tkem/cachetools/blob/master/CHANGELOG.rst)
- [Commits](tkem/cachetools@v5.5.0...v5.5.1)

---
updated-dependencies:
- dependency-name: cachetools
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <[email protected]>
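The change itself is presumably a one-line version-pin bump. Assuming a requirements-style pin (the actual file is not shown in this excerpt, so the filename and format are hypothetical), it would look like:

```diff
-cachetools==5.5.0
+cachetools==5.5.1
```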
@dependabot dependabot bot added the dependencies (Pull requests that update a dependency file) and python (Pull requests that update Python code) labels on Jan 22, 2025

codecov bot commented Jan 22, 2025

❌ 1 test failed:

Tests completed  Failed  Passed  Skipped
1989             1       1988    9
View the top failed test (sorted by shortest run time)
tests/test_translator.py::test_inject_remote_input
Stack Traces | 11s run time
request = <SubRequest 'context' for <Coroutine test_inject_remote_input>>
kwargs = {'chosen_deployment_types': ['local', 'docker', 'docker-compose', 'docker-wrapper', 'slurm']}
func = <function context at 0x10d40d580>
event_loop_fixture_id = 'tests/test_translator.py::<event_loop>'
setup = <function _wrap_asyncgen_fixture.<locals>._asyncgen_fixture_wrapper.<locals>.setup at 0x121803380>
setup_task = <Task finished name='Task-40526' coro=<_wrap_asyncgen_fixture.<locals>._asyncgen_fixture_wrapper.<locals>.setup() done...a94e: [126] OCI runtime exec failed: exec failed: unable to start container process: procReady not received: unknown')>

    @functools.wraps(fixture)
    def _asyncgen_fixture_wrapper(request: FixtureRequest, **kwargs: Any):
        func = _perhaps_rebind_fixture_func(fixture, request.instance)
        event_loop_fixture_id = _get_event_loop_fixture_id_for_async_fixture(
            request, func
        )
        event_loop = request.getfixturevalue(event_loop_fixture_id)
        kwargs.pop(event_loop_fixture_id, None)
        gen_obj = func(**_add_kwargs(func, kwargs, event_loop, request))
    
        async def setup():
            res = await gen_obj.__anext__()  # type: ignore[union-attr]
            return res
    
        context = contextvars.copy_context()
        setup_task = _create_task_in_context(event_loop, setup(), context)
>       result = event_loop.run_until_complete(setup_task)

.tox/py3.13-unit/lib/python3.13....../site-packages/pytest_asyncio/plugin.py:329: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
.../Library/Frameworks/Python.framework/Versions/3.13/lib/python3.13/asyncio/base_events.py:720: in run_until_complete
    return future.result()
.tox/py3.13-unit/lib/python3.13....../site-packages/pytest_asyncio/plugin.py:324: in setup
    res = await gen_obj.__anext__()  # type: ignore[union-attr]
tests/conftest.py:151: in context
    config = await get_deployment_config(_context, deployment_t)
tests/utils/deployment.py:62: in get_deployment_config
    return await get_docker_wrapper_deployment_config(_context)
tests/utils/deployment.py:116: in get_docker_wrapper_deployment_config
    await _context.deployment_manager.deploy(docker_dind_deployment)
streamflow/deployment/manager.py:180: in deploy
    await self._deploy(deployment_config, {deployment_name})
streamflow/deployment/manager.py:74: in _deploy
    await connector.deploy(deployment_config.external)
.../deployment/connector/container.py:1187: in deploy
    await self._populate_instance(self.containerId)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = <streamflow.deployment.connector.container.DockerConnector object at 0x12179d450>
name = '2996a8d6bbcd6dc6eb51d3333c8cf11d105fe053520dc39c814e5574493da94e'

    async def _populate_instance(self, name: str):
        # Build execution location
        location = ExecutionLocation(
            name=name,
            deployment=self.deployment_name,
            stacked=True,
            wraps=self._inner_location.location,
        )
        # Inspect Docker container
        stdout, returncode = await self.connector.run(
            location=self._inner_location.location,
            command=[
                "docker",
                "inspect",
                "--format",
                "'{{json .}}'",
                name,
            ],
            capture_output=True,
        )
        if returncode == 0:
            try:
                container = json.loads(stdout) if stdout else {}
            except json.decoder.JSONDecodeError:
                raise WorkflowExecutionException(
                    f"Error inspecting Docker container {name}: {stdout}"
                )
        else:
            raise WorkflowExecutionException(
                f"Error inspecting Docker container {name}: [{returncode}] {stdout}"
            )
        # Check if the container user is the current host user
        if self._wraps_local():
            host_user = os.getuid()
        else:
            stdout, returncode = await self.connector.run(
                location=self._inner_location.location,
                command=["id", "-u"],
                capture_output=True,
            )
            if returncode == 0:
                try:
                    host_user = int(stdout)
                except ValueError:
                    raise WorkflowExecutionException(
                        f"Error retrieving volumes for Docker container {name}: {stdout}"
                    )
            else:
                raise WorkflowExecutionException(
                    f"Error retrieving volumes for Docker container {name}: [{returncode}] {stdout}"
                )
        stdout, returncode = await self.run(
            location=location,
            command=["id", "-u"],
            capture_output=True,
        )
        if returncode == 0:
            try:
                container_user = int(stdout)
            except ValueError:
                raise WorkflowExecutionException(
                    f"Error retrieving volumes for Docker container {name}: {stdout}"
                )
        else:
            raise WorkflowExecutionException(
                f"Error retrieving volumes for Docker container {name}: [{returncode}] {stdout}"
            )
        # Retrieve cores and memory
        stdout, returncode = await self.run(
            location=location,
            command=["test", "-e", ".../fs/cgroup/cpuset"],
            capture_output=True,
        )
        if returncode > 1:
            raise WorkflowExecutionException(
                f"Error retrieving cores for Docker container {name}: [{returncode}] {stdout}"
            )
        elif returncode == 0:
            # Handle Cgroups V1 filesystem
            stdout, returncode = await self.run(
                location=location,
                command=[
                    "cat",
                    ".../cgroup/cpu/cpu.cfs_quota_us",
                    ".../cgroup/cpu/cpu.cfs_period_us",
                    ".../fs/cgroup/cpuset/cpuset.effective_cpus",
                    ".../cgroup/memory/memory.limit_in_bytes",
                ],
                capture_output=True,
            )
            quota, period, cpuset, memory = stdout.splitlines()
            if int(quota) == -1:
                quota = "max"
            if int(memory) == 9223372036854771712:
                memory = "max"
        else:
            # Handle Cgroups V2 filesystem
            stdout, returncode = await self.run(
                location=location,
                command=[
                    "cat",
                    ".../fs/cgroup/cpu.max",
                    ".../fs/cgroup/cpuset.cpus.effective",
                    ".../fs/cgroup/memory.max",
                ],
                capture_output=True,
            )
            cfs, cpuset, memory = stdout.splitlines()
            quota, period = cfs.split()
        if returncode == 0:
            if quota != "max":
                cores = (int(quota) / 10**5) * (int(period) / 10**5)
            else:
                cores = 0.0
                for s in cpuset.split(","):
                    if "-" in s:
                        start, end = s.split("-")
                        cores += int(end) - int(start) + 1
                    else:
                        cores += 1
            if memory != "max":
                memory = float(memory) / 2**20
            else:
                stdout, returncode = await self.run(
                    location=location,
                    command=[
                        "cat",
                        "/proc/meminfo",
                        "|",
                        "grep",
                        "MemTotal",
                        "|",
                        "awk",
                        "'{print $2}'",
                    ],
                    capture_output=True,
                )
                if returncode == 0:
                    memory = float(stdout) / 2**10
                else:
>                   raise WorkflowExecutionException(
                        f"Error retrieving memory from `/proc/meminfo` for Docker container {name}: "
                        f"[{returncode}] {stdout}"
                    )
E                   streamflow.core.exception.WorkflowExecutionException: Error retrieving memory from `/proc/meminfo` for Docker container 2996a8d6bbcd6dc6eb51d3333c8cf11d105fe053520dc39c814e5574493da94e: [126] OCI runtime exec failed: exec failed: unable to start container process: procReady not received: unknown

.../deployment/connector/container.py:815: WorkflowExecutionException
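The resource-probing step in the quoted `_populate_instance` reads `cpu.max`, `cpuset.cpus.effective`, and `memory.max` from the container's cgroup-v2 filesystem, with `/proc/meminfo` as a fallback when the memory limit is `max`. As a standalone sketch of that parsing (illustrative only: helper names are hypothetical, and cores are computed with the conventional quota/period conversion rather than the exact expression in the traceback):

```python
def parse_cgroup_v2(cpu_max: str, cpuset: str, memory_max: str):
    """Parse cgroup-v2 limits into (cores, memory_in_MiB_or_None).

    cpu_max:    contents of cpu.max, e.g. "200000 100000" or "max 100000"
    cpuset:     contents of cpuset.cpus.effective, e.g. "0-3,8"
    memory_max: contents of memory.max, e.g. "1073741824" or "max"
    """
    quota, period = cpu_max.split()
    if quota != "max":
        cores = int(quota) / int(period)  # conventional quota/period conversion
    else:
        # No CPU quota: count the CPUs listed in the effective cpuset
        cores = 0.0
        for part in cpuset.split(","):
            if "-" in part:
                start, end = part.split("-")
                cores += int(end) - int(start) + 1
            else:
                cores += 1
    # Memory limit in MiB, or None when unlimited ("max")
    memory = None if memory_max == "max" else float(memory_max) / 2**20
    return cores, memory

def meminfo_total_mib(meminfo_text: str) -> float:
    """Fallback for an unlimited cgroup: read MemTotal (kB) from /proc/meminfo."""
    for line in meminfo_text.splitlines():
        if line.startswith("MemTotal"):
            return float(line.split()[1]) / 2**10  # kB -> MiB
    raise ValueError("MemTotal not found in /proc/meminfo contents")

assert parse_cgroup_v2("200000 100000", "0-3", "1073741824") == (2.0, 1024.0)
assert parse_cgroup_v2("max 100000", "0-3,8", "max") == (5.0, None)
assert meminfo_total_mib("MemTotal:       16384000 kB\nMemFree: 1 kB") == 16000.0
```

The failure above happens one step earlier: the `cat /proc/meminfo` exec inside the container exits 126 ("unable to start container process"), so the fallback never gets output to parse.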

To view more test analytics, go to the Test Analytics Dashboard

@GlassOfWhiskey GlassOfWhiskey merged commit 0eb11eb into master Jan 23, 2025
32 checks passed
@GlassOfWhiskey GlassOfWhiskey deleted the dependabot/pip/cachetools-5.5.1 branch January 23, 2025 08:53