Use aiojobs for background tasks #573

Open
wants to merge 1 commit into master

11 changes: 6 additions & 5 deletions aiocache/decorators.py
@@ -3,12 +3,16 @@
import inspect
import logging

import aiojobs

from aiocache.base import SENTINEL
from aiocache.factory import Cache, caches
from aiocache.lock import RedLock


logger = logging.getLogger(__name__)
loop = asyncio.get_event_loop()
scheduler = loop.run_until_complete(aiojobs.create_scheduler(pending_limit=0, limit=None))
Comment on lines +14 to +15
Member:
This won't work correctly. When the user runs their application, it'll be on a different loop.

You'll need to instantiate it inside the class, probably in the __init__() method.

Member:
Although, if this is in the decorator, that may have the same problem. We may need to think on this a little further; maybe we can create it in decorator() if it doesn't already exist...
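
A rough sketch of that lazy approach (the helper name _get_scheduler and the module-level caching are illustrative only, not part of this PR):

import aiojobs

_scheduler = None


async def _get_scheduler():
    # Created lazily from inside a running coroutine, so the scheduler binds to
    # the loop the application is actually using, not one picked at import time.
    global _scheduler
    if _scheduler is None:
        _scheduler = await aiojobs.create_scheduler(pending_limit=0, limit=None)
    return _scheduler


# Inside decorator(), the background write would then become something like:
#     scheduler = await _get_scheduler()
#     await scheduler.spawn(self.set_in_cache(key, result))

A per-instance variant (stored on self in __init__() and awaited on first use) would avoid sharing one scheduler across all decorated functions, though closing it cleanly remains the open question raised below.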

Member:
This also highlights another issue to me: the scheduler and cache should both have .close() called at the end of the application, but I don't see any way for a decorator to do that.

I'm starting to think that the decorators should have the Cache instance passed to them or something, so the developer can handle the lifetime of the caches properly.

Turns out this is more complex than I thought...
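
A loose sketch of that direction (passing a Cache instance via cache= and wiring the scheduler through the application are hypothetical here, not something this PR or the current decorator API implements):

import asyncio

import aiojobs
from aiocache import Cache, cached

cache = Cache(Cache.MEMORY)


@cached(cache=cache, ttl=60)  # hypothetical: decorator reuses an application-owned Cache
async def expensive_call(x):
    return x * 2


async def main():
    # Hypothetical wiring: the application creates the scheduler on its own
    # running loop, so it can close both the scheduler and the cache on shutdown.
    scheduler = await aiojobs.create_scheduler(pending_limit=0, limit=None)
    try:
        await expensive_call(21)
    finally:
        await scheduler.close()
        await cache.close()


asyncio.run(main())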



class cached:
@@ -112,9 +116,7 @@ async def decorator(
if aiocache_wait_for_write:
await self.set_in_cache(key, result)
else:
# TODO: Use aiojobs to avoid warnings.
asyncio.create_task(self.set_in_cache(key, result))

await scheduler.spawn(self.set_in_cache(key, result))
return result

def get_cache_key(self, f, args, kwargs):
@@ -336,8 +338,7 @@ async def decorator(
if aiocache_wait_for_write:
await self.set_in_cache(result, f, args, kwargs)
else:
# TODO: Use aiojobs to avoid warnings.
asyncio.create_task(self.set_in_cache(result, f, args, kwargs))
await scheduler.spawn(self.set_in_cache(result, f, args, kwargs))

return result

1 change: 1 addition & 0 deletions requirements-dev.txt
@@ -1,4 +1,5 @@
-e .[dev,redis,memcached,msgpack]
aiojobs==1.0.0
aiohttp==3.8.1
flake8==4.0.1
flake8-bandit==3.0.0
1 change: 1 addition & 0 deletions setup.py
@@ -38,6 +38,7 @@
'redis:python_version>="3.8"': ["aioredis>=1.3.0,<2.0"],
"memcached": ["aiomcache>=0.5.2"],
"msgpack": ["msgpack>=0.5.5"],
"aiojobs": ["aiojobs>=1.0.0"],
Member:
This isn't an extra; it's required, so it needs to be added to install_requires (a sketch of that change follows the diff below).

},
include_package_data=True,
)
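
For reference, a minimal sketch of moving the dependency into install_requires (the metadata fields are trimmed and the package discovery is a placeholder, since the rest of setup.py is not shown in this diff):

from setuptools import find_packages, setup

setup(
    name="aiocache",
    packages=find_packages(),
    # aiojobs is imported unconditionally by aiocache.decorators, so it is a
    # hard dependency rather than an optional extra.
    install_requires=["aiojobs>=1.0.0"],
    extras_require={
        'redis:python_version>="3.8"': ["aioredis>=1.3.0,<2.0"],
        "memcached": ["aiomcache>=0.5.2"],
        "msgpack": ["msgpack>=0.5.5"],
    },
    include_package_data=True,
)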