Async programming is crucial for building responsive and scalable web applications.
In Python, it is usually done with asyncio, which lets developers write concurrent, suspendable (non-blocking) code with ease.
However, there are times when we need to cache the results of async functions.
In synchronous Python, this is usually done with a ready-made cache such as the LRU (Least Recently Used) cache in the functools package, but that approach does not carry over to async functions: calling an async function returns a coroutine object, not the function's output, so the coroutine itself gets cached. Hence we need to implement our own cache, preferably in the form of a decorator to keep the code concise.
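To see the problem concretely, here is a minimal sketch (the `fetch_value` function is hypothetical): `functools.lru_cache` stores the coroutine object produced by the first call, so a repeated call with the same arguments hands back a coroutine that has already been awaited.

```python
import asyncio
import functools

@functools.lru_cache(maxsize=None)
async def fetch_value(key: str) -> str:
    await asyncio.sleep(0.1)          # simulate slow I/O
    return f"value for {key}"

async def main() -> None:
    print(await fetch_value("a"))     # first call: works as expected
    try:
        # lru_cache returns the *same coroutine object* again, which has
        # already been awaited, so awaiting it raises a RuntimeError.
        print(await fetch_value("a"))
    except RuntimeError as exc:
        print(f"second call failed: {exc}")

asyncio.run(main())
```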
To do this, we first define a key that will be used to identify the arguments passed to a function.
Note that it must be hashable so that it can be used as a key in a dictionary (hash map).
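One possible sketch of such a key, under the assumption that every argument value is itself hashable (the `_CacheKey` name is a placeholder, not necessarily what is used in the final code):

```python
class _CacheKey:
    """Hashable key built from a function's positional and keyword arguments."""

    def __init__(self, args: tuple, kwargs: dict) -> None:
        # Sort keyword arguments so that f(a=1, b=2) and f(b=2, a=1) map to
        # the same key; every argument value must itself be hashable.
        self._key = (args, tuple(sorted(kwargs.items())))

    def __hash__(self) -> int:
        return hash(self._key)

    def __eq__(self, other: object) -> bool:
        return isinstance(other, _CacheKey) and self._key == other._key
```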
We will focus on implementing an LRU cache, which is basic yet one of the most commonly used caching algorithms, notably in memory paging.
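A minimal LRU cache along these lines might look as follows, here backed by `collections.OrderedDict` (the `LRUCache` name and its `get`/`set` interface are illustrative assumptions):

```python
from collections import OrderedDict
from typing import Any, Hashable

class LRUCache:
    """Minimal LRU cache backed by an OrderedDict."""

    def __init__(self, maxsize: int = 128) -> None:
        self.maxsize = maxsize
        self._data = OrderedDict()

    def __contains__(self, key: Hashable) -> bool:
        return key in self._data

    def get(self, key: Hashable) -> Any:
        # Raises KeyError on a miss; on a hit, mark the entry as most
        # recently used by moving it to the end of the ordering.
        value = self._data[key]
        self._data.move_to_end(key)
        return value

    def set(self, key: Hashable, value: Any) -> None:
        self._data[key] = value
        self._data.move_to_end(key)
        # Evict the least recently used entry once over capacity.
        if len(self._data) > self.maxsize:
            self._data.popitem(last=False)
```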
For the wrapper (decorator), we can define the following wrapper class, which ties together the aforementioned key and cache implementations.
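A sketch of what such a wrapper class could look like, reusing the `_CacheKey` and `LRUCache` sketches above (the `async_lru_cache` name is a placeholder; the actual implementation may differ):

```python
import functools

class async_lru_cache:
    """Decorator that caches the awaited results of an async function."""

    def __init__(self, maxsize: int = 128) -> None:
        self.lru = LRUCache(maxsize=maxsize)

    def __call__(self, func):
        @functools.wraps(func)
        async def wrapper(*args, **kwargs):
            key = _CacheKey(args, kwargs)
            if key in self.lru:
                # Cache hit: return the stored result without re-running func.
                return self.lru.get(key)
            # Cache miss: await the coroutine and store its result.
            result = await func(*args, **kwargs)
            self.lru.set(key, result)
            return result
        return wrapper
```

Note that this sketch does not guard against two concurrent calls with the same arguments both missing the cache and computing the result twice; a per-key `asyncio.Lock` would close that gap if it matters for your workload.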
The wrapper for TTL (time-to-live) caching can be written in a similar way; just replace self.lru with an instance of the TTL cache defined above.
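For illustration, here is a minimal stand-in for that substitution, assuming a TTL cache with the same `get`/`set`/`__contains__` interface as the LRU sketch (both class names below are placeholders for the ones defined earlier in the article):

```python
import time
from typing import Any, Hashable

class TTLCache:
    """Stand-in TTL cache: entries expire `ttl` seconds after being set."""

    def __init__(self, ttl: float = 60.0) -> None:
        self.ttl = ttl
        self._data = {}  # key -> (expiry timestamp, value)

    def __contains__(self, key: Hashable) -> bool:
        entry = self._data.get(key)
        if entry is None:
            return False
        if time.monotonic() >= entry[0]:
            del self._data[key]       # expired: drop it lazily
            return False
        return True

    def get(self, key: Hashable) -> Any:
        if key not in self:           # membership check handles expiry
            raise KeyError(key)
        return self._data[key][1]

    def set(self, key: Hashable, value: Any) -> None:
        self._data[key] = (time.monotonic() + self.ttl, value)


class async_ttl_cache(async_lru_cache):
    """Same wrapper logic as the LRU decorator, backed by a TTL cache."""

    def __init__(self, ttl: float = 60.0) -> None:
        # Keep the attribute name `self.lru` so the inherited __call__
        # works unchanged.
        self.lru = TTLCache(ttl=ttl)
```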
Now, we can use this with an asynchronous function.
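For example, with the `async_lru_cache` sketch above (the `fetch_user` coroutine is hypothetical):

```python
import asyncio

@async_lru_cache(maxsize=32)
async def fetch_user(user_id: int) -> dict:
    await asyncio.sleep(1)            # simulate a slow I/O-bound lookup
    return {"id": user_id, "name": f"user-{user_id}"}

async def main() -> None:
    print(await fetch_user(1))        # takes ~1 second, result gets cached
    print(await fetch_user(1))        # returns immediately from the cache

asyncio.run(main())
```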