LRU cache question

Problem

Design a data structure that follows the constraints of a Least Recently Used (LRU) cache.

Implement the LRUCache class:

  • LRUCache(int capacity) Initialize the LRU cache with a positive size capacity.
  • int get(int key) Return the value of the key if the key exists, otherwise return -1.
  • void put(int key, int value) Update the value of the key if the key exists. Otherwise, add the key-value pair to the cache. If the number of keys exceeds the capacity from this operation, evict the least recently used key.

The functions get and put must each run in O(1) average time complexity.

Example 1:

Input
["LRUCache", "put", "put", "get", "put", "get", "put", "get", "get", "get"]
[[2], [1, 1], [2, 2], [1], [3, 3], [2], [4, 4], [1], [3], [4]]
Output
[null, null, null, 1, null, -1, null, -1, 3, 4]

Explanation
LRUCache lRUCache = new LRUCache(2);
lRUCache.put(1, 1); // cache is {1=1}
lRUCache.put(2, 2); // cache is {1=1, 2=2}
lRUCache.get(1);    // return 1
lRUCache.put(3, 3); // LRU key was 2, evicts key 2, cache is {1=1, 3=3}
lRUCache.get(2);    // returns -1 (not found)
lRUCache.put(4, 4); // LRU key was 1, evicts key 1, cache is {4=4, 3=3}
lRUCache.get(1);    // return -1 (not found)
lRUCache.get(3);    // return 3
lRUCache.get(4);    // return 4



Solution


To achieve O(1) time for both get and put, we first need a hashmap for key lookups. The harder part is keeping track of which entry is least recently used so we can evict it. A priority queue is not enough here (its operations cost O(log n)); instead we can keep the keys in a queue ordered by recency, with the least recently used element at the front, implemented as a doubly linked list so that any element can be moved to the back in O(1).

One thing that confused me is whether updating an existing key (put with a key already in the cache) counts as a use. I initially assumed it did not, which cost me extra debugging time. The corner cases (the cache at capacity, updating an existing key) deserve attention as well.

Here is the rough algorithm:

- get(k)

  check if k exists in the hashmap
  if so, move k to the tail (most recently used end) of the list
  and return its value

  return -1 if not found

- put(k, v)

  if k exists in the hashmap
    update the value of its node
    move k to the tail of the list

  if k is not in the hashmap
    if the cache is at capacity, evict the element at the head of the list
    add k, v to the hashmap
    add k to the tail of the list
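As an aside, Python's standard library already provides this ordering bookkeeping: collections.OrderedDict remembers insertion order, and its move_to_end and popitem(last=False) methods run in O(1). A minimal sketch along those lines (an OrderedDict-backed variant, not the hand-rolled linked list the solution below uses; the class name LRUCacheOD is just for this sketch):

```python
from collections import OrderedDict

class LRUCacheOD:
    """LRU cache backed by OrderedDict: most recently used entries at the end."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.data = OrderedDict()

    def get(self, key):
        if key not in self.data:
            return -1
        self.data.move_to_end(key)  # mark as most recently used
        return self.data[key]

    def put(self, key, value):
        if key in self.data:
            self.data.move_to_end(key)  # an update counts as a use
        self.data[key] = value
        if len(self.data) > self.capacity:
            self.data.popitem(last=False)  # evict the least recently used entry

cache = LRUCacheOD(2)
cache.put(1, 1)
cache.put(2, 2)
cache.put(1, 10)  # refreshes key 1, so key 2 is now least recently used
cache.put(3, 3)   # evicts key 2
assert cache.get(1) == 10
assert cache.get(2) == -1
assert cache.get(3) == 3
```

The put(1, 10) line exercises exactly the corner case discussed above: updating an existing key refreshes its recency, so key 2 is the one evicted.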
   

UPDATE 03-09-2023:
Here is the Python solution.

class Node:
    def __init__(self, key, value):
        self.key = key
        self.value = value
        self.prev = self.next = None


class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.map = {}      # key -> Node
        self.head = None   # least recently used
        self.tail = None   # most recently used

    def get(self, key):
        if key in self.map:
            node = self.map[key]
            # mark as most recently used by moving the node to the tail.
            self.move_to_tail(node)
            return node.value
        return -1

    def put(self, key, value):
        if key not in self.map:
            self.evict_if_necessary()
            new_node = Node(key, value)
            self.map[key] = new_node
            self.add_to_tail(new_node)
        else:
            found = self.map[key]
            found.value = value
            # updating an existing key also counts as a use.
            self.move_to_tail(found)

    def add_to_tail(self, node):
        if self.head is None:
            self.head = self.tail = node
        else:
            # append after the current tail.
            self.tail.next = node
            node.prev = self.tail
            self.tail = node

    def evict_if_necessary(self):
        if len(self.map) == self.capacity:
            # evict the head, which is the least recently used entry.
            to_evict = self.head
            self.head = self.head.next
            if self.head is not None:
                self.head.prev = None
            del self.map[to_evict.key]

    def move_to_tail(self, node):
        if self.tail == node:
            return
        # unlink the node from its current position.
        prev_node = node.prev
        next_node = node.next
        if prev_node is not None:
            prev_node.next = next_node
        if next_node is not None:
            next_node.prev = prev_node
        if self.head == node:
            self.head = next_node
        # clear stale links before re-appending.
        node.prev = node.next = None
        self.add_to_tail(node)


# test case
lRUCache = LRUCache(2)
lRUCache.put(1, 1)  # cache is {1=1}
lRUCache.put(2, 2)  # cache is {1=1, 2=2}
result = lRUCache.get(1)  # returns 1
assert result == 1, "result = {}".format(result)
lRUCache.put(3, 3)  # evicts key 2, cache is {1=1, 3=3}
result = lRUCache.get(2)  # returns -1 (not found)
assert result == -1, "result = {}".format(result)
lRUCache.put(4, 4)  # evicts key 1, cache is {3=3, 4=4}
result = lRUCache.get(1)  # returns -1 (not found)
assert result == -1, "result = {}".format(result)
result = lRUCache.get(3)  # returns 3
assert result == 3, "result = {}".format(result)
result = lRUCache.get(4)  # returns 4
assert result == 4, "result = {}".format(result)

Practice statistics

15 mins: algorithm
12 mins: to code (with bugs)
10 mins: to debug
