Which statement best describes the role of working memory in managing the current task state in an AI agent?
Think about what information an AI needs right now to perform a task.
Working memory temporarily holds information relevant to the current task, allowing the AI to focus and update its state as needed.
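This idea can be sketched in a few lines of Python, representing working memory as a dictionary that is updated as the task progresses. The keys ('goal', 'step', 'observations') and the task names are illustrative, not a standard API.

```python
# Minimal sketch: working memory as a mutable task-state store.
working_memory = {'goal': 'summarize document', 'step': 0, 'observations': []}

def advance(memory, observation):
    """Record the latest observation and move to the next step."""
    memory['observations'].append(observation)
    memory['step'] += 1
    return memory

advance(working_memory, 'loaded document')
advance(working_memory, 'extracted key points')
print(working_memory['step'])  # 2
```

Only the information needed for the current task lives in this structure; older observations can be dropped or summarized once they are no longer relevant.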
What is the output of this Python code simulating a working memory update for task state?
working_memory = {'step': 1, 'info': 'start'}
# Update working memory for the next step
working_memory['step'] += 1
working_memory['info'] = 'processing'
print(working_memory)
Check how the dictionary values change after the update.
The 'step' key is incremented to 2 and 'info' is updated to 'processing', so the code prints {'step': 2, 'info': 'processing'}.
Which model architecture is best suited for maintaining and updating working memory of the current task state in sequential decision-making?
Consider models that handle sequences and remember past inputs.
Recurrent neural networks (RNNs) process sequences one step at a time while carrying a hidden state forward, making them well suited for maintaining and updating working memory across a task.
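A minimal Elman-style recurrence illustrates this: the hidden state h acts as working memory, combined with each new input to produce the next state. The sizes and random weights below are arbitrary choices for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
hidden_size, input_size = 8, 4
W_h = rng.normal(scale=0.1, size=(hidden_size, hidden_size))  # state-to-state weights
W_x = rng.normal(scale=0.1, size=(hidden_size, input_size))   # input-to-state weights

def rnn_step(h, x):
    """One recurrence step: new state depends on old state and new input."""
    return np.tanh(W_h @ h + W_x @ x)

h = np.zeros(hidden_size)                   # empty working memory
for x in rng.normal(size=(5, input_size)):  # a 5-step input sequence
    h = rnn_step(h, x)                      # h now carries traces of all past inputs
print(h.shape)  # (8,)
```

Because each new state is computed from the previous one, information from early inputs can persist in h across the whole sequence.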
Which hyperparameter most directly affects the capacity of an RNN to maintain working memory over longer sequences?
Think about what controls the size of the memory in the network.
The number of hidden units determines how much information the RNN can store and process at each step.
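A quick sketch of why this is the relevant hyperparameter: the state vector has one entry per hidden unit, so the hidden-unit count fixes how many numbers are available each step to encode the past, regardless of sequence length.

```python
# Sketch: the hidden-size hyperparameter sets the dimensionality of the
# state vector, and hence the per-step capacity of the working memory.
def init_memory(hidden_units):
    return [0.0] * hidden_units  # one slot of state per hidden unit

print(len(init_memory(32)))   # 32
print(len(init_memory(512)))  # 512
```

Other hyperparameters (learning rate, sequence length used in training) affect how well that capacity is used, but the number of hidden units bounds it directly.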
Given this code snippet for updating an agent's working memory, what error causes the working memory to reset unexpectedly?
class Agent:
    def __init__(self):
        self.working_memory = {}

    def update_memory(self, key, value):
        working_memory = {}
        working_memory[key] = value

agent = Agent()
agent.update_memory('task', 'start')
print(agent.working_memory)
Check variable scope inside the method.
The method creates a new local dictionary named working_memory instead of assigning to self.working_memory, so the instance attribute is never updated and the program prints an empty dictionary {}.
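The fix is to write through self so the update lands on the instance attribute rather than a method-local variable:

```python
class Agent:
    def __init__(self):
        self.working_memory = {}

    def update_memory(self, key, value):
        self.working_memory[key] = value  # mutate the instance attribute

agent = Agent()
agent.update_memory('task', 'start')
print(agent.working_memory)  # {'task': 'start'}
```

With self.working_memory, the update persists across method calls, which is exactly the behavior working memory requires.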