Experiment - Word embeddings concept (Word2Vec)
Problem: You want to create word embeddings with Word2Vec that capture word meanings from a small text dataset.
Current Metrics: The Word2Vec model is trained with vector size 50, window size 2, and 5 epochs. The embeddings show poor similarity results; for example, 'king' and 'queen' have a low cosine similarity of 0.2.
Issue: The model is undertrained and uses a small context window, so it fails to capture word similarity well.