# Model Pipeline: Function Calling in LLMs
This pipeline shows how a large language model (LLM) uses function calling to improve responses by invoking external functions during text generation.
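The core loop can be sketched in a few lines of Python. This is a minimal, self-contained illustration under stated assumptions: the tool registry, the `get_weather` tool, and the JSON call format are all hypothetical stand-ins, not any particular provider's API.

```python
import json

# Hypothetical tool registry; the tool name and return value are illustrative.
TOOLS = {
    "get_weather": lambda city: {"city": city, "temp_c": 21},
}

def run_pipeline(model_output: str) -> str:
    """If the model emitted a function call (here encoded as JSON),
    invoke the matching tool and return its result so it can be fed
    back into the model's context; otherwise pass the text through."""
    try:
        call = json.loads(model_output)
    except json.JSONDecodeError:
        return model_output  # ordinary text, no function call requested
    fn = TOOLS[call["name"]]
    result = fn(**call["arguments"])
    # In a real pipeline, this result is appended to the conversation
    # and the model generates the final user-facing answer from it.
    return json.dumps(result)

# Simulated model output requesting a tool call:
print(run_pipeline('{"name": "get_weather", "arguments": {"city": "Paris"}}'))
```

In practice the model's decision of *when* to emit such a call, rather than plain text, is exactly what the training run below is optimizing.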
Figure: training loss vs. epochs. The loss falls steadily from about 1.0 at epoch 1 toward 0.3 by epoch 5.

| Epoch | Loss ↓ | Accuracy ↑ | Observation |
|---|---|---|---|
| 1 | 0.85 | 0.60 | Model starts learning to detect when to call functions. |
| 2 | 0.65 | 0.72 | Improved accuracy in predicting function calls. |
| 3 | 0.50 | 0.81 | Better integration of function outputs in responses. |
| 4 | 0.38 | 0.88 | Model effectively calls functions and generates accurate answers. |
| 5 | 0.30 | 0.92 | Training converges with high accuracy and low loss. |
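The convergence claim in the last row can be made concrete with a simple stopping check. This is a sketch, not the author's actual criterion: the per-epoch numbers are taken from the table above, and the `loss_delta` threshold is an assumed value for illustration.

```python
# Per-epoch (epoch, loss, accuracy) metrics from the table above.
HISTORY = [
    (1, 0.85, 0.60),
    (2, 0.65, 0.72),
    (3, 0.50, 0.81),
    (4, 0.38, 0.88),
    (5, 0.30, 0.92),
]

def has_converged(history, loss_delta=0.1):
    """Illustrative convergence check: training is considered converged
    once the epoch-to-epoch loss improvement drops below `loss_delta`."""
    if len(history) < 2:
        return False
    prev_loss, last_loss = history[-2][1], history[-1][1]
    return prev_loss - last_loss < loss_delta

print(has_converged(HISTORY))  # the final loss drop (0.38 -> 0.30) is below 0.1
```

By this criterion the run converges at epoch 5, matching the observation in the table; a stricter threshold would simply require more epochs.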