Hallucinations and bad answers are more often our fault than a failure of the training data or the LLM itself.
Yes, GenAI is fundamentally a token predictor, guessing the next most likely word. But when we provide no context, we leave it free to meander down too many pathways that lead away from the result we actually want.
Effective use of prompt frameworks, prompting techniques (CoT, ToT, SoT, etc.), prompt engineering structures, feedback mechanisms, validation mechanisms, and other elements that give context to our inquiries, combined with iteration, can produce a significant decrease in so-called hallucinations. When the model is given only a few possible lanes of travel, we greatly increase the odds of a correct response.
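
To make that concrete, here is a minimal Python sketch of the idea: supply explicit context, ask for chain-of-thought style reasoning, then run a quick validation pass over the draft. The call_llm() helper is a hypothetical placeholder rather than any real SDK call, and the prompt wording is just one of many ways to structure it.

```python
def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for a real LLM API call.
    Wire this up to whatever client your provider offers."""
    raise NotImplementedError


def answer_with_guardrails(question: str, context: str) -> str:
    # 1. Constrain the model to the supplied context and ask for
    #    step-by-step (CoT-style) reasoning before the final answer.
    draft_prompt = (
        "Use ONLY the context below to answer the question.\n"
        "If the context is insufficient, say so instead of guessing.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}\n\n"
        "Think step by step, then give a final answer on the last line."
    )
    draft = call_llm(draft_prompt)

    # 2. Validation pass: ask the model to check its own draft against
    #    the same context -- a cheap feedback mechanism that flags
    #    claims the context does not support.
    review_prompt = (
        "Review the draft answer against the context. List any claims "
        "not supported by the context, then output a corrected answer.\n\n"
        f"Context:\n{context}\n\n"
        f"Draft answer:\n{draft}"
    )
    return call_llm(review_prompt)
```

The point of the two-step shape is simply to narrow the lanes: the first prompt limits what the model may draw on, and the second gives it a structured chance to catch its own unsupported claims before you ever see the answer.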