The Story Behind PromptLoc
The story of PromptLoc began in 2023 at Edimart. As a language service provider, we were eager to integrate Large Language Models into our daily work, but we quickly ran into a wall of frustrations with the existing memoQ plugins.
Our primary issue was context. We noticed that virtually all plugins suffered from "tunnel vision," seeing only one segment at a time. In professional translation, where coherence is key, this segment-level processing was a constant source of errors.
We also faced what we call "resource blindness." Our projects are rich with metadata, terminology, and reference materials, yet the standard MT solutions couldn't see or use this data effectively. We were leaving our best assets on the table.
Furthermore, we felt a distinct lack of control. The "black box" nature of existing tools meant we couldn't tweak instructions or adapt the behavior to different clients. We knew that MT shouldn't be a one-size-fits-all solution; it needed to be adaptable to specific project requirements.
Finally, we realized that LSPs need more than just translation. We needed a tool capable of broader linguistic tasks—like revision, rephrasing, and document preparation—integrated directly into our memoQ workflow. Driven by these challenges, we built PromptLoc to be the solution we couldn't buy.
Our Learnings
Throughout this work, we have consistently found that high-quality MT depends on:
- Processing more than a single segment at a time (true context awareness)
- Full integration of heavy resources (TM hits, allowed and forbidden TB terms)
- Complete control over prompts and parameters (no two projects require exactly the same solution)
- The ability to use reference files and inject relevant content into prompts based on semantic similarity rather than character-level matching. Think of it as LiveDocs, but matching on meaning instead of edit distance (see the sketch after this list).
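What might these four points look like in practice? PromptLoc's internals aren't described here, so the following is only a minimal Python sketch of the general approach: `build_prompt`, `retrieve_reference`, `TermPair`, and the caller-supplied `embed` function are all hypothetical names, and a real embedding model would stand in for `embed`. The sketch assembles a single prompt from a context window of segments, TM hits, allowed and forbidden terms, and semantically retrieved reference material.

```python
from dataclasses import dataclass
from typing import Callable, List, Sequence, Tuple

Vector = Sequence[float]

@dataclass
class TermPair:
    source: str
    target: str
    forbidden: bool = False

def cosine(a: Vector, b: Vector) -> float:
    """Cosine similarity: the basis of meaning-level matching."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(x * x for x in b) ** 0.5
    return dot / (na * nb) if na and nb else 0.0

def retrieve_reference(query: str,
                       chunks: List[str],
                       embed: Callable[[str], Vector],
                       top_k: int = 2) -> List[str]:
    """Rank reference chunks by embedding similarity instead of edit
    distance, so paraphrased material still surfaces."""
    q = embed(query)
    return sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)[:top_k]

def build_prompt(segments: List[str],
                 index: int,
                 tm_hits: List[Tuple[str, str]],
                 terms: List[TermPair],
                 reference_chunks: List[str],
                 embed: Callable[[str], Vector],
                 window: int = 2) -> str:
    """Assemble one LLM prompt touching all four points above."""
    # 1. Context awareness: include the surrounding segments, not just one.
    ctx = segments[max(0, index - window): index + window + 1]
    # 2. Heavy resources: TM hits plus allowed/forbidden TB terms.
    tm_block = "\n".join(f"- {src} => {tgt}" for src, tgt in tm_hits)
    allowed = ", ".join(f"{t.source} -> {t.target}" for t in terms if not t.forbidden)
    forbidden = ", ".join(t.source for t in terms if t.forbidden)
    # 3. Semantic reference injection ("LiveDocs with semantic awareness").
    refs = retrieve_reference(segments[index], reference_chunks, embed)
    # 4. Prompt control: the template is plain text the user could edit.
    return "\n".join([
        "Translate the SEGMENT, keeping it coherent with its context.",
        "Context:", *ctx,
        f"SEGMENT: {segments[index]}",
        "Translation memory hits:", tm_block,
        f"Required terms: {allowed}",
        f"Forbidden terms: {forbidden}",
        "Relevant reference material:", *refs,
    ])
```

The design choice worth noting is in `retrieve_reference`: because chunks are ranked by embedding similarity, a reference paragraph can match a segment it shares no wording with, which edit-distance matching would miss.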