Most marketing teams buy their SEO tools. We built ours.
That is not a boast; it is context for what follows. In early 2026, Tomedes made a deliberate decision to stop relying solely on off-the-shelf SEO platforms and build a proprietary AI-powered command center that connects directly to our data, understands our content, and generates recommendations specific to what we do: professional human translation, enhanced by AI, for businesses operating across 270-plus languages. According to McKinsey's 2024 State of AI report, organizations that develop internal AI capabilities (rather than depending exclusively on third-party tools) are significantly more likely to report competitive advantage from their AI investments. We wanted to test that thesis in our own context.
This article is the honest account of what we built, how it works, why a translation agency needed something custom, and what it has changed about how our team operates. It is not a product pitch. It is a record of a real strategic shift, the kind we think more language service providers (LSPs) should be making.
Standard SEO platforms are built for generalist use cases. They measure rankings, crawl errors, and backlink profiles — all of which matter, but none of which understands the specific intent behind a query like "certified German to English legal translation" versus "translate German to English free."
For a company with 20 years of experience in professional translation, the gap between generic keyword data and actionable content insight was significant. We were spending time manually translating platform outputs into decisions that a tool with context about our services, our language pair pages, our client industries, and our positioning could have surfaced automatically.
The bigger issue was scale. Tomedes operates across 270-plus languages, dozens of service categories, multiple regional markets, and a growing suite of AI-enhanced tools. The content surface we need to manage (and the SEO signals we need to monitor) is far too specific for a platform built to serve e-commerce, SaaS, and news publishers equally. We needed something that thought like us.
The term "command center" is intentional. This is not a reporting dashboard. A dashboard shows you what happened. A command center helps you decide what to do next.
Tomedes' SEO command center is a Claude-powered AI system connected directly to our Google Search Console data, Google Analytics 4, and our internal content database. It ingests live performance signals (impressions, clicks, rankings, page-level conversion data) and uses that context to generate specific, prioritized recommendations: which pages to update, which keywords are underserved, where technical issues are depressing rankings, and what content gaps are costing us traffic.
The distinction that matters: it does not generate generic best practices. It generates recommendations calibrated to our actual data. When it flags a page for improvement, it references the specific query set that page is underperforming on, not a theoretical keyword universe. As we described it internally, it functions as "a decision machine," built by continuously adding context.
That framing (decision machine) is the key. It does not replace the marketing team's judgment. It gives us better inputs so our decisions are faster and better-grounded.
The honest answer to how we built it: iteratively, and with more friction than we expected.
The foundation is a server-side Claude Code implementation. Our team built a pipeline that connects to Search Console and GA4 via their respective APIs, pulls structured data on a defined schedule, and passes that data as context to a Claude-powered analysis layer. The system runs both scheduled reports (weekly SEO performance summaries, anomaly detection, quick-win identification) and ad-hoc queries, where team members can ask specific questions about specific pages or keyword clusters.
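To make the pipeline concrete, here is a minimal sketch of the context-assembly step described above. The `QueryRow` shape, field names, and row limit are illustrative assumptions, not Tomedes' actual schema; the structure mirrors what the Search Console Search Analytics API returns per page-and-query pair, flattened into a compact text block that can then be embedded in a prompt to the analysis layer.

```python
from dataclasses import dataclass

# Hypothetical row shape mirroring a Search Console
# searchanalytics.query response row for a (page, query) pair.
@dataclass
class QueryRow:
    page: str
    query: str
    impressions: int
    clicks: int
    position: float

def build_analysis_context(rows: list[QueryRow], max_rows: int = 50) -> str:
    """Flatten performance rows into a compact tab-separated block
    suitable for passing as context to an LLM analysis prompt.
    Rows are ordered by impressions so the highest-demand queries
    survive the max_rows cap."""
    lines = ["page\tquery\timpressions\tclicks\tctr\tavg_position"]
    for r in sorted(rows, key=lambda r: r.impressions, reverse=True)[:max_rows]:
        ctr = r.clicks / r.impressions if r.impressions else 0.0
        lines.append(
            f"{r.page}\t{r.query}\t{r.impressions}\t{r.clicks}"
            f"\t{ctr:.1%}\t{r.position:.1f}"
        )
    return "\n".join(lines)

rows = [
    QueryRow("/german-legal-translation",
             "certified german to english legal translation", 1200, 18, 8.4),
    QueryRow("/free-tools", "translate german to english free", 9400, 40, 12.2),
]
context = build_analysis_context(rows)
# In the real pipeline, this string would be embedded in a prompt and
# sent to the model, e.g. via the Anthropic Messages API:
#   client.messages.create(model=..., messages=[{"role": "user",
#       "content": f"Given this Search Console data:\n{context}\n..."}])
print(context)
```

The point of this shape is that the model never sees generic benchmarks, only the site's own live numbers, which is what makes the recommendations specific.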
Early versions ran locally, which limited who could use it. A key milestone was migrating to a server deployment with an SSH interface, which made it accessible to the broader content and marketing team without requiring everyone to manage local environments. A user interface layer is currently in development to make the system accessible without any command-line interaction at all.
The CI/CD pipeline matters here. All changes to the command center go through Git, with version control and rollback capability. This is not optional; it is what keeps a system that touches live SEO decisions stable and auditable. We learned that lesson early.
Total build time to a functional first version: several weeks of iterative development alongside our normal workload. The system has been in active use and continuous improvement since.
In practice, the command center serves four functions:
1. Weekly performance analysis. Every week, the system generates a structured report covering organic traffic trends, click-through rate changes, ranking movements by page cluster, and a prioritized list of quick wins — small, high-confidence changes that are likely to improve performance within the current search cycle.
2. Content gap identification. The system compares our existing content against the query universe our pages are appearing for but not ranking well on. It identifies specific keyword and topic gaps, and generates a brief for each — including the search intent, the recommended content approach, and suggested internal linking opportunities.
3. Technical SEO flagging. When crawl issues, hreflang errors, sitemap problems, or Core Web Vitals regressions appear in our data, the command center surfaces them with context: which pages are affected, what the likely ranking impact is, and what the fix looks like. This has compressed our response time on technical issues significantly.
4. EAT and authority recommendations. This is where translation-specific context matters most. The system understands that Expertise, Authoritativeness, and Trustworthiness signals (client mentions, ISO certifications, industry-specific case studies, expert bylines) are particularly important for a professional services brand in a YMYL-adjacent category like certified translation. It flags EAT gaps the same way it flags keyword gaps.
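The quick-win and gap-identification logic in functions 1 and 2 can be sketched as a simple filter over the same performance rows: queries with real demand, weak click-through, and a ranking close enough to page one that a content or title change could plausibly move it. The thresholds and dict keys below are illustrative assumptions, not Tomedes' actual values.

```python
def find_quick_wins(rows, min_impressions=500, max_ctr=0.02,
                    pos_range=(4.0, 15.0)):
    """Heuristic quick-win filter over Search Console rows.
    Each row is a dict with page, query, impressions, clicks, position.
    Keeps queries with meaningful demand, weak CTR, and a position
    just off (or low on) page one; thresholds are illustrative."""
    wins = []
    for r in rows:
        ctr = r["clicks"] / r["impressions"] if r["impressions"] else 0.0
        if (r["impressions"] >= min_impressions
                and ctr <= max_ctr
                and pos_range[0] <= r["position"] <= pos_range[1]):
            wins.append({**r, "ctr": ctr})
    # Highest-demand opportunities first.
    return sorted(wins, key=lambda w: w["impressions"], reverse=True)

rows = [
    {"page": "/legal", "query": "certified legal translation",
     "impressions": 2000, "clicks": 20, "position": 6.3},
    {"page": "/free", "query": "translate free",
     "impressions": 8000, "clicks": 900, "position": 2.1},
    {"page": "/niche", "query": "rare pair translation",
     "impressions": 90, "clicks": 1, "position": 9.0},
]
wins = find_quick_wins(rows)
# Only /legal qualifies: high demand, 1% CTR, position 6.3.
```

In the actual system the model layers intent analysis and a content brief on top of a filter like this; the heuristic only decides what is worth the model's attention.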
Once the system was in daily use, several things shifted, some expected and some not.
The expected change: decision speed. Questions that previously required manual data pulls, cross-referencing multiple platform dashboards, and significant analyst time now have answers in minutes. The team spends less time finding data and more time acting on it.
The unexpected change: the quality of the questions we ask. When the barrier to getting an answer drops, you ask more questions. Our content and SEO discussions became more specific and more frequent — not because we mandated it, but because the tool made specificity easy. We started asking things like "which of our European language pair pages is losing the most ground to AI Overviews this quarter, and why?" instead of "how is our multilingual SEO performing?"
The harder truth: the command center also made our gaps more visible. When you have a system that continuously surfaces what is not working, you cannot avoid the uncomfortable data. Traffic to our tools section declined over a period of months before we had a clear, prioritized view of why and what to do about it. The command center did not cause that decline, but it did force an honest reckoning with it faster than we would have had otherwise.
That is, in our view, the right kind of discomfort. According to HubSpot's State of Marketing Report 2024, marketers who use AI tools for SEO analysis report significantly higher confidence in their content decisions — but only when those tools are connected to their own data rather than generic benchmarks. Our experience confirms that distinction.
Human judgment still matters everywhere that context cannot be fully captured in data.
The command center is excellent at identifying what is happening and surfacing options. It is not equipped to make final calls on brand voice, editorial judgment, or strategic prioritization between competing opportunities. Those decisions require human expertise — specifically, people who understand the translation industry, our clients, and the long-term positioning we are building.
This is not a limitation to work around. It is the design principle. Tomedes is a human-first company. We leverage AI to enhance speed, consistency, and efficiency — while our experts ensure that every decision serves the business, the audience, and the reputation we have built over 20 years. The command center is an input into human decision-making, not a replacement for it.
There is also a quality-assurance dimension that matters specifically for a professional services brand. When the system recommends a content update, a human reviews that recommendation before it goes to production. When it flags a technical fix, an engineer confirms it before deployment. AI-generated recommendations without human review are a risk we are not willing to take with content that carries our brand's credibility.
Gartner's research on AI augmentation in marketing teams consistently finds that the highest-performing implementations are those where AI handles data processing and pattern recognition while humans retain decision authority. That is the model we operate by.
If we were starting over, we would change three things.
Start with the UI earlier. The command center ran via SSH for longer than it needed to, which limited adoption within the team. Building a basic web interface earlier would have accelerated the value we extracted from it. We underestimated how much friction a command-line interface creates for non-technical team members who had the most to gain from the tool.
Define success metrics before deployment. We had a general sense of what we wanted the tool to do, but we did not define specific, measurable success criteria upfront. That made it harder to assess early whether the system was working as intended or whether we were just generating more output. Clearer KPIs from the start (reduction in time-to-insight, increase in content output quality scores, improvement in tracked keyword positions) would have given us faster feedback loops.
Invest in documentation earlier. As the system grew more capable, the gap between what it could do and what the team knew it could do widened. Regular internal documentation (what the tool does, how to query it, what kinds of questions it answers well) would have accelerated adoption and reduced duplicated effort.
None of these are reasons not to build. They are the lessons that come from being early.
Q: Do you need a technical team to build something like this?
A: You need access to technical capability, but not necessarily a large engineering team. Tomedes' command center was built iteratively by a small team using Claude Code, API connections to existing platforms, and a Git-based deployment pipeline. The key requirement is not headcount; it is a willingness to invest in iterative development and accept that the first version will be imperfect.
Q: How is this different from just using an SEO platform like Semrush or Ahrefs?
A: Off-the-shelf platforms provide general SEO data across any industry. Our command center is connected to our own Search Console and GA4 data and generates recommendations calibrated to our specific content, language pairs, service categories, and competitive context. The difference is the depth of context: a generic platform tells you that a page has low click-through rate; our system tells you which specific queries are driving impressions without clicks on that page, what the intent gap is, and what content change is most likely to close it.
Q: Can smaller translation agencies build something similar?
A: Yes, though the scope should match the team's capacity. A smaller LSP might start with a more limited version (AI-assisted analysis of Search Console data for a defined set of priority pages) before expanding scope. The underlying approach is the same: connect your own data, define the questions you need answered, and build a system that generates specific answers rather than generic reports.
Q: Does this mean Tomedes is moving away from human SEO expertise?
A: The opposite. The command center exists to make our human SEO and content team more effective. By automating data aggregation, anomaly detection, and initial recommendation generation, we free up expert time for higher-value work: editorial judgment, strategic positioning, content that requires genuine translation industry knowledge to write well. AI handles the pattern recognition. Humans handle the decisions.
Q: How does this connect to Tomedes' translation services?
A: The same principle that drives the command center drives how we approach translation: AI enhances speed and consistency, human expertise ensures quality and contextual accuracy. We apply that philosophy to our own operations because we believe it is the right model — not just for our clients' translation programs, but for how any professional services company should think about integrating AI into their work.
About the author
William Mamane
CMO of Tomedes