Numerous language workbenches have been proposed over the past decade to ease the definition of Domain-Specific Languages (DSLs). Language workbenches enable language designers to specify DSLs using high-level meta-languages and to automatically generate their implementation (e.g., parsers, interpreters, editors). However, little attention has been given to the performance of the resulting interpreters. In many domains where performance is key (e.g., scientific and high-performance computing), this forces language designers to handcraft ad hoc optimizations in the generated interpreter code. In this paper, we propose to systematically exploit the information contained in language specifications to derive optimized Truffle-based language interpreters executed on the GraalVM. We implement our approach on top of the Eclipse Modeling Framework (EMF) by complementing its existing compilation chain with Truffle-specific information that drives the GraalVM to perform optimized just-in-time compilation. A key benefit of our approach is that it leverages existing language specifications and does not require additional information from language designers, who remain oblivious to Truffle’s low-level intricacies. We evaluate our approach using a representative set of four DSLs and eight conforming programs. Compared to the standard interpreters generated by EMF running on the HotSpot VM, we observe an average speed-up of x2.96, ranging from x2.02 to x5.06. Although the benefits vary slightly from one DSL or program to another, we conclude that our approach yields significant performance gains while remaining transparent to language designers.