Elefas is a SQL schema for Postgres implementing a long-term translation memory: unlike Cyclotis, this memory is indexed and supports more than two languages for the same segment. It is designed to be called in batch mode, analysing a document and returning the matches it finds in a format usable by CAT tools (TMX). It is not designed for real-time updates: once it contains millions of segments, updating the indexes may become too slow to get an answer within seconds.
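To illustrate the idea of one segment linked to variants in several languages, here is a minimal in-memory sketch; the class and method names are assumptions for illustration, not the actual Elefas schema or API:

```ruby
# Hypothetical sketch of a multi-language translation memory.
# Names (TranslationMemory, add, lookup) are illustrative only.
class TranslationMemory
  def initialize
    # Each entry groups the variants of one segment across languages:
    # { "en" => "Hello", "fr" => "Bonjour", "de" => "Hallo" }
    @entries = []
    # Index from [lang, text] to the entries containing that variant,
    # mirroring the SQL indexes that make batch lookups fast.
    @index = Hash.new { |h, k| h[k] = [] }
  end

  def add(variants)
    @entries << variants
    variants.each { |lang, text| @index[[lang, text]] << variants }
  end

  # Return every target-language variant recorded for a source segment.
  def lookup(src_lang, text, tgt_lang)
    @index[[src_lang, text]].map { |entry| entry[tgt_lang] }.compact
  end
end

tm = TranslationMemory.new
tm.add("en" => "Hello", "fr" => "Bonjour", "de" => "Hallo")
tm.lookup("en", "Hello", "de")  # => ["Hallo"]
```

In the real schema the grouping would live in SQL tables with indexes on the language and segment columns, which is precisely why bulk updates can degrade query latency as the table grows.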
Like Cyclotis, Elefas is an umbrella project aggregating several components to be downloaded separately:
This part contains tools to create the SQL database, command-line scripts to import and export data, and an API to access it. All these tools are designed to run on the server itself (meaning that the server hosting the database also executes the scripts): access from external machines requires one of the web interfaces.
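As a sketch of what the import step involves, the following reads an already-segmented text file into rows ready for insertion; the one-segment-per-line format and the function name are assumptions for illustration, not the actual script's interface:

```ruby
# Hypothetical sketch of reading an already-segmented text (TXT) file,
# assumed here to hold one segment per line, into rows for a bulk INSERT.
# Names and format are illustrative, not the real Elefas importer.
def read_segments(path, lang)
  File.readlines(path, chomp: true)
      .reject(&:empty?)                        # skip blank lines
      .map { |text| { lang: lang, text: text } }
end
```

Running the importer on the server itself means such rows can be loaded in one transaction, close to the database, rather than streamed over the network.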
It will be available in two versions:
- written in Perl and deliberately limited to supporting TMX, XLIFF and already-segmented text (TXT) files;
Currently available as an alpha release.
- written in Ruby, in order to use Culter and Spongiae to support more formats;
Work on this version has not yet started, as Spongiae is not mature enough to build on.
Both versions will use the same SQL schema, so one version will be able to query a database created by the other.
As with Cyclotis, interfaces make it possible to use an Elefas database through a web site:
- HTML, for human interface
- REST, for use by other software
In both cases, you can import a file or submit an input file to retrieve matching segments; the result will arrive in a directory on the server (in the future, it should instead be delivered by e-mail).
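The TMX file delivered as a result could be assembled along these lines; this is a minimal sketch under assumptions (the helper name and header values are invented, and a real exporter would fill in more of the TMX 1.4 structure):

```ruby
require "cgi"  # CGI.escapeHTML handles XML escaping

# Hypothetical sketch of writing matched segments as a TMX document.
# Each unit maps language codes to segment text, e.g.
# { "en" => "Hello", "fr" => "Bonjour" }.
def to_tmx(units)
  tus = units.map do |unit|
    tuvs = unit.map do |lang, text|
      %(    <tuv xml:lang="#{lang}"><seg>#{CGI.escapeHTML(text)}</seg></tuv>)
    end
    "  <tu>\n#{tuvs.join("\n")}\n  </tu>"
  end
  <<~TMX
    <?xml version="1.0" encoding="UTF-8"?>
    <tmx version="1.4">
    <header creationtool="elefas-sketch" srclang="*all*" segtype="sentence"
            adminlang="en" o-tmf="plain" datatype="plaintext"/>
    <body>
    #{tus.join("\n")}
    </body>
    </tmx>
  TMX
end
```

Because a `<tu>` may carry any number of `<tuv>` elements, this output format fits the more-than-two-languages-per-segment design directly.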
Alpha release available here