Optimization of Model Parameters
When working on database development, after models have been assigned to phases, as described on the Assessment of Thermodynamic Data page, the undetermined (i.e., optimizing) parameters of the models are fitted to the input data captured in the first step. The selected input data (experimental data and first-principles calculation results) is written in POLY-3 syntax in a so-called POP file. This file is read into the PARROT Module, which allows the model parameters to be optimized against the input data. The optimization demands extensive human judgement at several stages, mainly because the optimizing parameters of all phases must be consistent with each other. In other words, the modeling task is a multi-objective optimization with constraints, analogous to training multiple models in a machine learning context.
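In miniature, the fitting step above is a weighted least-squares problem. The sketch below is a toy stand-in, not the PARROT algorithm: it fits two hypothetical Redlich-Kister coefficients of a binary excess-enthalpy model to synthetic "measured" data, with each datum weighted by an assumed experimental uncertainty.

```python
import numpy as np
from scipy.optimize import least_squares

# Toy stand-in for a PARROT-style parameter optimization (illustrative only):
# fit two Redlich-Kister coefficients L0, L1 of a binary excess enthalpy
# to synthetic "experimental" data.

def excess_enthalpy(x, L0, L1):
    # Redlich-Kister expansion: H_xs = x(1-x) * [L0 + L1*(2x - 1)]
    return x * (1.0 - x) * (L0 + L1 * (2.0 * x - 1.0))

# Synthetic input data: compositions, noisy "measurements", and an assumed
# 1-sigma uncertainty used to weight each residual (all values invented).
x_exp = np.array([0.1, 0.25, 0.4, 0.5, 0.6, 0.75, 0.9])
h_exp = excess_enthalpy(x_exp, -12000.0, 3000.0) + np.array(
    [150.0, -200.0, 80.0, -120.0, 60.0, 180.0, -90.0])
sigma = np.full_like(x_exp, 200.0)  # assumed uncertainty, J/mol

def residuals(params):
    L0, L1 = params
    # Dividing by sigma gives each datum a weight, as in a real assessment
    return (excess_enthalpy(x_exp, L0, L1) - h_exp) / sigma

fit = least_squares(residuals, x0=[0.0, 0.0])
L0_fit, L1_fit = fit.x
print(L0_fit, L1_fit)
```

With consistent synthetic data the fit recovers the generating coefficients closely; in a real assessment, data from conflicting sources pull the parameters in different directions, which is where the human judgement comes in.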
Traceability is important in any CALPHAD-type project, but especially when the objective is to develop a multicomponent database. It is crucial to keep track of every change or addition made, so using a version control system is highly recommended, as is writing detailed descriptions or comments for every change and update.
All files used in the assessment should be stored in a traceable way in case a re-assessment is needed, for instance because new experimental information becomes available. These files are of many different types: setup files (text files containing model information), POP files (text files containing the selected input data), PAR files (binary files in which the optimization is performed), EXP files (text files for graphical comparison with experimental data), TDB files (text files with the current state of the database under optimization), various other result files, and more.
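As a small illustration of keeping an assessment directory auditable, the sketch below reports which of the file types listed above are present in a working directory. The function name and the extension-to-description mapping are assumptions for illustration, not a Thermo-Calc convention.

```python
from pathlib import Path

# Hypothetical audit helper (illustrative, not a Thermo-Calc tool):
# report which of the assessment file types are present in a directory.
EXPECTED = {
    ".pop": "selected input data",
    ".par": "PARROT work file (binary)",
    ".exp": "graphic comparison with experiments",
    ".tdb": "current state of the database",
}

def audit_assessment_dir(path):
    """Return {extension: present?} for the expected assessment file types."""
    found = {p.suffix.lower() for p in Path(path).iterdir() if p.is_file()}
    return {ext: ext in found for ext in EXPECTED}
```

Run after each update, such a check (or simply the status output of a version control system) makes it obvious when a file needed for a future re-assessment has gone missing.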
Once the optimization is complete, the Gibbs energy functions with their optimized parameters are stored in a text file, the so-called database (TDB) file, in a format readable by Thermo-Calc. The database can then be loaded into Thermo-Calc and used to make predictions.
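A TDB database is plain text built from keyword records, each terminated by `!`. The fragment below is schematic only, for a hypothetical pure-Fe BCC description: the element data and function coefficients are placeholders, not values from any real assessment.

```
$ Schematic TDB fragment -- all numbers are placeholders, not assessed values
ELEMENT FE   BCC_A2     5.5847E+01  4.4890E+03  2.7280E+01 !

FUNCTION GHSERFE  298.15  +1225.7+124.134*T-23.5143*T*LN(T);  1811.0  N !

PHASE BCC_A2 % 2  1  3 !
CONSTITUENT BCC_A2 : FE : VA : !

PARAMETER G(BCC_A2,FE:VA;0)  298.15  +GHSERFE;  6000.0  N !
```

Because the format is plain text, a TDB file diffs cleanly under version control, which supports the traceability practices described above.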
Validation and Consistency
The final step in the process of developing a CALPHAD database is to validate its predictions against experimental results. The first stage of validation comes with the optimization itself: when the fit to binary, ternary, and sometimes quaternary experimental data is good enough, the result is validated at that level. What “good enough” means in this context is ultimately a human judgement, but PARROT provides tools that help with it. However, good descriptions of the binary and ternary sub-systems are not always enough.
When developing a multicomponent database for a specific alloy system (Fe/Steels, Ni-base Superalloys, Al-, Ti-, Mg- alloys, and so on), validation against commercial and other multicomponent alloys is of great importance. If the agreement with real multicomponent commercial alloys is poor for key data points, re-optimizing one or more lower-order systems is the only way to rectify this. A systematic evaluation of the agreement between calculations and experiments for real multicomponent alloys also gives good guidance on which areas to improve and where more experimental information is desirable.
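Such a systematic evaluation can be as simple as tabulating errors across a set of alloys. In the sketch below the alloy names, solvus temperatures, and tolerance are invented for illustration; in practice the calculated values would come from the database under test.

```python
import statistics

# Hypothetical validation sketch: compare calculated vs. measured solvus
# temperatures (K) for a set of alloys. All names and numbers are invented.
measured   = {"alloy_A": 1423.0, "alloy_B": 1391.0, "alloy_C": 1448.0}
calculated = {"alloy_A": 1430.0, "alloy_B": 1402.0, "alloy_C": 1419.0}

def validation_report(measured, calculated, tolerance=15.0):
    """Return the mean absolute error and the alloys outside tolerance."""
    errors = {name: calculated[name] - measured[name] for name in measured}
    mae = statistics.mean(abs(e) for e in errors.values())
    outliers = [name for name, e in errors.items() if abs(e) > tolerance]
    return mae, outliers

mae, outliers = validation_report(measured, calculated)
print(mae, outliers)
```

Flagged outliers point to the lower-order sub-systems most likely to need re-optimization, and to compositions where more experimental information would be valuable.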
It is recommended that such validation be performed after every update of the database to ensure continued good agreement between experiments and calculations. All of our major thermodynamic and properties databases include a collection of validation examples, which you can find on their respective pages in the database section. You can also read the Ni-based Superalloys Database Examples Collection as an example.
Figure 2. Calculated precipitate solvus temperature for various Ni-base superalloys compared with literature data. The calculation is included in the collection of examples used to validate our Ni-based superalloys database, TCNI.