Administration plugins for migrating from one Goobi workflow system to another Goobi workflow system
The two plugins described here can be used to transfer data from one Goobi workflow system to another (Goobi-to-Goobi). This documentation explains how to install, configure and use the associated plugins.
Before the export and import mechanism can be used, various installation and configuration steps must be completed. These are described in detail here:
The mechanism for transferring data from one Goobi workflow system to another (Goobi-to-Goobi) is divided into three major steps.
These three steps are as follows:
The first step involves enriching the data within the file system on the source system with the information that Goobi has stored internally in the database for each process. When this step is performed, an additional xml file containing the database information on the workflow and some other necessary data is written to the folder for each Goobi process.
Creation of the export directories
After the complete creation and enrichment of the export directories on the source system, they can be transferred to the server of the target system. This can be done in different ways; due to the amount of data involved, a transfer using `rsync` has proven to be the most suitable.
Transfer of the export directories
After the export directories have been successfully transferred to the target system, the data can be imported there. To do this, the data must be stored in the correct place in the system, and some preparations regarding the infrastructure must be made.
| Name | Value |
| :--- | :--- |
| Identifier | intranda_administration_goobi2goobi_export<br/>intranda_administration_goobi2goobi_import_infrastructure<br/>intranda_administration_goobi2goobi_import_data |
| Repository | |
| Licence | GPL 2.0 or newer |
| Last change | 25.07.2024 11:11:13 |
After the export directories have been created, the process folders can be copied from the source system to the target system. Depending on the amount of data involved, different methods can be used for the transfer.
If an external hard disk is to be used for the transfer, the `cp` command can be used to copy from the source system to the hard disk and later from the hard disk to the target system.
Example call for the copy operation from the source system to the external hard disk:
Example call for the copy operation from the external hard disk to the target system:
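Both copy operations follow the same pattern. The following is a minimal sketch using temporary stand-in directories; on real systems the paths would be the Goobi metadata folder (e.g. `/opt/digiverso/goobi/metadata`) and the mount point of the external disk, both of which are assumptions here:

```shell
# Stand-in directories; replace with the real metadata folder and
# the external disk's mount point on your systems.
SOURCE=$(mktemp -d)   # metadata folder on the source system
DISK=$(mktemp -d)     # mounted external hard disk
TARGET=$(mktemp -d)   # metadata folder on the target system

# a fake process folder with its exported database file
mkdir -p "$SOURCE/123"
touch "$SOURCE/123/123_db_export.xml"

# hop 1: source system -> external disk
# -r copies the process folders recursively, -p preserves attributes
cp -rp "$SOURCE/." "$DISK/"

# hop 2: external disk -> target system
cp -rp "$DISK/." "$TARGET/"
```

The `-p` flag keeps timestamps and permissions intact, which avoids surprises when Goobi later reads the copied process folders.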
If a network connection can be established between the source system and the target system, the data can be transferred using the commands `scp` or `rsync`. The advantage of `rsync` is that an interrupted transfer can be resumed without having to start again from the beginning.
An example of such a call is as follows:
If the call should only transfer certain directories, use a maximum bandwidth and also exclude other data, such a call could also become a bit more extensive:
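Such a call might combine a bandwidth cap with exclude filters; the folder names and the limit below are illustrative only:

```shell
# Stand-in directories; on a real system the source would again be
# a remote path and "temp" whatever directories you want to skip.
SOURCE=$(mktemp -d)
TARGET=$(mktemp -d)
mkdir -p "$SOURCE/123" "$SOURCE/temp"
touch "$SOURCE/123/123_db_export.xml" "$SOURCE/temp/scratch.dat"

# --bwlimit caps the transfer rate (here 10240 KiB/s, roughly 10 MB/s),
# --exclude skips directories that should not be migrated
rsync -av --partial --bwlimit=10240 --exclude 'temp/' "$SOURCE/" "$TARGET/"
```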
To export to an S3 bucket on AWS, you can use the script `s3sync.py`.
The import of data on the target system takes place using two different plugins. These must first be installed and configured accordingly. More information about their installation and configuration can be found here:
After the successful installation, you can continue with the actual import. A distinction must be made here between the pure import of processes and the import of an exported infrastructure. Depending on the project, the import of the infrastructure may be necessary as the first step.
In the area for importing the infrastructure, the previously exported infrastructure of the source system can be imported. To do this, first open the plugin Goobi-to-Goobi Import - Infrastructure in the Administration menu.
At this point you can now upload a zip file that was previously created on the source system. After the successful upload, the file is unpacked on the server and analyzed. The user then receives a summary of the data to be imported.
If users, projects, groups, etc. with the same names as the data to be imported already exist in the target system, they do not count as new data to be imported and cannot be overwritten. After selecting the data to be imported, the import can be started by clicking on Execute import of infrastructure.
If desired, the data can be manipulated during the import. This is possible by adapting the configuration file `plugin_intranda_administration_goobi2goobi_import_infrastructure.xml`. More details can be found in the section Configuration for importing the infrastructure.
To import the processes from the source system, they must first be successfully exported and transferred to the target system. How the transfer of the sometimes very large amounts of data can take place is described here:
Transfer of export directories
Once the data has been completely transferred to the target system, you can start the import. To do this, open the plugin Goobi-to-Goobi Import - Data in the Administration menu. The configured rules for the import are displayed in the upper part of the user interface. If these rules are edited on the target system, they can be reloaded at any time by clicking the Reload rules button.
The actual import takes place in the lower area of the user interface. There the user can first search for the data to be imported by clicking Reload files. If this search takes longer than 10 seconds due to a large amount of data, it continues in the background and the user is asked to refresh the page after some time.
If files are listed after the search, they can be selected either individually or all at once by clicking Select all. In addition, the rule to be applied during the import must be chosen: it can either be selected directly or determined via Autodetect rule, in which case the system checks whether a rule exists that corresponds to the name of the project to which the process was assigned.
A click on the button Perform import of data then starts the actual import. During this import, an internal Goobi ticket is created for each selected process and sent to the internal message queue. The individual tickets are processed in the background, so the processes are imported successively.
You can configure the import and the underlying rules in detail in the configuration file `plugin_intranda_administration_goobi2goobi_import_data.xml`. Further information about this configuration can be found in the section Configuration for import of data.
To start up the Goobi-to-Goobi mechanism, various plugins must be installed and configured on both the source and target systems. These are described in detail here.
First, the source system must be prepared for the export. This includes installing the correct plugin; afterwards, only a permission for the appropriate users has to be configured to allow the export.
On the source system, the plugin `plugin_intranda_administration_goobi2goobi_export` must first be installed to create the export directories. To do this, the following two files must be copied to the appropriate paths:
Please note that these files must be readable by the user `tomcat`.
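How the readability is ensured depends on the installation; as a sketch, using a temporary stand-in file instead of the real plugin jars (on a real system you would additionally run `chown tomcat:tomcat <file>` as root, where the file name depends on your installation):

```shell
# Stand-in for an installed plugin jar file.
JAR=$(mktemp)

# Make the file world-readable so the tomcat user can access it;
# tighter permissions via chown tomcat:tomcat plus chmod 640 also work.
chmod a+r "$JAR"

ls -l "$JAR"
```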
To enable the user to export data, the user must have the following roles:
These roles can be configured within the Goobi workflow user groups. To do this, simply select the roles on the right-hand side or enter them in the input field and then click on the plus icon.
With this configuration, the preparation on the source system side is complete.
The target system must also be prepared for the import. After the installation of the corresponding plugin and the corresponding configuration files, some configurations have to be checked or made.
On the target system, the plugin `plugin_intranda_administration_goobi2goobi_import` must first be installed to import the export directories. To do this, the following two files must be copied to the appropriate paths:
After the installation of the actual plugin, the corresponding configuration files must also be installed. These can be found under the following paths:
Again, please note that the installed files must all be readable by the user `tomcat`.
To enable a user to perform the import, the user must have the following role:
This role can be configured within the Goobi workflow user groups by entering it in the input field on the right-hand side and clicking on the plus icon.
To influence the data to be imported during the import of the infrastructure, the configuration file `plugin_intranda_administration_goobi2goobi_import_infrastructure.xml` can be adapted. This configuration can look like the following example:
In this configuration file, all fields are optional. If a field is missing, its value is not overwritten during the import. If a field is present but empty, it is imported empty; otherwise it is overwritten with the value from this configuration file. The fields for adding or removing are repeatable.
To import the data to the target system, you can specify in the configuration file `plugin_intranda_administration_goobi2goobi_import_data.xml` where the data is located and how it should be processed during the import. This configuration can look like the following example:
In the upper part of the file, some general settings are made that apply to all imports. These general settings are followed by the individual configured rules.
General settings: globalConfig
The individual rules for the import operations are defined within the `<config>` element. The name of the rule is defined in `<rulename>`. If no rule is explicitly selected during the import, it is determined by the project name of the processes. The field is repeatable, so that several identical rules can be created, for example if the same workflow is used in different projects.
By means of `<step>`, individual steps of the process can be manipulated. All fields are optional; if they are not specified, the original value is used, otherwise the field is overwritten with the configured content. If a field is of type String, it can also be specified empty in order to empty it.
In this element, the assigned docket can be replaced. The xsl file to be used must exist on the server. If a docket has already been defined with the new specifications, it will be used, otherwise a new docket will be defined and stored in the database.
This rule can be used to change the assigned project. The project must already exist. Changes to the projects themselves can be made using Import infrastructure.
| Element | Example | Meaning |
| :--- | :--- | :--- |
| `@name` | Project A | Old project |
| `newProjectName` | Project B | New project |
This rule is used to manipulate process properties.
| Element | Example | Meaning |
| :--- | :--- | :--- |
| `@name` | CollectionName | Name of the property to be adjusted. |
| `oldPropertyValue` | Digitised | Value of the property to be adjusted. If a value is specified, the property must contain this value. |
| `newPropertyName` | Collection | New name of the property. Optional. |
| `newPropertyValue` | default collection | New value of the property. Optional. |
This rule can be used to change the assigned ruleset. If the ruleset does not yet exist, it is created and saved in the database. The file must exist on the server.
| Element | Example | Meaning |
| :--- | :--- | :--- |
| `@name` | Default | Name of the ruleset used so far. |
| `newRulesetName` | default ruleset | New name for the ruleset. |
| `newFileName` | ruleset.xml | New file name for the ruleset. This file must exist on the target system. |
With this rule the metadata can be changed. Values of existing metadata can be changed, new metadata added or existing metadata deleted.
Further general settings can be defined within a rule.
General settings (`globalConfig`):

| Element | Example | Meaning |
| :--- | :--- | :--- |
| | | This specification is required if the database information to be imported is not located as XML files in the respective process folders. It contains the path to the database information within an S3 bucket and is not required when importing into a local file system. |
| | | Target directory into which the data is to be imported. |
| | | Name of the S3 bucket in which the data to be imported is located. This value is not required for imports into a local file system. |
| | | Defines whether the process identifiers from the old system should be reused or whether new IDs should be created. |
| | | Path to the folder containing the data to be imported. The value only needs to be configured if it differs from the globally configured value. |

Settings for steps:

| Element | Example | Meaning |
| :--- | :--- | :--- |
| | | Contains the name of the step to be changed. |
| | | Contains the type of manipulation. |
| | | New name of the step. |
| | | New priority of the step. |
| | | Order of the step. |
| | | Controls whether to link to the user's home directory. |
| | | Sets the step status to one of the allowed values. |
| | | Contains the different settings of a step in its attributes. |
| | | Defines scripts for the workflow steps. |
| | | Defines the configuration of the HTTP call for the step. |
| | | Name of the assigned user group. This value can be repeated to define multiple user groups. |

Settings for dockets:

| Element | Example | Meaning |
| :--- | :--- | :--- |
| | | Name of the previously used docket. The change is only made if the process has previously used a docket with this name. |
| | | New name of the docket. |
| | | New file name for the docket. |

Settings for metadata:

| Element | Example | Meaning |
| :--- | :--- | :--- |
| | | Internal name of the metadata. |
| | | Type of change. |
| | | Describes the position at which the change is to be made. |
| | | A regular expression that checks whether the previous field content matches a defined value. This specification can be a fixed value or a regular expression. |
| | | If the value … |

Further general settings:

| Element | Example | Meaning |
| :--- | :--- | :--- |
| | | Determines whether the process log of the source system should be transferred. |
| | | Specifies whether the users of imported tasks in a workflow within Goobi should be created as deleted users. |
The export from the source system consists of up to three sub-steps. However, before the export can take place, it must first be specified within the role system of Goobi workflow that the user must have export permissions. Information on the configurations to be made can be found here:
After configuring the required user rights, the actual export can begin. In most cases, only the first of the following three steps will be necessary.
For most purposes, only this sub-step is required to generate the export files for all desired processes. For all selected processes within the file system, an xml file with all relevant information about the process is generated from the database in the folder of each selected process.
To perform such an export for several processes together, you can start it using GoobiScript. The following GoobiScript command is required for this:
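The command itself is not shown on this page. Based on the plugin identifier, the action is presumably called `exportDatabaseInformation`; please verify the exact name against the GoobiScript list of your installation:

```yaml
---
action: exportDatabaseInformation
```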
When you run this GoobiScript, you will find the relevant export XML file (e.g. `5_db_export.xml`) in each process folder.
To perform such an export for a single process, it is possible to start it within the details of a process. To do this, simply click on the corresponding icon for the export.
Unlike exporting via GoobiScript, this starts a download of the xml file that contains the database information.
Notice: This substep is optional and is only required in rare cases.
If you want to transfer more than just processes from one Goobi workflow to another, you can also generate export data for process templates. However, as GoobiScript is not available within the process template area, this export can be done from the provided Goobi-to-Goobi Export plugin within the Administration menu.
Now click on the button Generate database files for process templates. This also saves an XML file with the database information for each process template in the file system, which can then be used for the transfer to the target system.
Notice: This substep is optional and is only required in rare cases.
If, in addition to the actual Goobi processes, you also want to transfer more detailed information about the infrastructure from one Goobi workflow to another, you can have this exported within the export plugin as well. To do this, select the relevant checkboxes within the Goobi-to-Goobi Export plugin to influence the export in a targeted manner. The following parameters are available:
Once you have selected the desired information and clicked the Download infrastructure as a zip file button, Goobi generates a zip file and offers it for download under the name `goobi-to-goobi-export.zip`. This zip file contains all the information selected from the Goobi database for transfer to the target system.
| Option | Meaning |
| :--- | :--- |
| LDAP groups | Exports the existing LDAP groups. |
| Users | Exports the active users. |
| Include inactive users | Also exports the deactivated users in addition to the active ones. |
| Create new passwords | Determines whether the existing passwords of the users should be exported as well. If the checkbox is set, new passwords must be set on the target system for the imported users after the import. |
| User groups | Exports the user groups, permissions and additional roles. |
| User group assignments | Exports all group assignments of the users. |
| Projects | Exports the projects. |
| Project assignments | Exports all project assignments of the users. |
| Rulesets | Exports the ruleset information. |
| Dockets | Exports the docket information. |
| Include files | Determines whether the exported zip file should include the ruleset and docket files. |