The data_injection plugin

This extension works with GLPI version 0.70 or higher. It enables the import of data from CSV files.

It can, for example:
  • import machines upon delivery (electronic delivery order in CSV)
  • import additional data
  • import equipment not managed by OCS
  • transfer data from another IT asset management tool

Installation

  • Copy the plugin files into the GLPI plugins directory
  • Log in to GLPI
  • In the "Configuration" menu, click "Files injection"
  • Click "Install the plugin"

Rights management

Access : Administration / Profiles / Files injection / Rights management

For every user profile, you can grant 2 rights :
  • Create a model (write) : author mode, which allows the user to define the behavior of the extension during file data injection
  • Use a model (read) : user mode, which allows file data injection

Use
Access : Plugins / Files injection

The extension is presented as a wizard: choices are made in sequential steps, and it is possible to return to a previous step to correct a choice.

Step 1 : Use or Manage Models
  • Create a new model
  • Modify an existing model
  • Delete an existing model
  • Use an existing model

The first step lets you choose which of these actions to perform, according to your rights.

Model creation

Step 2 : File type information

This step lets you define the model's main options :
  • Model's visibility :
    • private : the model is visible only to its author
    • public : the model is visible in the entity in which it was created (sub-entities set to no), or in that entity plus its sub-entities (sub-entities set to yes)
  • Data type to insert : the type of data contained in the input file. A file must contain only one type of data (e.g. only "Contacts" data, or only "Computers" data)
  • File type : at present, only CSV format is supported
  • Creation of lines : directs the extension to create a new object when a line does not match an existing element in the database
  • Update of lines : directs the extension to modify existing objects using data in the file
  • Presence of a header : indicates whether the first line of the CSV file is a header line containing the column names
  • File delimiter : the field separator (by default a semicolon); a sample file is shown after this list
  • Add titles : tells the extension whether titles found in the file (location, equipment type, model, ...) must be created when they do not already exist. Creation will also be limited by user-mode rights
  • Update existing fields : tells the extension whether file data may overwrite fields that already contain data
  • Try to establish network connections : indicates whether the extension should connect the equipment to network equipment when it finds the necessary information (location, network socket)
  • Date format : indicates the format of the dates as they appear in the CSV file. If the format is incorrect, the dates will not be imported
  • Float format : indicates the format of the floats as they appear in the CSV file. If the format is incorrect, the floats will not be imported
  • Port attributes : indicates which port attributes are used to look up an existing port in the GLPI database
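
For illustration, here is a small CSV file matching common options above: a header line, semicolon delimiters, and dates written consistently in a single format (here dd/mm/yyyy). The column names and values are only examples:

    Name;Serial number;Type;Location;Date of purchase
    PC-0042;XJ4589B;Laptop;Building A > Floor 1;31/10/2008
    PC-0043;XJ4590C;Laptop;Building A > Floor 2;31/10/2008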

Step 3 : Selection of the file to upload

  • Choice of the file : choose a file on your computer whose structure exactly matches the data already in the database (example : locations whose levels are separated by the > sign)
  • File encoding : ISO8859-1 for Windows files, UTF-8 for files created under Linux. Automatic detection can determine the encoding, but it slows down file processing
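
If you are unsure of a file's encoding, one option is to convert it to UTF-8 yourself before uploading it. This is not a plugin feature; the snippet below is only a sketch in Python, and the file names are examples:

    # Convert a CSV file exported from Windows (ISO8859-1) to UTF-8
    # before uploading it to the plugin. File names are examples.
    with open("export_windows.csv", "r", encoding="iso8859-1") as src:
        content = src.read()
    with open("export_utf8.csv", "w", encoding="utf-8") as dst:
        dst.write(content)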

Step 4 : Define the association of file fields and base fields

This step is essential. You must define, for each file column, the corresponding field in the database.

Columns are identified by the header line (if its presence was declared in step 2) or by their position (starting from 0). The file delimiter (defined in step 2) must be set correctly, or the columns of the file will not be split correctly.

The button "see the file" allows to have a preview of file first lines to verify their contents and the performance made by the extension.

For each column, you can choose :
  • Don't retrieve the data of this column
  • Choose a table : the data type defined in step 2, or financial information
  • Choose a field : the field that will be filled by the import
  • Check connection field : indicates that this column will be used to check whether the data already exists in the database, and therefore whether to create a new object or update an existing one. Typical connection fields are the serial number, the name or the inventory number. At least one connection field must be defined. If the box is checked, that field must be present in the CSV file used for the import (see the example below).

Note : it is possible to map several file columns to "Comments" or "Notes". Each column will be added to the field on a separate line.
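
As an illustration, a hypothetical mapping for a computer import (using the example columns from the sample file above) could be:

    Column 0 ("Name")             -> Computers / Name                          (connection field)
    Column 1 ("Serial number")    -> Computers / Serial number                 (connection field)
    Column 2 ("Type")             -> Computers / Type
    Column 3 ("Location")         -> Computers / Location
    Column 4 ("Date of purchase") -> Financial information / Date of purchase

The table and field names are examples; the actual choices depend on the data type selected in step 2.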

This step is important: a bad choice can damage inventory data. It is advisable to test the import on a test database and to back up the production database before importing any data. Also, this function should only be made available to knowledgeable users.

Step 5 : Additional information

This step lets you define fields which can or must be filled in by the user during the file injection.

For each such field, the same value will be applied to every line of the file and will take priority over the file content.
  • Table : the data type defined in step 2 or financial information
  • Field : the field that will be filled by the import
  • Mandatory information : whether the value must be entered by the model user, or is optional. Note that a field marked as mandatory must contain unique data: when importing users, marking the LAST_NAME field as mandatory will cause errors if two users share the same last name. Instead, give each user a unique login, add a LOGIN column to the CSV file, and make that the only mandatory value in the model ("defining the model").
Usage examples :
  • Date of purchase, when importing a delivery order
  • Comments, to define a criterion that can later be used by the search engine (example : IMPORT OF 31/10)
  • Location : e.g. the storage location
  • etc.
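
For example, when importing a delivery order, the additional information could be set up as follows (hypothetical values):

    Table : Financial information    Field : Date of purchase    Mandatory : yes
    Table : Computers                Field : Comments            Mandatory : no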

Step 6 : Saving the model

Confirmation before saving. It is still possible to return to previous steps to check and correct options.
  • Enter the model name : it will be visible in the step 1 menu
  • Add a comment : it will be visible during model selection in step 1

After saving, the extension asks : "Do you want to use the model now?". Choosing yes saves time by going directly to step 3 of the use phase, because the file has already been uploaded to the server.

BE CAREFUL : if you work with sub-entities, you must select the correct entity before performing the import.

Use a model

Step 1 : Choose the model

Four choices are available :
  • Create a new model
  • Modify an existing model
  • Delete an existing model
  • Use an existing model

Step 2 : Use an existing model

Select the model from the list of available models, then click 'Next'.

Enter the path of the CSV file to import. If you know the file encoding, specify it; otherwise, leave "automatic detection".

Step 3 : Entry of additional information

Step 4 : Confirmation before import

Step 5 : Result of the import

Additional Notes

Posted by a user in the Forums:
  1. A field marked as "Mandatory" for an import operation must contain unique data
    I was importing user data and, at first, marked my LAST_NAME field as mandatory, but this caused import errors when two users had the same last name. Instead, I added a unique login value for each user and added a LOGIN field to my CSV file. Then, I made only this value mandatory in the import options wizard ("defining the model").
  2. Japanese names have to be enclosed in quotes, like this:
      login1,"??","??",,Group1
      login2,"??","???",,Group2
    In my experience, it does not hurt to enclose every field in quotes, which can make preparing the CSV file easier (a small script sketch is given after these notes). I did not test, but you might need to quote other kinds of non-ASCII text data, like Russian names or Arabic or whatever.
  3. If you are linking a field to a field in a different database table than the primary one, you will get an "unsuccessful injection" warning if there is no value for that field
    For example, I wanted to associate my imported users to a group. So, I had my CSV data containing the group name in one of the fields, like above. However, some users had no group, and this caused the warnings in the import log. (The "Datas injection" field of the import log said 'Data not found (FK_group=")'.)
    So in my case, I made a dummy group and added that group name to the users with no group. This let me import without warnings.
    Also, I don't know how the plugin determined which field of the Groups I wanted to use to link the users to the groups. I entered the group name in the CSV data, and the import process magically did the thing I wanted (associate it to the entity in a different database table by using the Name field). I THINK the plugin looks up records in other tables by "Name" when creating these associations.
  4. In my experience, the import log always says "Datas to insert are incorrect" in the "Checking datas" field, even when there is no error and the result is "Import is successful". So, I ignored that.
  5. I found I could not import objects of the same type with the same text in the "Name" field, even though that is possible in the web UI
    The case was, I was importing accounts. We have several file server accounts, where the username is the same as accounts on the mail server. I imported the mail accounts OK, but when I tried to import the file server accounts from CSV, the data_injection plugin threw tons of errors saying "you do not have the rights to update datas". I think it was looking up other accounts by the Name field to detect if they existed.
    Example: I am using Accounts plugin also (called compte in French I think). I usually make the "Name" field for an account the same as the login. My user John Smith has a mail server account with login jsmith, and also a file server account with login jsmith. Importing these two accounts would not work if the Name field for each account was to be jsmith. So instead, I made my CSV data such that each account had a different Name: "jsmith (mail)" and "jsmith (file server)".
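
Regarding item 2 above: if you build the CSV file with a script rather than by hand, one way to enclose every field in quotes is Python's csv module. This is only a sketch; the file name, logins and names below are made-up examples:

    import csv

    # Example rows: login, last name, first name, empty column, group.
    # The names are placeholders, not real data.
    rows = [
        ["login1", "山田", "太郎", "", "Group1"],
        ["login2", "佐藤", "花子", "", "Group2"],
    ]

    with open("users.csv", "w", newline="", encoding="utf-8") as f:
        writer = csv.writer(f, quoting=csv.QUOTE_ALL)  # enclose every field in quotes
        writer.writerows(rows)

Writing the file in UTF-8 matches one of the encoding options described in the model's file-selection step.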