Migrating an OAR database from MySQL to PostgreSQL is not straightforward. A tool like mysql2psql does not do the trick by itself.
A tool has been developed to help migrate an OAR database from MySQL to PostgreSQL: oar-database-migrate.
See: https://github.com/oar-team/python-oar-cli
Once the PostgreSQL database engine is set up and the OAR database has been created along with the oar user (using the oar-database tool), the oar-database-migrate tool can be used as follows:
$ sudo apt-get install python-pip
$ sudo apt-get install python-psycopg2
$ sudo apt-get install python-mysqldb
$ sudo pip install oar-cli
oar-database-migrate --current-db-url mysql://oarreader:PASSWORD@.../oar --new-db-url postgresql://oar:PASSWORD@/...
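For example, assuming the old MySQL server is reachable as mysql-host and the new PostgreSQL database runs locally (hostnames and passwords below are placeholders to adapt to your setup):
$ oar-database-migrate \
    --current-db-url mysql://oarreader:PASSWORD@mysql-host/oar \
    --new-db-url postgresql://oar:PASSWORD@localhost/oar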
The manual migration procedure proposed here is detailed in the steps below.
This procedure has only been tested on a very small database (~1000 jobs), so it may contain some errors or oversights.
New packages (on the frontend):
$ sudo aptitude install oar-user-pgsql
In /etc/oar/oar.conf, edit these 2 lines:
DB_TYPE="Pg"
DB_PORT=5432
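The other DB_* parameters in /etc/oar/oar.conf should already point at the OAR database. As an illustration only (assuming the usual parameter names, with the server hostname and passwords as placeholders), the database section could look like:
DB_TYPE="Pg"
DB_HOSTNAME="<oar server>"
DB_PORT=5432
DB_BASE_NAME="oar"
DB_BASE_LOGIN="oar"
DB_BASE_PASSWD="PASSWORD"
DB_BASE_LOGIN_RO="oar_ro"
DB_BASE_PASSWD_RO="PASSWORD"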
New packages (on the OAR server):
$ sudo aptitude install ruby-dbd-pg libdbd-pg-ruby oar-server-pgsql oar-user-pgsql postgresql postgresql-client
In PostgreSQL's pg_hba.conf, allow the oar and oar_ro users to connect to the oar database from localhost and from the frontend:
host oar oar_ro 127.0.0.1 255.255.255.255 md5
host oar oar 127.0.0.1 255.255.255.255 md5
host oar oar_ro <frontend ip> <netmask> md5
host oar oar <frontend ip> <netmask> md5
In postgresql.conf, make PostgreSQL listen on the network interfaces:
listen_addresses = '*'
This part could be improved: the PostgreSQL database can also be initialized using the oar-database tool.
$ sudo su - postgres
$ createuser -P
Enter name of role to add: oar
Enter password for new role:
Enter it again:
Shall the new role be a superuser? (y/n) n
Shall the new role be allowed to create databases? (y/n) n
Shall the new role be allowed to create more new roles? (y/n) n
CREATE ROLE
$ createuser -P
Enter name of role to add: oar_ro
Enter password for new role:
Enter it again:
Shall the new role be a superuser? (y/n) n
Shall the new role be allowed to create databases? (y/n) n
Shall the new role be allowed to create more new roles? (y/n) n
CREATE ROLE
$ createdb oar
$ sudo /etc/init.d/postgresql reload
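At this point, a connection from the frontend can be checked (replace the placeholder with the OAR server address); a failure here usually points at pg_hba.conf or listen_addresses:
$ psql -U oar_ro -h <oar server ip> oar -c 'SELECT 1;'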
Create the OAR database structure:
$ psql -Uoar -h127.0.0.1 oar
\i /usr/lib/oar/pg_structure.sql
$ psql oar
GRANT ALL PRIVILEGES ON schema, accounting, admission_rules, assigned_resources,
  challenges, event_log_hostnames, event_logs, files, frag_jobs, gantt_jobs_predictions,
  gantt_jobs_predictions_visu, gantt_jobs_resources, gantt_jobs_resources_visu,
  job_dependencies, job_resource_descriptions, job_resource_groups,
  job_state_logs, job_types, jobs, moldable_job_descriptions, queues,
  resource_logs, resources, admission_rules_id_seq, event_logs_event_id_seq,
  files_file_id_seq, job_resource_groups_res_group_id_seq,
  job_state_logs_job_state_log_id_seq, job_types_job_type_id_seq,
  moldable_job_descriptions_moldable_id_seq, resource_logs_resource_log_id_seq,
  resources_resource_id_seq, jobs_job_id_seq
  TO oar;
GRANT SELECT ON schema, accounting, admission_rules, assigned_resources, event_log_hostnames,
  event_logs, files, frag_jobs, gantt_jobs_predictions, gantt_jobs_predictions_visu,
  gantt_jobs_resources, gantt_jobs_resources_visu, job_dependencies,
  job_resource_descriptions, job_resource_groups, job_state_logs, job_types,
  jobs, moldable_job_descriptions, queues, resource_logs, resources, admission_rules_id_seq,
  event_logs_event_id_seq, files_file_id_seq, job_resource_groups_res_group_id_seq,
  job_state_logs_job_state_log_id_seq, job_types_job_type_id_seq,
  moldable_job_descriptions_moldable_id_seq, resource_logs_resource_log_id_seq,
  resources_resource_id_seq, jobs_job_id_seq
  TO oar_ro;
\q
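To verify the grants, one can for instance check that the read-only user is able to run a SELECT (the table is still empty at this point):
$ psql -U oar_ro -h 127.0.0.1 oar -c 'SELECT count(*) FROM resources;'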
Create this cron job:
$ crontab -l
# m h  dom mon dow   command
02 02 * * * vacuumdb -a -f -z
Export all the admission rules to files:
$ for i in <list of admin rule id> ; do oaradmin -x $i -f adminrules_$i ; done
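If you do not know the admission rule ids, one way to list them (assuming the MySQL database is still running and accessible as root) is for instance:
$ mysql -u root oar -e 'SELECT id FROM admission_rules;'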
Stop the OAR server:
$ sudo /etc/init.d/oar-server stop
Update the OAR server configuration in /etc/oar/oar.conf:
DB_TYPE="Pg"
DB_PORT=5432
Dump the MySQL database and stop MySQL:
$ mysqldump --compatible=postgresql --default-character-set=utf8 -r oar_data.mysql -u root oar
$ sudo /etc/init.d/mysql stop
Create the PostgreSQL data file with https://github.com/lanyrd/mysql-postgresql-converter:
$ git clone https://github.com/lanyrd/mysql-postgresql-converter.git
$ python ./mysql-postgresql-converter/db_converter.py oar_data.mysql oar.psql
In the file oar.psql, remove all the CREATE queries and these 2 INSERT queries:
INSERT INTO "gantt_jobs_predictions" VALUES (0,1408970596);
INSERT INTO "gantt_jobs_predictions_visu" VALUES (0,1408970596);
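As a quick check, the following should print nothing once all the CREATE queries have been removed (and the two INSERT queries above should no longer appear in the file):
$ grep -n 'CREATE' oar.psql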
Create all the missing properties in the right order (all the fields between last_available_upto and PRIMARY KEY in oar_data.mysql):
$ grep -A 40 'CREATE TABLE "resources"' oar_data.mysql
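As an illustration only (the exact columns and types depend on your installation), the end of that output may look like:
  `cpu` int(11) DEFAULT NULL,
  `core` int(11) DEFAULT NULL,
  `host` varchar(255) DEFAULT NULL,
  `mem` int(11) DEFAULT NULL,
  `maintenance` varchar(255) DEFAULT NULL,
  `ip` varchar(255) DEFAULT NULL,
  PRIMARY KEY (`resource_id`)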
Append the parameter “-c” if the field is a varchar, or nothing if it is an integer. For example:
$ oarproperty -a cpu
$ oarproperty -a core
$ oarproperty -a host -c
$ oarproperty -a mem
$ oarproperty -a maintenance -c
$ oarproperty -a ip -c
Import the data file:
$ sudo su - postgres
$ psql oar
\i oar.psql
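A quick sanity check, for instance comparing the number of imported jobs with the count in the MySQL dump:
oar=# SELECT count(*) FROM jobs;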
Set the “next val” for all the sequences (implicitly created) in the PostgreSQL database:
SELECT setval('admission_rules_id_seq', max(id)) FROM admission_rules;
ALTER TABLE "admission_rules" ALTER COLUMN "id" SET DEFAULT nextval('admission_rules_id_seq');
SELECT setval('event_logs_event_id_seq', max(event_id)) FROM event_logs;
ALTER TABLE "event_logs" ALTER COLUMN "event_id" SET DEFAULT nextval('event_logs_event_id_seq');
SELECT setval('files_file_id_seq', max(file_id)) FROM files;
ALTER TABLE "files" ALTER COLUMN "file_id" SET DEFAULT nextval('files_file_id_seq');
SELECT setval('job_resource_groups_res_group_id_seq', max(res_group_id)) FROM job_resource_groups;
ALTER TABLE "job_resource_groups" ALTER COLUMN "res_group_id" SET DEFAULT nextval('job_resource_groups_res_group_id_seq');
SELECT setval('job_state_logs_job_state_log_id_seq', max(job_state_log_id)) FROM job_state_logs;
ALTER TABLE "job_state_logs" ALTER COLUMN "job_state_log_id" SET DEFAULT nextval('job_state_logs_job_state_log_id_seq');
SELECT setval('job_types_job_type_id_seq', max(job_type_id)) FROM job_types;
ALTER TABLE "job_types" ALTER COLUMN "job_type_id" SET DEFAULT nextval('job_types_job_type_id_seq');
SELECT setval('jobs_job_id_seq', max(job_id)) FROM jobs;
ALTER TABLE "jobs" ALTER COLUMN "job_id" SET DEFAULT nextval('jobs_job_id_seq');
SELECT setval('moldable_job_descriptions_moldable_id_seq', max(moldable_id)) FROM moldable_job_descriptions;
ALTER TABLE "moldable_job_descriptions" ALTER COLUMN "moldable_id" SET DEFAULT nextval('moldable_job_descriptions_moldable_id_seq');
SELECT setval('resource_logs_resource_log_id_seq', max(resource_log_id)) FROM resource_logs;
ALTER TABLE "resource_logs" ALTER COLUMN "resource_log_id" SET DEFAULT nextval('resource_logs_resource_log_id_seq');
SELECT setval('resources_resource_id_seq', max(resource_id)) FROM resources;
ALTER TABLE "resources" ALTER COLUMN "resource_id" SET DEFAULT nextval('resources_resource_id_seq');
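To verify a sequence, one can for instance compare its current value with the highest id actually used, e.g. for jobs:
oar=# SELECT last_value FROM jobs_job_id_seq;
oar=# SELECT max(job_id) FROM jobs;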
Ensure that only one version is recorded in the “schema” table:
$ psql oar
oar=# select * from schema;
 version | name
---------+------
 2.5.2   |
 2.5.0   |
 2.5.0   |
(3 rows)
oar=# delete from schema where version!='2.5.2';
DELETE 2
oar=# select * from schema;
 version | name
---------+------
 2.5.2   |
(1 row)
Restart the OAR server:
$ sudo /etc/init.d/oar-server start
Delete the admission rules created by the database initialization and re-import the ones exported earlier:
$ oaradmin -d `seq 1 14`
$ for i in <list of admin rule id> ; do oaradmin -a $i -f adminrules_$i ; done
At this point, you should be able to remove the MySQL packages and submit jobs from the frontend.
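A simple end-to-end test from the frontend, for instance (adapt the resource request to your platform):
$ oarsub -l nodes=1,walltime=00:01:00 "sleep 30"
$ oarstat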