
Commit b1c4f12c authored by Casillas Trujillo, Luis Alberto

updated the wien2k documentation page

parent 8ae0ad0c
Merge request !93: updated the wien2k documentation page
@@ -10,7 +10,7 @@
 ## Available modules
 Do note that this WIEN2k module is for VSC5.
 ```
-24.1-intel-2021.9.0-oneapi_vsc5
+wien2k/24.1-intel-2021.9.0-oneapi_vsc5
 ```
 ## Initial setup
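To load the corrected module name, the usual environment-modules command should work; this is a sketch based on the name shown above (note that the job-script excerpt further down loads a hidden variant, `wien2k/.24.1-intel-2021.9.0-oneapi`):

```
module load wien2k/24.1-intel-2021.9.0-oneapi_vsc5
```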
@@ -43,8 +43,8 @@ module load wien2k/.24.1-intel-2021.9.0-oneapi
 export cores_per_node=128
 ################## user setting for parallelization
-export OMP_NUM_THREADS=1
-export mpi_jobs=128 # set it clever according to yourk-points
+export OMP_NUM_THREADS=4
+export mpi_jobs=1   # set this according to your k-points
 #
 # together with the number of nodes this will create a .machines file with
 # cores_per_node*number_of_nodes cores.
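As a worked example of the comment above (my arithmetic and my reading of how the cores are divided among k-point jobs, not taken from the page): assuming a single 128-core VSC5 node, each k-point job uses OMP_NUM_THREADS * mpi_jobs cores.

```
# assumed arithmetic for the settings above, one node:
# total cores             = cores_per_node * number_of_nodes = 128 * 1 = 128
# cores per k-point job   = OMP_NUM_THREADS * mpi_jobs       = 4 * 1   = 4
# concurrent k-point jobs = 128 / 4                          = 32
```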
@@ -80,8 +80,11 @@ line=$(($line+1))
 done
 echo granularity:1 >> .machines
 echo extrafine:1 >> .machines
+echo 'omp_lapw0:32' >> .machines # or set to 16 or 64 parallel jobs for LAPW0
+##### add your commands here
 run_lapw -p -NI -i 1
 ```
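For orientation, a sketch of the kind of `.machines` file such a script produces; the hostname and the number of k-point lines are placeholders I made up, and only the last three lines come from the `echo` commands in the diff above:

```
# one k-point job per line (hostname n4901-001 is a placeholder)
1:n4901-001
1:n4901-001
granularity:1
extrafine:1
omp_lapw0:32
```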
@@ -100,6 +103,8 @@ Check the [user guide](http://wien2k.at/reg_user/textbooks/){:target="_blank"} f
 * It is strongly advised to use parallelisation via OMP_NUM_THREADS first and to switch to mpi-jobs only when the simulations get very large (e.g. supercells of more than 60 atoms). The reason is that OMP_NUM_THREADS (shared-memory parallelisation) works about 2 times faster than mpi-jobs (no shared memory).
+* One can check the performance, timing and errors of a simulation in the case.dayfile. It is a good indicator of whether the parallelization settings should be changed for better performance.
 * One can switch from the ELPA library to ScaLAPACK by changing the second line in the `.in1` or `.in1c` file from `ELPA` to `SCALA`, as shown after this block:
 ```
 6.50 10 6 ELPA pxq BL 64 (R-MT*K-MAX,MAX L IN WF,V-NMT,LIB)
 ```
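After the switch, that second line would read as follows (same values, only the library keyword changed; this follows directly from the instruction above):

```
6.50 10 6 SCALA pxq BL 64 (R-MT*K-MAX,MAX L IN WF,V-NMT,LIB)
```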