Neural network YaLM 100B in practice.


At the end of June, Yandex released to the public a neural network with 100 billion parameters called YaLM 100B. It is the largest GPT-like network in open access. The announcement describes how it was trained, shows the most polished usage examples, and explains what the network can do. But is it any good in practice, and can it be used at home? The article is silent on this; moreover, it is not so simple to run and test it, since roughly 200 GB of GPU RAM is required. This comment on Habré captures the situation most accurately.

It turns out that at Yandex, with all those smart people, nobody even wrote an ordinary how-to. There is no API for the big model, no ready-made stripped-down small model for ordinary people (say, in Google Colab), and no example of how to set the model up or how to generate text. The article just mentions a couple of nuances for nerds, and that is all. It would have been enough to look at how the bank whose name starts with “S” did it and do the same. I got the impression that this model is just one of the failed experiments that it was a pity to throw away, so it was published as Open Source, to show what big models Yandex builds, and look, it is open source!

There are plenty of questions on the internet about how to run yalm, or even try it online, but no answers. I was among the users asking them, so I set about figuring it out, since I needed a way to generate texts for my financial robot: so that it could not only predict values, but also comment on them in words, based on financial reports. Essentially, it would do the same thing financial analysts do, only with artificial intelligence. There are two ways to run yalm.
Rent a cloud server with 200+ GB of GPU RAM, or modify the code and run it with deepspeed zero offload (where the GPU sequentially processes part of the neural network while the rest is kept in CPU RAM or on NVMe). The first is very expensive: about 2,500 rubles per hour, or 1.7 million per month. The second is uncharted territory, since the code for it is not provided in the repository, only hints in the repository's issues, though it is not hard to do. Let's start simple.

Instructions for running YaLM 100B

1. Rent 200 GB of GPU RAM, for example here.


In total, you need at least 200 GB of video memory. 8 × 40 = 320 GB will do; this was the only suitable offer. Less than 200 will not work, more is fine. The arrow points at the CPU RAM; ignore it, it can be anything.
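The arithmetic above can be scripted as a quick sanity check when comparing offers; the 8 and 40 below are just the example configuration (eight 40 GB cards), not a recommendation:

```shell
# Sketch: check that N cards × VRAM per card clears the ~200 GB bar.
# 8 and 40 match the example offer above; adjust for your provider.
GPUS=8
VRAM_PER_GPU_GB=40
TOTAL=$((GPUS * VRAM_PER_GPU_GB))
echo "total GPU RAM: ${TOTAL} GB"
if [ "$TOTAL" -ge 200 ]; then
  echo "enough for YaLM 100B"
else
  echo "not enough, need 200+ GB"
fi
```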

Specify a disk of about 300 GB so there is some spare, and preferably a fast one, because tens of gigabytes of data will be written to and read from it.

When choosing an image, pick Ubuntu ML (Machine Learning). This is mandatory so that the video cards come already configured and nothing extra has to be installed.

When creating the server, there are nuances with quotas: you may get the impression that the hardware is unavailable, when in fact you only need to raise the quotas in the settings. Once the server is deployed (it can take 5-10 minutes), connect to it via ssh, or directly in the web console on the server's page, and run the command:

nvidia-smi

The output should be a table with the video cards, the driver version, and cuda, looking roughly like this. In the header are the driver and cuda versions; on the left are the device numbers, and in the middle is each device's memory size. If you do not see this information, you built the server from the wrong image. Ubuntu ML (Machine Learning) is required, as mentioned above.

2. Clone the repository with YaLM

sudo git clone https://github.com/yandex/YaLM-100B/ yalm

Clone it into your home folder so that you do not have to edit the docker config afterwards. If it is cloned somewhere else, go here and add the path to wherever you cloned it.

3. Download the checkpoints (the trained weights of the base model).

sudo chmod +x ./download/download.sh
sudo bash ./download/download.sh

This takes about an hour. So as not to waste time, open a second ssh session and, in parallel, start building the docker container.

4. Install nvidia-docker 2

Regular docker will not do;
nvidia-docker2 is needed.
https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/install-guide.html

5. Build the container for YaLM

cd yalm
sudo chmod +x ./docker/*
sudo bash ./docker/build.sh

This also takes about an hour.

Life hack: you can download the checkpoints, install docker, and build the container on a cheap server with a single video card. It takes the same amount of time, so you can save a little money: do the build on the cheap server, then delete it and create the combat server reusing the cheap server's disk. That way you will not overpay for the time spent waiting for the build and pulling down the checkpoints.

6. Configure the contents

6.1 Checkpoints

Once the checkpoints have finished downloading, they need to be plugged into the configs. There are two ways: fix the parameters, or move the checkpoints. Everywhere, the checkpoints are expected to be in the project's main directory, so whatever was downloaded must be moved there from the download folder above. From inside the yalm folder, run

mv ./download/yalm100b_checkpoint ./

Or change the paths to the files in the example files:
https://github.com/yandex/YaLM-100B/blob/c91b7d7fe8dbf39c9e307d6d324446d0df136a23/examples/generate_interactive.sh#L8-L9
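The second option can be done with a one-line sed edit. The variable names and paths below are assumptions about what sits on the linked lines L8-L9, so check them against the real script; the sketch operates on a scratch copy to stay self-contained:

```shell
# Sketch of the alternative to `mv`: rewrite the path lines in the
# example script instead of moving the checkpoints. Variable names and
# paths are assumptions; compare with L8-L9 of the linked file.
# A scratch copy stands in for examples/generate_interactive.sh here.
cat > gen_interactive_sample.sh <<'EOF'
VOCAB_PATH="./download/yalm100b_checkpoint/vocab/voc_100b.sp"
CHECKPOINT_PATH="./download/yalm100b_checkpoint/weights"
EOF

# Point both paths at the project root, where step 6.1 would move them.
sed -i 's|"\./download/|"./|g' gen_interactive_sample.sh
cat gen_interactive_sample.sh
```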

6.2 Video cards

Check that the video cards are set up correctly. If you have eight of them, nothing needs to change. If the number differs, change these lines: in the second line are the numbers of the devices in use (you can check them in nvidia-smi, which you have already run), and in the fourth line is their count.
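For instance, with four cards instead of eight the edit would look roughly like this. The variable names here are assumptions based on the description above, not the script's literal contents:

```shell
# Hypothetical config fragment: the device-index list and the count
# must agree with each other and with what nvidia-smi shows.
export CUDA_VISIBLE_DEVICES=0,1,2,3   # device numbers from nvidia-smi
NUM_GPUS=4                            # must match the list above
```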

7. Run the docker container

From inside the yalm folder, run the command

sudo bash ./docker/run.sh

If everything is OK, you will land inside a container, in which you need to go to the yalm folder in your home directory.

cd ~/yalm

8. Run an example from YaLM 100B

Everything is ready to launch one of the examples. They are described
here.

chmod +x ./examples/generate_interactive.sh
./examples/generate_interactive.sh

Be patient: it remains to wait another 10-15 minutes while the GPT model starts up and loads the weights from the checkpoints.

When loading finishes, MegatronML prompts you to enter a context to generate from. Be careful when typing: under certain circumstances an error occurs, the program crashes, and the launch has to be started over. That is why it is better to use the examples that take their input text from a file.

9. Results

It looks interesting. Of course, these are just good examples; I ran the test on a variety of samples. As expected, the better the context, the more meaningful the generated text. The full set of experimental generations can be viewed at the links:

Price-wise, it cost me about 9 thousand rubles to rent servers of various capacities, from training and preparation through to generation. A particular disappointment is that nothing happens instantly: startup takes a very long time, and texts do not generate as quickly as one would like, given the hourly cost of the server.

How to run YaLM without 200 GB of GPU RAM?

You need to add deepspeed zero offload to the config. For those who know what we are talking about, this is very easy to do; for everyone else, it is far from a trivial task. It is important to know that the offload can go to CPU RAM or to NVMe. NVMe can be forgotten about for now, because a very large amount of data is being processed and the disk cannot keep up. Zero offload to CPU is more realistic. True, for that you need 200+ GB of CPU RAM on hand, which is not exactly cheap either. And a single text will take about 20-40 minutes to generate, since I have not yet managed to parallelize it across two video cards. As you can see in the screenshot below, only one video card took part in the generation, and even then only with a quarter of its memory. It remains to be seen why all 24 GB are not used.
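As a starting point, a ZeRO stage-3 config with CPU offload could look roughly like this. This is a minimal sketch with illustrative values, not settings tested against YaLM 100B; consult the DeepSpeed documentation for the full set of knobs:

```shell
# Sketch: write a minimal DeepSpeed ZeRO stage-3 config that offloads
# parameters and optimizer state to CPU RAM, as discussed above.
# All values are assumptions, not tuned for YaLM 100B.
cat > ds_config.json <<'EOF'
{
  "train_micro_batch_size_per_gpu": 1,
  "fp16": { "enabled": true },
  "zero_optimization": {
    "stage": 3,
    "offload_param": { "device": "cpu", "pin_memory": true },
    "offload_optimizer": { "device": "cpu", "pin_memory": true }
  }
}
EOF
echo "wrote ds_config.json"
```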
Well, in conclusion, I will say that it is possible to run it even on a single RTX 3070 TI. But there is no particular point in doing so, since NVMe will not let you quickly churn through 150 GB of data in swap, with another 96 GB of RAM on top.

Conclusions

Honestly, I will keep trying to find optimal ways to launch it. But so far I have concluded that YaLM 100B is too expensive and too slow for my tasks. For the same money, people will write much more, and much better. But I think that is temporary; we shall see. If you need help launching or setting up yalm, or want to see the results on your own context examples, write to me by mail or telegram.

pskucherov

  1. Olha

     An article on a highly topical subject! Thanks.

  2. Danila

     Cool article! Thanks to the author!

  3. Dmitry

     THANK YOU!!!
     I spent three days looking for this information.
     Is there nothing like this about RuGPT3 and Porfirich?