{"id":833,"date":"2014-02-21T03:00:44","date_gmt":"2014-02-21T01:00:44","guid":{"rendered":"http:\/\/www.shukko.com\/x3\/?p=833"},"modified":"2014-05-24T02:13:26","modified_gmt":"2014-05-24T00:13:26","slug":"8-disk-ile-veya-4-disk-ile-ile-veya-canin-nasil-isterse-proxmox-uzerinde-software-raid-10-kurulumu","status":"publish","type":"post","link":"https:\/\/www.shukko.com\/x3\/2014\/02\/21\/8-disk-ile-veya-4-disk-ile-ile-veya-canin-nasil-isterse-proxmox-uzerinde-software-raid-10-kurulumu\/","title":{"rendered":"Software RAID 10 Setup on Proxmox with 8 Disks, 4 Disks, or However Many You Like"},"content":{"rendered":"<p>We have a suitable machine for this installation.<\/p>\n<p>It has 4 data disks of 2 TB each,<\/p>\n<p>and we want to run these disks as software RAID 10 under Proxmox.<\/p>\n<p>In an earlier post I first installed Debian Wheezy and then built an LVM RAID on top of it to get the job done,<\/p>\n<p>but that approach no longer appeals to me; that kind of RAID setup causes trouble during upgrades.<\/p>\n<p>So this time the plan is to install Proxmox normally onto the first of the 4 disks, then convert the system to RAID 10 while Proxmox is running.<\/p>\n<p>The steps are as follows:<\/p>\n<p>1- Download the current Proxmox ISO<br \/>\n2- Do a normal Proxmox installation onto \/dev\/sda<br \/>\n3- Once everything is up and running, connect to the system over SSH<br \/>\n4- Configure the required Proxmox repositories, update the system, and finally install the mdadm package<br \/>\n<code><br \/>\nnano \/etc\/apt\/sources.list<br \/>\n------------<br \/>\ndeb http:\/\/ftp.de.debian.org\/debian wheezy main contrib<br \/>\n# security updates<br \/>\ndeb http:\/\/security.debian.org\/ wheezy\/updates main contrib<br \/>\n# PVE pve-no-subscription repository provided by proxmox.com, NOT recommended for production use<br \/>\ndeb http:\/\/download.proxmox.com\/debian wheezy pve-no-subscription<br \/>\n-------------<\/code><\/p>\n<p>apt-get update<br \/>\napt-get dist-upgrade<\/p>\n<p>apt-get install mdadm<\/p>\n<p>5- At this stage we copy the partition table from disk 1 to disks 2, 3 and 4.<br \/>\nFirst, though, in case the old disks carry an mdadm configuration from an earlier setup, let's zero them out; this command wipes the old partitions and the MBR:<br \/>\n<code><br \/>\n# dd if=\/dev\/zero of=\/dev\/sdx bs=512 count=1<br \/>\n<\/code><br \/>\nThen we copy the partition tables; for 4 disks it looks like this:<br \/>\n<code><br \/>\nsfdisk -d \/dev\/sda | sfdisk -f \/dev\/sdb<br \/>\nsfdisk -d \/dev\/sda | sfdisk -f \/dev\/sdc<br \/>\nsfdisk -d \/dev\/sda | sfdisk -f \/dev\/sdd<br \/>\n<\/code><\/p>\n<p>NOTE NOTE NOTE \/\/ UPDATE UPDATE<\/p>\n<p>IF YOUR PARTITIONS TURN OUT TO BE GPT:<\/p>\n<p>Install gdisk.<\/p>\n<p>Copy the partition scheme from <i>\/dev\/sda<\/i> to <i>\/dev\/sdb<\/i>:<\/p>\n<div class=\"highlight\">\n<pre><code class=\"text language-text\" data-lang=\"text\">sgdisk -R=\/dev\/sdb \/dev\/sda\r\n<\/code><\/pre>\n<\/div>\n<p>This one is mandatory; it randomizes the GUIDs on the copy:<\/p>\n<div class=\"highlight\">\n<pre><code class=\"text language-text\" data-lang=\"text\">sgdisk -G \/dev\/sdb\r\n<\/code><\/pre>\n<\/div>\n<p>&nbsp;<\/p>\n<p>6- Let's mark the partitions on the other 3 disks as RAID:<br \/>\n<code><br \/>\nsfdisk -c \/dev\/sdb 1 fd<br \/>\nsfdisk -c \/dev\/sdb 2 fd<br \/>\nsfdisk -c \/dev\/sdc 1 fd<br \/>\nsfdisk -c \/dev\/sdc 2 fd<br \/>\nsfdisk -c \/dev\/sdd 1 fd<br \/>\nsfdisk -c \/dev\/sdd 2 fd<br \/>\n<\/code><\/p>\n<p>NOTE NOTE NOTE \/\/ UPDATE UPDATE<\/p>\n<p>For GPT I did it like this.<br \/>\nMaybe there is an easier way; I couldn't find one, I'm a noob.<\/p>\n<p>gdisk \/dev\/sdb<br \/>\npress t<br \/>\nselect partition 1 &gt; set it to FD00<\/p>\n<p>After doing this for every partition on every disk, press w to save and q to quit.<\/p>\n<p>7- Let's INITIALIZE our RAID configuration.<br \/>\nIMPORTANT NOTE: if an earlier disk layout used RAID, those arrays may have been written into mdadm.conf automatically when mdadm was installed, so after initializing the arrays we need to inspect \/etc\/mdadm\/mdadm.conf.<br \/>\nIf stale RAID array UUID entries are in there, we must delete them and record the new layout instead.<br \/>\n<code><br \/>\nmdadm --create \/dev\/md0 --level=1 --raid-disks=4 missing \/dev\/sdb1 \/dev\/sdc1 \/dev\/sdd1<br \/>\nmdadm --create \/dev\/md1 --level=10 --raid-disks=4 missing \/dev\/sdb2 \/dev\/sdc2 \/dev\/sdd2<br \/>\n<\/code><br \/>\nLet's look at the conf file, remove any old arrays, and record the new layout:<br \/>\n<code><br \/>\nmdadm --examine --scan &gt;&gt; \/etc\/mdadm\/mdadm.conf<br \/>\n<\/code><br \/>\nThat's done.<\/p>\n<p>8- Let's move \/boot onto \/dev\/md0 and update fstab so the system boots from \/dev\/md0:<br \/>\n<code><br \/>\nmkfs.ext3 \/dev\/md0<br \/>\nmkdir \/mnt\/md0<br \/>\nmount \/dev\/md0 \/mnt\/md0<br \/>\ncp -ax \/boot\/* \/mnt\/md0<br \/>\n<\/code><br \/>\nThen:<br \/>\n<code><br \/>\nnano \/etc\/fstab (it should look like this; we simply comment out the UUID line)<br \/>\n-----------------<br \/>\n# \/dev\/pve\/root \/ ext3 errors=remount-ro 0 1<br \/>\n\/dev\/pve\/data \/var\/lib\/vz ext3 defaults 0 1<br \/>\n#UUID=cc425576-edf6-4895-9aed-ccfd89aeb0fb \/boot ext3 defaults 0 1<br \/>\n\/dev\/md0 \/boot ext3 defaults 0 1<br \/>\n\/dev\/pve\/swap none swap sw 0 0<br \/>\nproc \/proc proc defaults 0 0<br \/>\n-------------------<br \/>\n<\/code><\/p>\n<p>9- Reboot the system.<br \/>\nIf everything goes well, the system will now boot from \/dev\/md0.<br \/>\nBravo, a serious milestone is behind us |:)<\/p>\n<p>Once the system is back up, let's run the checks:<br \/>\n<code><br \/>\nmount | grep boot<br \/>\nshould print a line like the one below:<br \/>\n\/dev\/md0 on \/boot type ext3 (rw,relatime,errors=continue,user_xattr,acl,barrier=0,data=ordered)<br \/>\n<\/code><br \/>\nIf we see this, the step is complete.<\/p>\n<p>10- Now let's tell GRUB that we want to boot from \/dev\/md0; in short, enter the commands below:<\/p>\n<p><code><br \/>\necho '# customizations' &gt;&gt; \/etc\/default\/grub<br \/>\necho 'GRUB_DISABLE_LINUX_UUID=true' &gt;&gt; \/etc\/default\/grub<br \/>\necho 'GRUB_PRELOAD_MODULES=\"raid dmraid\"' &gt;&gt; \/etc\/default\/grub<br \/>\necho raid1 &gt;&gt; \/etc\/modules<br \/>\necho raid10 &gt;&gt; \/etc\/modules<br \/>\necho raid1 &gt;&gt; \/etc\/initramfs-tools\/modules<br \/>\necho raid10 &gt;&gt; \/etc\/initramfs-tools\/modules<br \/>\ngrub-install \/dev\/sda<br \/>\ngrub-install \/dev\/sdb<br \/>\ngrub-install \/dev\/sdc<br \/>\ngrub-install \/dev\/sdd<br \/>\nupdate-grub<br \/>\nupdate-initramfs -u<br \/>\n<\/code><br \/>\nDone.<\/p>\n<p>11- Now we must add \/dev\/sda1 into the RAID array:<br \/>\n<code><br \/>\nsfdisk -c \/dev\/sda 1 fd<br \/>\nmdadm --add \/dev\/md0 \/dev\/sda1<br \/>\n<\/code><\/p>\n<p>12- Before the next step: since the upcoming LVM migration takes a very long time, it is worth installing and starting<br \/>\nscreen<br \/>\nand doing the work inside it.<\/p>\n<p>We will move the LVM onto \/dev\/md1:<\/p>\n<p><code><br \/>\npvcreate \/dev\/md1<br \/>\nvgextend pve \/dev\/md1<br \/>\npvmove \/dev\/sda2 \/dev\/md1<br \/>\n<\/code><\/p>\n<p>pvmove will take a very long time. Best to go to sleep in the meantime, or step outside for some fresh air. With 2 TB disks and a current CPU it will take at least 2-3 hours \ud83d\ude42<\/p>\n<p>Once it finishes, we reduce and remove the pve PV on sda2:<br \/>\n<code><br \/>\nvgreduce pve \/dev\/sda2<br \/>\npvremove \/dev\/sda2<br \/>\n<\/code><\/p>\n<p>13- In the final stage we add \/dev\/sda2 into the RAID as well:<br \/>\n<code><br \/>\nsfdisk --change-id \/dev\/sda 2 fd<br \/>\nmdadm --add \/dev\/md1 \/dev\/sda2<br \/>\n<\/code><\/p>\n<p>14- After that we can sit back and watch the RAID rebuild \ud83d\ude42<br \/>\n<code><br \/>\nwatch -n 5 cat \/proc\/mdstat<br \/>\n<\/code><br \/>\nWe can even speed it up a bit if we like:<br \/>\n<code><br \/>\necho 800000 &gt; \/proc\/sys\/dev\/raid\/speed_limit_min<br \/>\necho 1600000 &gt; \/proc\/sys\/dev\/raid\/speed_limit_max<br \/>\n<\/code><\/p>\n<p>Enjoy.<br \/>\nYOUR PROXMOX SOFTWARE RAID 10 SETUP IS READY FOR USE<\/p>\n<p>ADDENDUM:<br \/>\n15&#8211; After all this, df -h reports the following:<br \/>\n<code><br \/>\nFilesystem Size Used Avail Use% Mounted on<br \/>\nudev 10M 0 10M 0% \/dev<br \/>\ntmpfs 3.2G 416K 3.2G 1% \/run<br \/>\n\/dev\/mapper\/pve-root 20G 1.2G 18G 7% \/<br \/>\ntmpfs 5.0M 0 5.0M 0% \/run\/lock<br \/>\ntmpfs 6.3G 3.1M 6.3G 1% \/run\/shm<br \/>\n\/dev\/mapper\/pve-data 1.8T 196M 1.8T 1% \/var\/lib\/vz<br \/>\n\/dev\/md0 495M 58M 412M 13% \/boot<br \/>\n\/dev\/fuse 30M 12K 30M 1% \/etc\/pve<br \/>\n<\/code><br \/>\n\/var\/lib\/vz\/ is only 2 TB? Something is off; it should have been 4 TB \ud83d\ude42<br \/>\nWell, that's normal: the rest of our RAID 10 capacity is sitting there as free VG space. 
SEE:<br \/>\n<code><br \/>\nvgdisplay<br \/>\n--- Volume group ---<br \/>\nVG Name pve<br \/>\nSystem ID<br \/>\nFormat lvm2<br \/>\nMetadata Areas 1<br \/>\nMetadata Sequence No 11<br \/>\nVG Access read\/write<br \/>\nVG Status resizable<br \/>\nMAX LV 0<br \/>\nCur LV 3<br \/>\nOpen LV 3<br \/>\nMax PV 0<br \/>\nCur PV 1<br \/>\nAct PV 1<br \/>\nVG Size 3.64 TiB<br \/>\nPE Size 4.00 MiB<br \/>\nTotal PE 953544<br \/>\nAlloc PE \/ Size 472709 \/ 1.80 TiB<br \/>\nFree PE \/ Size 480835 \/ 1.83 TiB<br \/>\nVG UUID 16k1ou-8jQ7-OB63-Jesb-s7p4-SOPW-deKGGc<br \/>\n<\/code><\/p>\n<p>Very nice. So what do we need to do? We need to fold this free space into our existing LVM volume and make it usable under \/var\/lib\/vz\/.<br \/>\nAt this stage we will draw on our deep Linux LVM experience.<\/p>\n<p>First, let's survey the situation with the standard commands:<\/p>\n<p>lvdisplay<br \/>\npvdisplay<br \/>\nvgdisplay<\/p>\n<p><code><br \/>\nroot@pmd04:~# vgs<br \/>\nVG #PV #LV #SN Attr VSize VFree<br \/>\npve 1 3 0 wz--n- 3.64t 1.83t<br \/>\nroot@pmd04:~# pvs<br \/>\nPV VG Fmt Attr PSize PFree<br \/>\n\/dev\/md1 pve lvm2 a-- 3.64t 1.83t<br \/>\nroot@pmd04:~# lvs<br \/>\nLV VG Attr LSize Pool Origin Data% Move Log Copy% Convert<br \/>\ndata pve -wi-ao--- 1.78t<br \/>\nroot pve -wi-ao--- 20.00g<br \/>\nswap pve -wi-ao--- 8.00g<br \/>\n<\/code><\/p>\n<p>Then<br \/>\nlet's extend into the free VG space and add it to our LV:<br \/>\n<code><br \/>\nroot@pmd04:~# lvextend -l +100%FREE \/dev\/pve\/data<br \/>\nExtending logical volume data to 3.61 TiB<br \/>\nLogical volume data successfully resized<br \/>\nroot@pmd04:~# resize2fs \/dev\/pve\/data<br \/>\nresize2fs 1.42.5 (29-Jul-2012)<br \/>\nFilesystem at \/dev\/pve\/data is mounted on \/var\/lib\/vz; on-line resizing required<br \/>\nold_desc_blocks = 118, new_desc_blocks = 232<br \/>\nPerforming an on-line resize of \/dev\/pve\/data to 969089024 (4k) blocks.<br \/>\nThe filesystem on \/dev\/pve\/data is now 969089024 blocks long.<br \/>\nroot@pmd04:~# df -h<br \/>\nFilesystem Size Used Avail Use% Mounted on<br \/>\nudev 10M 0 10M 0% \/dev<br \/>\ntmpfs 3.2G 416K 3.2G 1% \/run<br \/>\n\/dev\/mapper\/pve-root 20G 1.2G 18G 7% \/<br \/>\ntmpfs 5.0M 0 5.0M 0% \/run\/lock<br \/>\ntmpfs 6.3G 3.1M 6.3G 1% \/run\/shm<br \/>\n\/dev\/mapper\/pve-data 3.6T 197M 3.6T 1% \/var\/lib\/vz<br \/>\n\/dev\/md0 495M 58M 412M 13% \/boot<br \/>\n\/dev\/fuse 30M 12K 30M 1% \/etc\/pve<br \/>\nroot@pmd04:~#<br \/>\n<\/code><\/p>\n<p>Turned out pretty nice, didn't it?<br \/>\nYes, it did.<br \/>\nAll right then |:)<\/p>\n<p>&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;-<\/p>\n<p>ADDENDUM &#8211; the GPT part, from a German write-up<\/p>\n<p>&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;-<\/p>\n<h2>Proxmox 3.1 on Softraid with GPT<\/h2>\n<p><a class=\"st_tag internal_tag\" title=\"Posts tagged with Proxmox\" href=\"http:\/\/alexunil.net\/tag\/proxmox\/\" rel=\"tag\">Proxmox<\/a> does not officially support <a class=\"st_tag internal_tag\" title=\"Posts tagged with softraid\" href=\"http:\/\/alexunil.net\/tag\/softraid\/\" rel=\"tag\">Softraid<\/a>, but you can convert it into a <a class=\"st_tag internal_tag\" title=\"Posts tagged with softraid\" href=\"http:\/\/alexunil.net\/tag\/softraid\/\" rel=\"tag\">Softraid<\/a> after the installation:<br \/>\n<a title=\"proxmox softraid\" href=\"http:\/\/boffblog.wordpress.com\/2013\/08\/22\/how-to-install-proxmox-ve-3-0-on-software-raid\/\">http:\/\/boffblog.wordpress.com\/2013\/08\/22\/how-to-install-proxmox-ve-3-0-on-software-raid\/<\/a><\/p>\n<p>With large disks, however, proxmox uses <a title=\"GUID Partition Table\" href=\"http:\/\/de.wikipedia.org\/wiki\/GUID_Partition_Table\">GPT<\/a> for partitioning. 
Therefore you already get an error message when copying the partition table:<br \/>\n\u201cWARNING: <a class=\"st_tag internal_tag\" title=\"Posts tagged with GPT\" href=\"http:\/\/alexunil.net\/tag\/gpt\/\" rel=\"tag\">GPT<\/a> (<a class=\"st_tag internal_tag\" title=\"Posts tagged with GUID Partition Table\" href=\"http:\/\/alexunil.net\/tag\/guid-partition-table\/\" rel=\"tag\">GUID Partition Table<\/a>) detected on \u2018\/dev\/sda\u2019! The util sfdisk doesn\u2019t support <a class=\"st_tag internal_tag\" title=\"Posts tagged with GPT\" href=\"http:\/\/alexunil.net\/tag\/gpt\/\" rel=\"tag\">GPT<\/a>. Use GNU Parted.\u201d<br \/>\nThe remedy is to use gdisk. What exactly the 1st partition is used for I don't know. In my case \/boot was on \/dev\/sda2 and the lvm volumes were on \/dev\/sda3.<br \/>\nSo I used the following commands:<\/p>\n<p><code>apt-get update<br \/>\napt-get dist-upgrade<br \/>\napt-get install mdadm gdisk<br \/>\nsgdisk -R \/dev\/sdb \/dev\/sda <\/code>!!!CAUTION: mind the argument order; here the table is copied from right to left<code><br \/>\nsgdisk -G \/dev\/sdb<br \/>\ndd if=\/dev\/sda1 of=\/dev\/sdb1 <\/code>NECESSARY?<code><br \/>\nsgdisk -t 2:fd00 \/dev\/sdb<br \/>\nsgdisk -t 3:fd00 \/dev\/sdb<\/code><br \/>\nReboot necessary?<br \/>\n<code><br \/>\nmdadm --create \/dev\/md0 --level=1 --raid-disks=2 missing \/dev\/sdb2<br \/>\nmdadm --create \/dev\/md1 --level=1 --raid-disks=2 missing \/dev\/sdb3<br \/>\nmkfs.ext3 \/dev\/md0<br \/>\nmkdir \/mnt\/md0<br \/>\nmount \/dev\/md0 \/mnt\/md0<br \/>\ncp -ax \/boot\/* \/mnt\/md0<\/code><br \/>\nEdit \/etc\/fstab and replace the UUID in front of \/boot with \/dev\/md0,<br \/>\nthen reboot again!<br \/>\n<code><br \/>\necho 'GRUB_DISABLE_LINUX_UUID=true' &gt;&gt; \/etc\/default\/grub<br \/>\necho 'GRUB_PRELOAD_MODULES=\"raid dmraid\"' &gt;&gt; \/etc\/default\/grub<br \/>\necho raid1 &gt;&gt; \/etc\/modules<br \/>\necho raid1 &gt;&gt; \/etc\/initramfs-tools\/modules<br \/>\ngrub-install \/dev\/sda<br \/>\ngrub-install \/dev\/sdb<br \/>\nupdate-grub<br \/>\nupdate-initramfs -u<br \/>\nmdadm --add \/dev\/md0 \/dev\/sda2<br \/>\npvcreate \/dev\/md1<br \/>\nvgextend pve \/dev\/md1<br \/>\npvmove \/dev\/sda3 \/dev\/md1<br \/>\nvgreduce pve \/dev\/sda3<br \/>\npvremove \/dev\/sda3<br \/>\nsgdisk -t 3:fd00 \/dev\/sda<br \/>\nmdadm --add \/dev\/md1 \/dev\/sda3<br \/>\ncat \/proc\/mdstat<\/code><\/p>\n<hr \/>\n<p>&nbsp;<\/p>\n<hr \/>\n<p>&nbsp;<\/p>\n<p>UPDATE, 23 MAY 2014<\/p>\n<p>I am thoroughly sick of this business<\/p>\n<p>But only I know how much I struggled with it \ud83d\ude42<\/p>\n<p>So let me write down what I know once more, again, from the top<\/p>\n<p>this time again with 8 disks, working from my .bash_history file<\/p>\n<p>Everything above is in there, with short little notes<\/p>\n<p>There are also a couple of tricks<\/p>\n<p>Keeping those in mind and applying them later would be a sound decision.<\/p>\n<p>I have decided not to write it.<\/p>\n<p>Because it really wore me down<\/p>\n<p>I'll struggle through it and do it again some other time&#8230;<\/p>\n<p>&nbsp;<\/p>\n","protected":false},"excerpt":{"rendered":"<p>We have a suitable machine for this installation. It has 4 data disks of 2 TB each, and we want to run these disks as software RAID 10 under Proxmox. In an earlier post I first installed Debian Wheezy and then built an LVM RAID on top of it to get the job done, but that approach no longer appeals to me; that kind of RAID setup causes trouble during upgrades. 
[&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[1],"tags":[],"class_list":["post-833","post","type-post","status-publish","format-standard","hentry","category-kategerisiz"],"_links":{"self":[{"href":"https:\/\/www.shukko.com\/x3\/wp-json\/wp\/v2\/posts\/833","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.shukko.com\/x3\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.shukko.com\/x3\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.shukko.com\/x3\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.shukko.com\/x3\/wp-json\/wp\/v2\/comments?post=833"}],"version-history":[{"count":10,"href":"https:\/\/www.shukko.com\/x3\/wp-json\/wp\/v2\/posts\/833\/revisions"}],"predecessor-version":[{"id":907,"href":"https:\/\/www.shukko.com\/x3\/wp-json\/wp\/v2\/posts\/833\/revisions\/907"}],"wp:attachment":[{"href":"https:\/\/www.shukko.com\/x3\/wp-json\/wp\/v2\/media?parent=833"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.shukko.com\/x3\/wp-json\/wp\/v2\/categories?post=833"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.shukko.com\/x3\/wp-json\/wp\/v2\/tags?post=833"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}