The DNS was created in 1983, and in 1985 the first top-level domains went into use. The first top-level domains were COM, ORG, EDU, GOV, MIL and the country-code TLDs (ccTLDs).

In April 1985, cmu.edu, purdue.edu, rice.edu and ucla.edu became the first registered .edu domain names.

The first .gov was css.gov and was registered in June 1985.

The first .org was mitre.org and was registered in July 1985.

Now for the first .com: it was symbolics.com, registered on March 15 1985, and it still happens to be up and running, although it is not much to look at.

Now for the first 100 registered domains:

1. SYMBOLICS.COM - March 15 1985
2. BBN.COM - April 24 1985
3. THINK.COM - May 24 1985
4. MCC.COM - July 11 1985
5. DEC.COM - September 30 1985
6. NORTHROP.COM - November 7 1985
7. XEROX.COM - January 9 1986
8. SRI.COM - January 17 1986
9. HP.COM - March 3 1986
10. BELLCORE.COM - March 5 1986
11. IBM.COM - March 19 1986
12. SUN.COM - March 19 1986
13. INTEL.COM - March 25 1986
14. TI.COM - March 25 1986
15. ATT.COM - April 25 1986
16. GMR.COM - May 8 1986
17. TEK.COM - May 8 1986
18. FMC.COM - July 10 1986
19. UB.COM - July 10 1986
20. BELL-ATL.COM - August 5 1986
21. GE.COM - August 5 1986
22. GREBYN.COM - August 5 1986
23. ISC.COM - August 5 1986
24. NSC.COM - August 5 1986
25. STARGATE.COM - August 5 1986
26. BOEING.COM - September 2 1986
27. ITCORP.COM - September 18 1986
28. SIEMENS.COM - September 29 1986
29. PYRAMID.COM - October 18 1986
30. ALPHACDC.COM - October 27 1986
31. BDM.COM - October 27 1986
32. FLUKE.COM - October 27 1986
33. INMET.COM - October 27 1986
34. KESMAI.COM - October 27 1986
35. MENTOR.COM - October 27 1986
36. NEC.COM - October 27 1986
37. RAY.COM - October 27 1986
38. ROSEMOUNT.COM - October 27 1986
39. VORTEX.COM - October 27 1986
40. ALCOA.COM - November 5 1986
41. GTE.COM - November 5 1986
42. ADOBE.COM - November 17 1986
43. AMD.COM - November 17 1986
44. DAS.COM - November 17 1986
45. DATA-IO.COM - November 17 1986
46. OCTOPUS.COM - November 17 1986
47. PORTAL.COM - November 17 1986
48. TELTONE.COM - November 17 1986
49. 3COM.COM - December 11 1986
50. AMDAHL.COM - December 11 1986
51. CCUR.COM - December 11 1986
52. CI.COM - December 11 1986
53. CONVERGENT.COM - December 11 1986
54. DG.COM - December 11 1986
55. PEREGRINE.COM - December 11 1986
56. QUAD.COM - December 11 1986
57. SQ.COM - December 11 1986
58. TANDY.COM - December 11 1986
59. TTI.COM - December 11 1986
60. UNISYS.COM - December 11 1986
61. CGI.COM - January 19 1987
62. CTS.COM - January 19 1987
63. SPDCC.COM - January 19 1987
64. APPLE.COM - February 19 1987
65. NMA.COM - March 4 1987
66. PRIME.COM - March 4 1987
67. PHILIPS.COM - April 4 1987
68. DATACUBE.COM - April 23 1987
69. KAI.COM - April 23 1987
70. TIC.COM - April 23 1987
71. VINE.COM - April 23 1987
72. NCR.COM - April 30 1987
73. CISCO.COM - May 14 1987
74. RDL.COM - May 14 1987
75. SLB.COM - May 20 1987
76. PARCPLACE.COM - May 27 1987
77. UTC.COM - May 27 1987
78. IDE.COM - June 26 1987
79. TRW.COM - July 9 1987
80. UNIPRESS.COM - July 13 1987
81. DUPONT.COM - July 27 1987
82. LOCKHEED.COM - July 27 1987
83. ROSETTA.COM - July 28 1987
84. TOAD.COM - August 18 1987
85. QUICK.COM - August 31 1987
86. ALLIED.COM - September 3 1987
87. DSC.COM - September 3 1987
88. SCO.COM - September 3 1987
89. GENE.COM - September 22 1987
90. KCCS.COM - September 22 1987
91. SPECTRA.COM - September 22 1987
92. WLK.COM - September 22 1987
93. MENTAT.COM - September 30 1987
94. WYSE.COM - October 14 1987
95. CFG.COM - November 2 1987
96. MARBLE.COM - November 9 1987
97. CAYMAN.COM - November 16 1987
98. ENTITY.COM - November 16 1987
99. KSR.COM - November 24 1987
100. NYNEXST.COM - November 30 1987

The problem is not in the AWBS IRRP module; all new domains on IRRP default to
renewalmode=AUTORENEW

How do you disable auto-renew for all domains on IRRP?

The commands are listed in the API SMTP-DIRECT Manual under REGISTRAR DOCUMENTS (see the HOME link in the side menu).

You can do individual domain updates by clicking DOMAINS in the side menu of your web interface, followed by SET RENEWALMODE. Enter the domain name and select the option you want (DEFAULT/AUTO-RENEW, AUTO-DELETE, AUTO-EXPIRE).

You may do multiple domains using the batch option. Click MISCELLANEOUS in the side menu of the web interface, followed by EXECUTE BATCH, and then follow the steps below:

1) In the command section enter the following:

command = SetDomainRenewalmode
renewalmode = AUTORENEW or AUTOEXPIRE or AUTODELETE

2) In batch parameter enter: domain

3) In the value list enter domain names to be modified. Please remember to list only one domain per line.

4) Hit EXECUTE to process.
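
For example, to switch two domains to AUTOEXPIRE in one batch (the domain names below are placeholders), the form would be filled in like this:

command = SetDomainRenewalmode
renewalmode = AUTOEXPIRE

batch parameter: domain

value list:
example1.com
example2.net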

There is no renewal reminder feature, as this is a reseller account and resellers are expected to keep track of their registered domains. There is a sort feature you can use to see which domains are expiring or need renewal. Please make sure to set RENEWALMODE back to DEFAULT for those domains that you wish to renew; otherwise they will still be EXPIRED or DELETED, depending on what their RENEWALMODE is set to.

Regards,
HM

If the OpenVZ VPS containers on a Proxmox server suddenly go read-only file system, there is no need to despair right away.
It means /dev/pve/data needs an fsck.
First, if the machine is remote, hook up a KVM, or get your monitor and keyboard ready; then,
ignoring any kernel panics, run:

fsck /dev/pve/data

It will take some time, of course,
but in the end everything will work…
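
If you can, it is safer to stop the containers and unmount the volume before checking. A minimal sketch, assuming the default Proxmox/OpenVZ layout where /dev/pve/data is mounted on /var/lib/vz (adjust the init script name and mount point if yours differ):

/etc/init.d/vz stop
umount /var/lib/vz
fsck -y /dev/pve/data
mount /var/lib/vz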

There was definitely enough space on the device where the locks are stored (default /usr/local/apache2/logs/). I tried explicitly setting different lock files using the LockFile directive, but this did not help. I also tried a non-default AcceptMutex (flock), which solved the accept-lock issue but ended in the rewrite_log_lock issue.

Only a reboot of the system helped me out of my crisis.

Solution: there were myriad semaphore arrays left behind, owned by my Apache user.

ipcs -s | grep apache

Removing these semaphores immediately solved the problem.


ipcs -s | grep apache | perl -e 'while (<>) { @a=split(/\s+/); print `ipcrm sem $a[1]`}'
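
The same cleanup can also be done without Perl; an equivalent sketch using awk and the newer 'ipcrm -s <semid>' syntax:

for id in $(ipcs -s | awk '/apache/ {print $2}'); do ipcrm -s "$id"; done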

Helm Server NTFS Folder Permissions

Here are some of the folder permissions needed to secure a Windows Server running the HELM control panel.

C Drive Root

SYSTEM – FULL
Administrators – FULL

C:\Domains

SYSTEM – FULL INHERITED
Administrators – FULL INHERITED
IIS_WPG – Read, Execute

C:\Domains\domain.com

SYSTEM – FULL INHERITED
Administrators – FULL INHERITED
domain.com_web – Read, Execute, Write, Modify
domain.com – Read, Execute, Write, Modify

C:\PHP

SYSTEM – FULL INHERITED
Administrators – FULL INHERITED
HELMWEBUSERS – Read, Execute, List

C:\PHP\uploadtemp

SYSTEM – FULL INHERITED
Administrators – FULL INHERITED
HELMWEBUSERS – Write
CREATOR OWNER – Read, Write, Delete, Change Permission

C:\PHP\sessiondata

SYSTEM – FULL INHERITED
Administrators – FULL INHERITED
HELMWEBUSERS – Write
CREATOR OWNER – Read, Write, Delete, Change Permission

C:\Perl

SYSTEM – FULL INHERITED
Administrators – FULL INHERITED

C:\Perl\bin
SYSTEM – FULL INHERITED
Administrators – FULL INHERITED
HELMWEBUSERS – Read, Execute, List

C:\Perl\lib

SYSTEM – FULL INHERITED
Administrators – FULL INHERITED
HELMWEBUSERS – Read, Execute, List

C:\Inetpub\mailroot\Drop

SYSTEM – FULL INHERITED
Administrators – FULL INHERITED
INTERACTIVE – LIST INHERITED
NETWORK SERVICE – LIST INHERITED
HELMWEBUSERS – Read

C:\Inetpub\ftproot

SYSTEM – FULL INHERITED
Administrators – FULL INHERITED
INTERACTIVE – LIST INHERITED
NETWORK SERVICE – LIST INHERITED
HELMFTPUSERS – Read
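
For reference, ACLs like these can also be applied from the command line with icacls (shipped with Server 2003 SP2 and later); a sketch, assuming the group names above match your setup:

icacls "C:\PHP" /grant "HELMWEBUSERS:(OI)(CI)RX"
icacls "C:\PHP\uploadtemp" /grant "HELMWEBUSERS:(OI)(CI)W"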

This is quoted material; the original address: http://ipucu.enderunix.org/view.php?id=605&lang=tr

CREATING AN LVM

Example 1:

Let's create an LVM from two disks defined in the system as 100GB /dev/sdb and
150GB /dev/sdc, 250GB in total.

1. pvcreate /dev/sdb (we introduce the disk to the system as a physical volume.)

2. pvcreate /dev/sdc (we introduce the disk to the system as a physical volume.)

3. vgcreate vg0 /dev/sdb /dev/sdc (we create a volume group named vg0.)

4. lvcreate -L150GB vg0 (we create a logical drive.)

5. vgdisplay (we check the resulting logical drive
and see it has been created as /dev/vg0/lvol0.)

6. mkfs.ext3 /dev/vg0/lvol0 (we format it; you can use whichever file system you want.)

7. mkdir /mydisk (we create a directory on which to mount the disk.)

8. mount /dev/vg0/lvol0 /mydisk (we mount the disk there.)

Example 2:

Now let's add the 250GB /dev/sdd disk defined in the system to the vg0 group, add a 120GB
portion of that disk to the lvol0 logical drive, and mount the new space.

(Warning: the data inside lvol0 is naturally lost during this operation.)

1. umount /mydisk (we unmount /dev/vg0/lvol0.)

2. Using fdisk, we turn a 120GB portion of the /dev/sdd disk into /dev/sdd1.

3. pvcreate /dev/sdd1

4. vgextend vg0 /dev/sdd1 (we add /dev/sdd1 to the vg0 group.)

5. lvextend -L+120GB /dev/vg0/lvol0 /dev/sdd1 (we add sdd1 to the logical drive named lvol0.)

6. mkfs.ext3 /dev/vg0/lvol0

7. mkdir /mydisk (we create a directory on which to mount the disk.)

8. mount /dev/vg0/lvol0 /mydisk (we mount the disk there.)
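
Note: if you want to grow lvol0 without wiping its contents, skip the mkfs step and instead resize the existing ext3 file system after extending; a sketch:

umount /mydisk
lvextend -L+120GB /dev/vg0/lvol0 /dev/sdd1
e2fsck -f /dev/vg0/lvol0 (a forced check is required before an offline resize.)
resize2fs /dev/vg0/lvol0 (grows the file system to fill the logical drive.)
mount /dev/vg0/lvol0 /mydisk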

Example 3:

God forbid, but let's suppose the system above with LVM on it has crashed (I mean the operating system).
Now let's introduce the disk group named vg0 to another computer that has LVM installed.

1. First, save somewhere the file /etc/lvm/backup/vg0 from the crashed computer, which
holds the records of the definitions we made.

2. Shut down the crashed computer, remove its disks, and write each disk's name on it. (For example: sdb)

3. Attach these disks to a working Linux machine. (Careful: a disk takes its name from whichever port you attach it to on the new machine.)

Assuming there is a single IDE disk in this machine, that disk will be named hda.

Assuming the machine has serial ATA ports, if we attach the disk labeled sdb to port 1, the disk labeled sdc to port 2, and the disk
labeled sdd to port 3, the naming becomes:

Old machine    New machine

sdb            sda
sdc            sdb
sdd            sdc

4. We adapt the new layout according to the information in the file we saved earlier.

5. Now let's recreate the group named vg0 that was created in the first two examples.

pvcreate /dev/sda
pvcreate /dev/sdb
pvcreate /dev/sdc1

vgcreate vg0 /dev/sda /dev/sdb /dev/sdc1
lvcreate -L270GB vg0

6. We have now defined the group from the old system on the new system.

7. mkdir /myoldgrup (we create a directory on which to mount the group.)

8. mount /dev/vg0/lvol0 /myoldgrup (we mount our logical drive on this directory.)

And that's it! Congratulations, you have recovered the data on your system. Go and check whether the files are in place :))
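
Note: with LVM2 the volume group metadata is stored on the disks themselves, so on the new machine it is often enough to scan for and activate the existing group instead of recreating it; re-running pvcreate/lvcreate on disks that still hold data overwrites their metadata. A sketch:

vgscan (scans the attached disks for volume groups.)
vgchange -ay vg0 (activates vg0 and its logical drives.)
mount /dev/vg0/lvol0 /myoldgrup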

COMMANDS AND THEIR DESCRIPTIONS:

pvcreate : Introduces a disk or partition to the system as a physical volume. (Generally used after fdisk.)

E.g.: pvcreate /dev/sdb1

pvdisplay : Lists the physically defined disks and partitions on screen.

E.g.: pvdisplay

pvremove : Cancels the physical definition of a disk or partition.

E.g.: pvremove /dev/sdb1

Volume Group Commands

vgcreate : Creates a volume group.

E.g.: vgcreate vg0 /dev/sdb1 /dev/sdb2

vgextend : Adds a disk or partition to a volume group.

E.g.: vgextend vg0 /dev/sdc
E.g.: vgextend vg0 /dev/sdb1

vgreduce : Removes a disk or partition from a volume group.

E.g.: vgreduce vg0 /dev/sdc
E.g.: vgreduce vg0 /dev/sdb1

vgremove : Removes a volume group.

E.g.: vgremove vg0

vgdisplay : Lists the volume group.

E.g.: vgdisplay vg0

vgcfgbackup : Backs up the current volume group's settings to a file.

E.g.: vgcfgbackup vg0 (after this command the settings are backed up under the /etc/lvm/backup directory.)

vgcfgrestore : Restores the current volume group's settings from that file.

E.g.: vgcfgrestore vg0

Logical Volume Commands

lvcreate : Creates a disk area from a defined volume group.

Example: from a volume group named vg0 with a total of 300GB of disk space,
let's create one 80GB and one 40GB logical drive.

lvcreate -L80GB vg0 (After this command an 80GB logical drive named /dev/vg0/lvol0 is created.)
lvcreate -L40GB vg0 (After this command a 40GB logical drive named /dev/vg0/lvol1 is created.)

lvdisplay : Lists the logical drives on screen.

lvremove : Removes a logical drive.

Example: lvremove /dev/vg0/lvol0

lvreduce : Removes blocks from a defined logical drive.

Example: lvreduce -L-10GB /dev/vg0/lvol0 (frees 10GB of space.)

lvextend : Adds blocks to a defined logical drive.

Example: lvextend -L+10GB /dev/vg0/lvol0 (adds 10GB of space.)

Other LVM Commands

lvm : Lists the LVM commands on screen together with their descriptions.

lvmdiskscan : Lists all the disks in the system.

Note: LVM has more advanced options and advantages compared to other RAID systems.
LVM has only been summarized here. Of course, whichever disk program or system is used,
you must work carefully in order to prevent data loss.

For future reference, and should you be using your server in an environment where people can upload their own ASP code: unregister the FSO (FileSystemObject); most web hosts do.

To Unregister the FileSystem COM Object
At the command prompt – type:
regsvr32 scrrun.dll /u
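
Should you need the FileSystemObject again later, the same command without the /u switch re-registers it:

regsvr32 scrrun.dll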

Let's block every proxy out there that is "ethical" enough to announce itself, and get some peace.

Just drop this into .htaccess:


RewriteEngine On
RewriteCond %{HTTP:VIA} !^$ [OR]
RewriteCond %{HTTP:FORWARDED} !^$ [OR]
RewriteCond %{HTTP:USERAGENT_VIA} !^$ [OR]
RewriteCond %{HTTP:X_FORWARDED_FOR} !^$ [OR]
RewriteCond %{HTTP:PROXY_CONNECTION} !^$ [OR]
RewriteCond %{HTTP:XPROXY_CONNECTION} !^$ [OR]
RewriteCond %{HTTP:HTTP_PC_REMOTE_ADDR} !^$ [OR]
RewriteCond %{HTTP:HTTP_CLIENT_IP} !^$
# allow requests referred from certain sites
RewriteCond %{HTTP_REFERER} !(.*)allowed-domain.tld(.*)
RewriteRule ^(.*)$ - [F]
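
You can verify the block works by requesting a page with one of those proxy headers set; a quick test with curl (yoursite.tld is a placeholder), which should come back 403 Forbidden:

curl -I -H "Via: 1.1 someproxy" http://yoursite.tld/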

TAKEN FROM: http://www.rfxn.com/nginx-caching-proxy/

Nginx: Caching Proxy

Recently I started to tackle a load problem on one of my personal sites; the issue was that of a poorly written but exceedingly MySQL-heavy application and the load it would induce on the SQL server when 400-500 people were hammering the site at once. Further compounding this was Apache’s inability to gracefully handle excessive requests on object-heavy pages (i.e.: images). This left me with a site that was almost unusable during peak hours — or worse — would crash the MySQL server and take Apache with it through frenzied F5ing from users.

I went through all the usual rituals in an effort to better the situation, from PHP APC then Eaccelerator, to mod_proxy+mod_cache, to tuning Apache timeouts/prefork settings and adjusting MySQL cache/buffer options. The extreme was setting up a MySQL replication cluster with MySQL-Proxy doing RW splitting/load balancing across the cluster and memcached, but this quickly turned into a beast to manage and memcached was eating memory at phenomenal rates.

Although I did improve things a bit, I had done so at the expense of vastly increased hardware demand and complexity. However, the site was still choking during peak hours, and in a situation where switching applications and/or getting it reprogrammed was not at all an option, I had to start thinking outside the box, or more to the point, outside Apache.

I have experience with lighttpd and the Pound reverse proxy; they are both phenomenal applications, but neither handles caching directly in a graceful fashion (in Pound’s case, not at all). This is when I took a look at nginx, which to date I had never tried but had heard many great things about. I fired up a new Xen guest running CentOS 5.4, 2GB RAM & 2 CPU cores….. an hour later I had nginx installed, configured and proxy-caching traffic for the site in question.

The impact was immediate and significant — the SQL server loads dropped from an average of 4-5 down to 0.5-1.0 and the web server loads were near non-existent from previously being on the brink of crashing every afternoon.

Enough with my ramblings, let’s get into nginx. You can download the latest release from http://nginx.org and although I could not find a binary version of it, compiling was straightforward with no real issues.

First up we need to satisfy some requirements for the configure options we will be using; I encourage you to look at the ‘./configure --help’ list of available options, as there are some nice features at your disposal.

yum install -y zlib zlib-devel openssl-devel gd gd-devel pcre pcre-devel

Once the above packages are installed we are good to go with downloading and compiling the latest version of nginx:

wget http://nginx.org/download/nginx-0.8.36.tar.gz
tar xvfz nginx-0.8.36.tar.gz
cd nginx-0.8.36/
./configure --with-http_ssl_module --with-http_realip_module --with-http_addition_module --with-http_image_filter_module --with-http_gzip_static_module
make && make install

This will install nginx into ‘/usr/local/nginx’; if you would like to relocate it you can use ‘--prefix=/path’ on the configure options. The path layout for nginx is very straightforward; for the purpose of this post we are assuming the defaults:

[root@atlas ~]# ls /usr/local/nginx
conf  fastcgi_temp  html  logs  sbin

[root@atlas nginx]# cd /usr/local/nginx

[root@atlas nginx]# ls conf/
fastcgi.conf  fastcgi.conf.default  fastcgi_params  fastcgi_params.default  koi-utf  koi-win  mime.types  mime.types.default  nginx.conf  nginx.conf.default  win-utf

The layout will be very familiar to anyone that has worked with Apache and true to that, nginx breaks the configuration down into a global set of options and then the individual web site virtual host options. The ‘conf/’ folder might look a little intimidating but you only need to be concerned with the nginx.conf file which we are going to go ahead and overwrite, a copy of the defaults is already saved for you as nginx.conf.default.

My nginx configuration file is available at http://www.rfxn.com/downloads/nginx.conf.atlas, be sure to rename it to nginx.conf or copy the contents listed below into ‘conf/nginx.conf’:

user  nobody nobody;

worker_processes     4;
worker_rlimit_nofile 8192;

pid /var/run/nginx.pid;

events {
  worker_connections 2048;
}

http {
    include       mime.types;
    default_type  application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] '
                    '"$request" $status  $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log  logs/nginx_access.log  main;
    error_log  logs/nginx_error.log debug;

    server_names_hash_bucket_size 64;
    sendfile on;
    tcp_nopush     on;
    tcp_nodelay    off;
    keepalive_timeout  30;

    gzip  on;
    gzip_comp_level 9;
    gzip_proxied any;

    proxy_buffering on;
    proxy_cache_path /usr/local/nginx/proxy levels=1:2 keys_zone=one:15m inactive=7d max_size=1000m;
    proxy_buffer_size 4k;
    proxy_buffers 100 8k;
    proxy_connect_timeout      60;
    proxy_send_timeout         60;
    proxy_read_timeout         60;

    include /usr/local/nginx/vhosts/*.conf;
}

Lets take a moment to review some of the more important options in nginx.conf before we move along…

user nobody nobody;
If you are running this on a server with an apache install or other software using the user ‘nobody’, it might be wise to create a user specifically for nginx (i.e: useradd nginx -d /usr/local/nginx -s /bin/false)

worker_processes 4;
This should reflect the number of CPU cores which you can find out by running ‘cat /proc/cpuinfo | grep processor‘ — I recommend a setting of at least 2 but no more than 6, nginx is VERY efficient.

proxy_cache_path /usr/local/nginx/proxy … inactive=7d max_size=1000m;
The ‘inactive’ option is the maximum age of content in the cache path and the ‘max_size’ is the maximum on disk size of the cache path. If you are serving up lots of object heavy content such as images, you are going to want to increase this.

proxy_send|read_timeout 60;
These timeout values are important: if you run any scripts through admin interfaces or other maintenance URLs, these values will cause the proxy to time them out. That said, increase them to sane values as appropriate; anything more than 300 is probably excessive, and you should consider running such tasks from cron jobs.

Apache style MaxClients
Finally, the maximum number of connections, or MaxClients, that nginx can accept is determined by worker_processes * worker_connections / 2 (2 fds per session) = 4096 MaxClients in our configuration.

Moving along we need to create two paths that we defined in our configuration, the first is the content caching folder and the second is where we will create our vhosts.

mkdir /usr/local/nginx/proxy /usr/local/nginx/vhosts /usr/local/nginx/client_body_temp /usr/local/nginx/fastcgi_temp  /usr/local/nginx/proxy_temp

chown nobody.nobody /usr/local/nginx/proxy /usr/local/nginx/vhosts /usr/local/nginx/client_body_temp /usr/local/nginx/fastcgi_temp  /usr/local/nginx/proxy_temp

Let’s go ahead and get our initial vhosts file created; my template is available from http://www.rfxn.com/downloads/nginx.vhost.conf and should be saved to ‘/usr/local/nginx/vhosts/myforums.com.conf’, the contents of which are as follows:

server {
    listen 80;
    server_name myforums.com www.myforums.com;

    access_log  logs/myforums.com_access.log  main;
    error_log  logs/myforums.com_error.log debug;

    location / {
        proxy_pass http://10.10.6.230;
        proxy_redirect     off;
        proxy_set_header   Host             $host;
        proxy_set_header   X-Real-IP        $remote_addr;
        proxy_set_header   X-Forwarded-For  $proxy_add_x_forwarded_for;

        proxy_cache               one;
        proxy_cache_key         backend$request_uri;
        proxy_cache_valid       200 301 302 20m;
        proxy_cache_valid       404 1m;
        proxy_cache_valid       any 15m;
        proxy_cache_use_stale   error timeout invalid_header updating;
    }

    location /admin {
        proxy_pass http://10.10.6.230;
        proxy_set_header   Host             $host;
        proxy_set_header   X-Real-IP        $remote_addr;
        proxy_set_header   X-Forwarded-For  $proxy_add_x_forwarded_for;
    }
}

The obvious change you want to make is ‘myforums.com’ to whatever domain you are serving; you can list multiple aliases in the server_name string, such as ‘server_name domain.com www.domain.com sub.domain.com;’ (nginx takes a plain space-separated list of names, with no ‘alias’ keyword). Now, let’s take a look at some of the important options in the vhosts configuration:

listen 80;
This is the port which nginx will listen on for this vhost. By default, unless you specify an IP address with it, you will bind port 80 on all local IPs for nginx — you can limit this by setting the value as ‘listen 10.10.3.5:80;’.

proxy_pass http://10.10.6.230;
Here we are telling nginx where to find our content, aka the backend server. This should be an IP, and it is also important not to forget the ‘proxy_set_header Host’ option, so that the backend server knows which vhost to serve.

proxy_cache_valid
This allows us to define cache times based on the HTTP status codes of our content; for 99% of traffic it will fall under the ‘200 301 302 20m’ value. If you are running a lot of dynamic content you may want to lower this from 20m to 10m or 5m; any lower defeats the purpose of caching. The ‘404 1m’ value ensures that not-found pages are not stored for long, in case you are updating the site or have a temporary error, but also prevents 404s from choking up the backend server. Then the ‘any 15m’ value grabs all other content and caches it for 15m; again, if you are running a very dynamic site you may want to lower this.

proxy_cache_use_stale
When the cache has stale content, that is, content which has expired but not yet been updated, nginx can serve it in the event errors are encountered. Here we are telling nginx to serve stale cache data if there is an error/timeout/invalid header when talking to the backend servers, or if another nginx worker process is busy updating the cache. This is really useful in the event your web server crashes, as clients will still receive data from the cache.

location /admin
With this location statement we are telling nginx to take all requests to ‘http://myforums.com/admin’ and pass them directly to our backend server with no further interaction — no caching.

That’s it! You can start nginx by running ‘/usr/local/nginx/sbin/nginx’, it should not generate any errors if you did everything right! To start nginx on boot you can append the command into ‘/etc/rc.local’. All you have to do now is point the respective domain DNS records to the IP of the server running nginx and it will start proxy-caching for you. If you wanted to run nginx on the same host as your Apache server you could set Apache to listen on port 8080 and then adjust the ‘proxy_pass’ options accordingly as ‘proxy_pass http://127.0.0.1:8080;’.
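
Tip: before starting nginx, or after any configuration change, you can have it validate the configuration without actually starting:

/usr/local/nginx/sbin/nginx -t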

Extended Usage:
If you wanted to have nginx serve static content instead of Apache, since Apache is so horrible at it, we need to declare a new location option in our vhosts/*.conf file. We have two options here: we can either point nginx to a local path with our static content, or have nginx cache our static content and retain it for longer periods of time — the latter is far simpler.

Serve static content from a local path:

        location ~* ^.+\.(jpg|jpeg|gif|png|ico|css|zip|tgz|gz|rar|bz2|doc|xls|exe|pdf|ppt|txt|tar|mid|midi|wav|bmp|rtf|js)$ {
            root   /home/myuser/public_html;
            expires 1d;
        }

In the above, we are telling nginx that our static content is located at ‘/home/myuser/public_html’; the request URI is appended to this root path, so when a user requests ‘http://www.mydomain.com/img/flyingpigs.jpg’, nginx will look for it at ‘/home/myuser/public_html/img/flyingpigs.jpg’. The expires option can have values in seconds, minutes, hours or days — if you have a lot of dynamic images on your site then you might consider an option like 2h or 30m; anything lower defeats the purpose. Using this method has a slight performance benefit over the cache option below.

Serve static content from cache:

        location ~* ^.+\.(jpg|jpeg|gif|png|ico|css|zip|tgz|gz|rar|bz2|doc|xls|exe|pdf|ppt|txt|tar|mid|midi|wav|bmp|rtf|js)$ {
             proxy_cache_valid 200 301 302 120m;
             expires 2d;
             proxy_pass http://10.10.6.230;
             proxy_cache one;
        }

With this setup we are telling nginx to cache our static content just like we did with the parent site itself, except that we are defining an extended time period for which the content is valid/cached. The time values are: the cached content is valid for 2 hours (nginx updates its cache after that), and every 2 days the content expires for clients (their browser cache expires and they request it again). Using this method is simple and does not require copying static content to a dedicated nginx host.

We can also do load balancing very easily with nginx. This is done by setting an alias for a group of servers; we then use this alias in place of addresses in our ‘proxy_pass’ settings. In the ‘upstream’ option shown below, we list all of the web servers that load should be distributed across:

  upstream my_server_group {
    server 10.10.6.230:8000 weight=1;
    server 10.10.6.231:8000 weight=2 max_fails=3  fail_timeout=30s;
    server 10.10.6.15:8080 weight=2;
    server 10.10.6.17:8081;
  }

This must be placed in the ‘http { }’ section of the ‘conf/nginx.conf’ file; then the server group can be used in any vhost. To do this we would replace ‘proxy_pass http://10.10.6.230;’ with ‘proxy_pass http://my_server_group;’. The requests will be distributed across the server group in a round-robin fashion, with respect to the weighted values, if any. If a request to one of the servers fails, nginx will try the next server until it finds a working one. In the event no working servers can be found, nginx will fall back to stale cache data, and ultimately an error if that is not available.

Conclusion:
This has turned into a longer post than I had planned but oh well, I hope it proves to be useful. If you need any help on the configuration options, please check out http://wiki.nginx.org, it covers just about everything one could need.

Although I noted this nginx setup is deployed on a Xen guest (CentOS 5.4, 2GB RAM & 2 CPU cores), it proved to be so efficient, that these specs were overkill for it. You could easily run nginx on a 1GB guest with a single core, a recycled server or locally on the Apache server. I should also mention that I took apart the MySQL replication cluster and am now running with a single MySQL server without issue — down from 4.