Sunday, August 31, 2008

How to Set Up Samba Using the GUI

You're going to need an easy way to share files between Linux and Windows. In this article we will walk through setting up a Samba share on your Suse 10 machine that you can map as a drive on your Windows machine. To start, click the house icon in the lower left of your Suse 10 desktop to open a File Manager as a normal user. This will put you in your home directory, which is normally /home/yourusername; in my case it is /home/frodo.



Next we will create a new folder in our home directory and name it webshare.
Click the Edit Menu and choose Create New -> Folder, then enter the name of the folder



You have probably figured out by now that Linux lets you have as many desktops as you like. I have configured six of them, as you can see numbered 1 through 6 in the menu bar at the bottom of my screen (above). To configure how many desktops you have, right-click one of the numbered squares in the menu bar and choose Configure Desktops. In the screen above, Desktop 1 is active and I have my File Manager open in that desktop. Next, click on Desktop 2 and we will open YaST on this desktop: System -> Control Center (YaST). You will be prompted for the root password.



As shown above, click the Network Services item on the left then click Samba Server to configure Samba and add our new webshare folder as a share.
You will be prompted for a domain name. If you have a local domain name enter it here. If not feel free to make something up like we did during the Suse 10 installation when we chose a host name and domain in the network settings. I put middleearth.home for mine as shown below.


I don't have a domain controller on my local domain at home and I don't want to configure Samba as a domain controller, so I checked Not a Domain Controller.


Next I set it to start Samba during bootup, then I click the Shares tab to set up my share. Also be sure to check the box to open a port in the firewall so your local network can access the share.



Now I create a share named webshare that points to my /home/frodo/webshare folder. I unchecked Inherit ACLs and unchecked Read Only.


Next Click Finish as shown below.



I had to make some modifications to the Samba configuration file to get the share to actually work for me. I'm in no way a Samba or Linux guru yet, so don't assume this is necessarily best practice, but it is what worked for me and it should work for you. If someone with more expertise wants to email me a better procedure, I will happily update this article. You'll need to open System -> File Manager -> File Manager Super User Mode to do this, because only root has permission on the configuration files. Now browse to /etc/samba/ and open smb.conf in a text editor.



Scroll to the bottom of the file and you'll see the section for the share you just set up in YaST. Add the missing settings as shown above; of course, put your Linux user name where I have frodo. Now save and close the file.
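For reference, the share section that YaST writes at the bottom of smb.conf looks something like the sketch below. The valid users and force user lines are my assumption about the kind of per-user settings you may need to add by hand (frodo stands in for your user name); your exact settings may differ.

```ini
[webshare]
    comment = Web Share
    path = /home/frodo/webshare
    read only = No
    inherit acls = No
    valid users = frodo
    force user = frodo
```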

Before I move on I'd like to talk a little about Linux configuration in general. While there are some nice GUI tools like YaST for configuring many of the settings on your Linux machine, most of the GUI configuration tools are really just a front end on top of the text files where the settings are stored. Almost everything you will ever need to configure is located in a text file somewhere below the /etc folder. It's just a matter of knowing which file has which settings and what the possible options are for each setting. There is a lot of documentation available for these settings, both online and in the man and info pages on the Linux machine itself.

Those coming from a Windows world sometimes perceive this as an indicator that Linux administration is difficult. But think about it: as an ASP.NET developer you are probably very familiar with storing settings for your web apps in the Web.config file, and editing those settings seems trivial to you. When you are familiar with the file and the settings it seems very easy, and the fact that all you need is a text editor is very convenient, isn't it? I say this because this is a paradigm shift I went through: I used to think Linux administration was difficult, and then as I became familiar with the files and settings of interest to me my point of view changed.

Now, getting back to setting up the share: we need to restart the Samba service for our changes to smb.conf to take effect. Open YaST again, or switch to the desktop where you already have it open. Then, as shown below, click System in the left pane and then System Services (Runlevel).



This is where you can configure which services are running, similar to what you would do on Windows using the Services management console.



Click on smb (Samba File and Print Services) as shown above, then click Disable to stop the service.



After it stops, click it again and then click Enable to start the service again.



While you're in the services administration section of YaST, scroll up the service list and see if apache2 is already running. If not, start that service as well by clicking it and then clicking Enable. Now click Finish and it will save your settings.

Most likely you are, like me, working at home, so you don't have a Domain Name System (DNS) server to resolve the names of the computers on your local network. Since I want to be able to connect to my Suse 10 machine by name, I will create an entry in my hosts file to map a friendly name to the IP address of the Suse 10 machine. On Windows, this file is typically located at c:\Windows\system32\drivers\etc\

Interesting that we also have an etc folder on Windows, isn't it? Anyway, the hosts file in that folder is just a text file; you can open it in Notepad or a text editor of your choosing (don't use a word processor).

You will see a line with

127.0.0.1 localhost


This is how every machine knows that the hostname localhost refers to itself. The name localhost is conventional on both Windows and Linux. 127.0.0.1 is what is known as the loopback IP address because it always points to the machine you are on.
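On a Linux machine you can check this entry directly; here is a small sketch, assuming the conventional localhost line is present in /etc/hosts:

```shell
# Pull the loopback line out of the local hosts file
loopback=$(grep -m1 '^127\.0\.0\.1' /etc/hosts)
echo "$loopback"
```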

Now we will add the IP address of your Suse 10 machine with a friendly name of your choosing. I put the following two lines:

172.16.0.21 isengard

172.16.0.21 isengard.middleearth.home

Note that you can map multiple names to the same IP address. I chose names that correspond to the host name I set up on my Suse machine, but I could put whatever I want for host name mappings, like

172.16.0.21 hamsandwich

This will resolve the host name(s) to the IP address, at least for this Windows machine. Save the hosts file and then, in a web browser on your Windows machine, go to http://isengard (or whatever host name you used) and you should see the default Apache web page served from your Suse 10 machine. Nice, huh?

Finally, open Windows Explorer and choose Tools -> Map Network Drive, then pick an available drive letter, and for the Folder put \\isengard\webshare (of course, replace isengard with whatever host name you put in your hosts file).

Stay tuned for the next article, where we will finally learn to set up mojoPortal on the Suse 10 machine using Apache virtual hosts. Learning this will allow you to set up as many sites as you like on your Suse 10 machine so you can develop and test your own applications.

Thanks
to www.joeaudette.com

Sunday, August 24, 2008

Backing Up MySQL Data Using a Bash Script

How to back up your MySQL tables and data every night using a bash script and cron

Summary:
This tutorial will show you how to write a simple bash shell script which will extract your database schema, compress the data and email you the backup. Utilising cron, this script can be configured to run in the early hours of the morning when your web server is least active.

After completing your database-enabled web site, you need an automated method for backing up all that valuable data. Below is a bash shell script which can be used to back up all your clients' databases using a nightly cron job.
Bash Shell Script (mysqlbackup)

#!/bin/sh
# Dump each database to its own .sql file
mysqldump -uroot -ppwd --opt db1 > /sqldata/db1.sql
mysqldump -uroot -ppwd --opt db2 > /sqldata/db2.sql

# Compress all the dumps into a single archive
cd /sqldata/
tar -zcvf sqldata.tgz *.sql

# Email the archive offsite
cd /myscripts/
perl emailsql.cgi

A bash script is a text file containing commands that can be interpreted by the bash shell. Above is a cut-down version of the original script, which I keep in a directory called /myscripts/. This is important for when we look at adding the script to the crontab.

The first line of the script tells the operating system (Unix) where to find the shell interpreter; you may need to change this line to work on your system. The second and third lines call the MySQL utility mysqldump, which is used to export the data; the output from each command is then redirected into a text file.

For example, the first mysqldump statement is built from the following parts :-

* -u = your MySQL username. (substitute root with your username)
* -p = your MySQL password. (substitute pwd with your password)
* --opt adds the most common and useful command line options, resulting in the quickest possible export. This option automatically includes --add-drop-table, --add-locks, --extended-insert, --quick and --lock-tables.
* the database name to extract. (substitute db1 with your database name)
* the > /sqldata/db1.sql redirects all the output to a file called db1.sql in the directory /sqldata/. You can create the file in any directory you have rights to; however, for consistency I would suggest naming the resulting .sql file the same as the database name.

You simply repeat this process for each database you want to back up. The next lines change to the /sqldata/ directory and perform a tar compression, adding all the .sql files into one archive file called sqldata.tgz. After changing back to the scripts directory, I finally run a Perl script (emailsql.cgi) which attaches the sqldata.tgz archive to an email and forwards it to two offsite email accounts. Alternatively you could FTP the sqldata.tgz to an offsite machine.
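If you want to see the tar step in isolation before wiring up MySQL, here is a minimal sketch using throwaway files in a temporary directory rather than real dumps:

```shell
# Simulate a couple of database dumps in a scratch directory
workdir=$(mktemp -d)
cd "$workdir"
echo "CREATE TABLE t1 (id INT);" > db1.sql
echo "CREATE TABLE t2 (id INT);" > db2.sql

# Same flags as the backup script: gzip (-z), create (-c), verbose (-v), file (-f)
tar -zcvf sqldata.tgz *.sql

# List the archive contents to confirm both dumps made it in
contents=$(tar -ztf sqldata.tgz)
echo "$contents"
```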

After creating the script, you need to make it executable by changing its file permissions to 700 with chmod. At this point you should be able to test the script by entering /myscripts/mysqlbackup at the shell prompt.
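The permission change looks like this; the sketch below demonstrates on a throwaway file, so substitute the real path to your script:

```shell
# Create a stand-in script file to demonstrate on
script=$(mktemp)
echo '#!/bin/sh' > "$script"

# 700 = read/write/execute for the owner, nothing for anyone else
chmod 700 "$script"

# Verify: -x tests that the file is executable by us
if [ -x "$script" ]; then echo "script is executable"; fi
```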
The emailsql.cgi script.

The example Perl script below shows how you can attach the archive to an email and send it to your inbox. This Perl script requires the MIME::Lite module, which you may need to install on your server
(How to install Perl Modules).

#!/usr/bin/perl -w
use MIME::Lite;

# Build the message with the backup details in the body
$msg = MIME::Lite->new(
    From    => 'mysqlbackup@yoursite.co.id',
    To      => 'you@yoursite.co.id',
    Subject => 'sqldata.tgz MySQL backup!',
    Type    => 'text/plain',
    Data    => "Here are the MySQL database backups.");

# Attach the compressed archive
$msg->attach(Type     => 'application/x-tar',
             Path     => "/sqldata/sqldata.tgz",
             Filename => "sqldata.tgz");

$msg->send;

Adding the Script to Cron

Cron is a scheduling tool for Unix; it allows you to specify when a program or script should be run. To edit your current cron table, enter crontab -e at the system prompt. This will load your current cron table into your default text editor; when you save and exit the editor, the crontab file will be loaded and ready for use.


0 2 * * * /myscripts/mysqlbackup
0 5 * * 0 /myscripts/reindex


The above example shows my current crontab. The file has two entries, one for each script I wish to run. The first entry tells cron to run the mysqlbackup script every morning at 2am. The second entry runs my search engine indexer every Sunday morning at 5am.

There are five fields for setting the date and time separated by a space, followed by the name of the script you wish to run. The five time settings are in the following order.

* Minutes - in the range of 0 - 59
* Hour - in the range of 0 - 23
* Day of month - in the range 1 - 31
* Month - in the range 1 - 12
* Day of week - in the range 0 - 6 (0 = Sunday)

Any field containing a * matches every possible value, so for example a * in the day-of-month field will run the script every single day of the month at the specified time.
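To make the field order concrete, this little sketch pulls the time fields out of the first crontab entry above:

```shell
entry='0 2 * * * /myscripts/mysqlbackup'

# Fields in order: minute, hour, day of month, month, day of week, then the command
minute=$(echo "$entry" | awk '{print $1}')
hour=$(echo "$entry" | awk '{print $2}')
command=$(echo "$entry" | awk '{print $6}')

echo "$command runs daily at $hour:0$minute"
```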
Additional References

* BASH - GNU Project
* BASH reference manual
* TAR - GNU Project
* MIME::Lite Module

Saturday, August 23, 2008

CPAN - Installing Perl Modules

When you have been developing Perl CGI scripts for a while, you will invariably need to install one of the many useful modules located in the Comprehensive Perl Archive Network (CPAN).
CPAN is a group of servers around the world which provide access to Perl source code and hundreds of modules that have been contributed by volunteers.

Installing Modules

Part of what makes CPAN powerful is that Perl supports it directly with the CPAN.pm module, which is distributed with Perl. Many of the database projects we have developed would not have been possible without the templating system HTML::Template.

On most operating systems, you can install a CPAN module by typing the following from the command line :-

perl -MCPAN -e 'install HTML::Template'

where HTML::Template is the name of the module you wish to install.

This command will automatically find, download, compile, and install the module onto your system.

If this is the first time you have run the command on your system, you may be prompted with multiple-choice questions so that the CPAN module can find the nearest FTP server.

ActiveState Perl and Windows Systems

If you are using ActiveState Perl on a Windows-based server, then you can use the PPM command-line utility.

From the /bin sub folder of Perl type:- ppm
then type:- install HTML::Template

Using CPAN modules without Root access

If you are using a web hosting account which doesn't give you access to the operating system to install Perl modules as described above, you can still use modules by setting up your own lib folder.

Typically the best place to locate the lib folder is under your cgi-bin directory. The tree structure below shows where you would need to upload the Template.pm file to use HTML::Template.

The module name will determine where the .pm files need to be located under your lib directory. The text prior to the :: will become the directory name.
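As a sketch, the layout looks like this (the script name is illustrative; only the lib/HTML path matters):

```
cgi-bin/
    lib/
        HTML/
            Template.pm
    yourscript.cgi
```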






For example the Calc.pm module that comes with Date::Calc will need to go into a sub-directory of lib called Date.



To use the module within a Perl script, you need to call the lib pragma, passing it the full system path to your lib folder. You can then use the modules in the normal way;









see the example code below. Please note some CPAN modules rely on other modules being present, and if that's the case then you will need to upload them as well.

#!/path/to/perl -w
use strict;

# Point Perl at your private module directory
use lib '/full/path/to/cgi-bin/lib';
use HTML::Template;

# .. rest of code

Tuesday, August 19, 2008

A Summary of iptables Scripting

iptables is the firewall scripting tool on Linux with kernel 2.4 and above. Sometimes people use a tool to build their firewall rules for them; among such security tools are Arno's iptables firewall, Wondershaper, Linux Firewall and NAT for DSL, Easy Firewall, and others.

I want to make this summary because, actually, I'm happier writing the script by hand.

_____________________________________________________________________________________

Logging connections with IPtables

Logging ALL incoming and outgoing traffic


iptables -A OUTPUT -j LOG
iptables -A INPUT -j LOG
iptables -A FORWARD -j LOG
iptables -t nat -A PREROUTING -j LOG
iptables -t nat -A POSTROUTING -j LOG
iptables -t nat -A OUTPUT -j LOG

Description: The above commands will enable logging of all input/output/forwarded/routed traffic to the /var/log/messages file. (The log file depends on your syslog settings.)

A Customized Logging Chain to Log all ssh connections


iptables -N LOGIT # dedicated chain for logging new connections
iptables -A LOGIT -m state --state ESTABLISHED -j RETURN # don't log established traffic
iptables -A LOGIT -j LOG
iptables -A LOGIT -j RETURN

The above commands create a new chain, LOGIT, and set it to log everything except packets belonging to established connections. Now let's use this chain.

iptables -A INPUT -p tcp --dport 22 -j LOGIT

Description: It will log all connections to port 22 (SSH).

Below is the complete shell script for the above logging.
#!/bin/bash
iptables -N LOGIT # dedicated chain for logging new connections

iptables -A LOGIT -m state --state ESTABLISHED -j RETURN # don't log established traffic
iptables -A LOGIT -j LOG
iptables -A LOGIT -j RETURN

iptables -A INPUT -p tcp --dport 22 -j LOGIT
#end

A reverse script to delete the above iptables config.
#!/bin/bash

iptables -D LOGIT -m state --state ESTABLISHED -j RETURN
iptables -D LOGIT -j LOG
iptables -D LOGIT -j RETURN

iptables -D INPUT -p tcp --dport 22 -j LOGIT
iptables -X LOGIT


#end
_________________________________________________________________________________________________

Blocking traffic with IPtables

Blocking an IP (Drop connection)

Example: iptables -A INPUT -s 192.168.0.1 -j DROP

Blocking an IP (Rejecting connection)

Example: iptables -A INPUT -s 192.168.0.1 -j REJECT

Blocking access of an IP to a certain port

Example: iptables -A INPUT -p tcp -s 192.168.1.50 --dport 110 -j REJECT
Description: This will reject connections from 192.168.1.50 on port 110.
Example: iptables -A INPUT -p udp -s 192.168.1.50 --dport 52 -j REJECT
Description: This will reject UDP traffic from 192.168.1.50 on port 52.

Blocking All Incoming Traffic at a Port

Example: iptables -A INPUT -p tcp --dport 110 -j REJECT
Description: This will reject ALL incoming connections/traffic at port 110.

Blocking Incoming Pings

Example: iptables -A INPUT -p icmp -j DROP
Description: Useful to protect against automated network scans that detect live IPs.

Blocking access to an external IP from within your server

Example: iptables -A OUTPUT -p tcp -d 192.168.1.50 -j REJECT
Description: This will block access to 192.168.1.50 from within your server, meaning your server's users cannot reach that IP from the server.

Blocking access to an external port of an external IP

Example: iptables -A OUTPUT -p tcp -d 192.168.1.50 --dport 25 -j REJECT
Description: Port 25 of 192.168.1.50 will not be accessible from within your server.


Routing with IPtables

Redirecting a tcp port to another port

Example: iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-ports 8080
Description: Port 80 will be redirected to port 8080, meaning that if you connect to port 80 of this server you will actually be connected to 8080.

Redirecting traffic from specific ip at a tcp port to another port

Example: iptables -t nat -A PREROUTING -p tcp -s 192.168.1.40 --dport 80 -j REDIRECT --to-ports 8080
Description: All traffic from 192.168.1.40 to port 80 will be redirected to port 8080, meaning that if 192.168.1.40 connects to port 80 of this server it will actually be connected to 8080.
Note: The REDIRECT target can only redirect traffic to the machine itself. To route traffic elsewhere, use DNAT (see below).

Routing traffic from specific port to another server


Example:
echo 1 > /proc/sys/net/ipv4/ip_forward
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
iptables -t nat -A PREROUTING -p tcp -d 10.10.10.10 --dport 72 -j DNAT --to 33.55.37.226:25
Description: The above commands will route traffic arriving at port 72 of IP 10.10.10.10 to port 25 of IP 33.55.37.226. The first line enables IP forwarding, and the MASQUERADE rule rewrites the source address so replies return through this machine.


Listing and Deleting current rules

Example: iptables -L
Description: It will list all chains and rules

Example: iptables -L chain_name
Description: It will list all rules in a specific chain

Example: iptables -D LOGIT -j LOG
Description: It will delete the specific rule; the rule must be written exactly as it was when executed. (Alternatively, list rules with iptables -L --line-numbers and delete by position, e.g. iptables -D INPUT 3.)

Example: iptables -F chain_name
Description: It will delete all rules in chain_name

Example: iptables -F
Description: It will delete all rules in all chains


This article is adapted from http://www.openpages.info/.

Sunday, August 17, 2008

What is Bandwidth?

BandWidth Explained

Most hosting companies offer a variety of bandwidth options in their plans. So exactly what is bandwidth as it relates to web hosting? Put simply, bandwidth is the amount of traffic that is allowed to occur between your web site and the rest of the internet. The amount of bandwidth a hosting company can provide is determined by their network connections, both internal to their data center and external to the public internet.

Network Connectivity

The internet, in the simplest of terms, is a group of millions of computers connected by networks. These connections within the internet can be large or small depending upon the cabling and equipment used at a particular internet location. It is the size of each network connection that determines how much bandwidth is available. For example, if you use a DSL connection to connect to the internet, you have 1.54 megabits (Mb) of bandwidth. Bandwidth is therefore measured in bits (a single 0 or 1). Bits are grouped into bytes, which form words, text, and the other information that is transferred between your computer and the internet.

If you have a DSL connection to the internet, you have dedicated bandwidth between your computer and your internet provider. But your internet provider may have thousands of DSL connections to their location. All of these connections aggregate at your internet provider, who then has their own dedicated connection (or multiple connections) to the internet which is much larger than your single connection. They must have enough bandwidth to serve your computing needs as well as those of all their other customers. So while you have a 1.54Mb connection to your internet provider, your internet provider may have a 255Mb connection to the internet so it can accommodate you and roughly 165 other such connections (255 / 1.54 ≈ 165).

Traffic

A very simple analogy to use to understand bandwidth and traffic is to think of highways and cars. Bandwidth is the number of lanes on the highway and traffic is the number of cars on the highway. If you are the only car on a highway, you can travel very quickly. If you are stuck in the middle of rush hour, you may travel very slowly since all of the lanes are being used up.

Traffic is simply the number of bits transferred over network connections. It is easiest to understand traffic using examples. One gigabyte is 2 to the 30th power (1,073,741,824) bytes. One gigabyte is equal to 1,024 megabytes. To put this in perspective, it takes one byte to store one character. Imagine 100 file cabinets in a building, each cabinet holding 1,000 folders. Each folder has 100 papers, and each paper contains 100 characters; a GB is all the characters in the building. An MP3 song is about 4MB, the same song in WAV format is about 40MB, and a full-length movie can be 800MB to 1,000MB (1,000MB = 1GB).

If you were to transfer this MP3 song from a web site to your computer, you would create 4MB of traffic between the web site you are downloading from and your computer. Depending upon the network connection between the web site and the internet, the transfer may occur very quickly, or it could take time if other people are also downloading files at the same time. If, for example, the web site you download from has a 10MB connection to the internet, and you are the only person accessing that web site to download your MP3, your 4MB file will be the only traffic on that web site. However, if three people are all downloading that same MP3 at the same time, 12MB (3 x 4MB) of traffic has been created. Because the host in this example only has 10MB of bandwidth, someone will have to wait. The network equipment at the hosting company will cycle through each person downloading the file and transfer a small portion at a time so each person's file transfer can take place, but the transfer for everyone downloading the file will be slower. If 100 people all came to the site and downloaded the MP3 at the same time, the transfers would be extremely slow. If the host wanted to decrease the time it took to download files simultaneously, it could increase the bandwidth of its internet connection (at a cost, due to upgrading equipment).

Hosting Bandwidth

In the example above, we discussed traffic in terms of downloading an MP3 file. However, each time you visit a web site you are creating traffic, because in order to view that web page on your computer, the page is first downloaded to your computer (between the web site and you) and then displayed using your browser software (Internet Explorer, Netscape, etc.). The page itself is simply a file that creates traffic just like the MP3 file in the example above (however, a web page is usually much smaller than a music file).

A web page may be small or large depending upon the amount of text and the number and quality of images integrated within it. For example, the home page for CNN.com is about 200KB (200 Kilobytes = 200,000 bytes = 1,600,000 bits). That is large for a web page. In comparison, Yahoo's home page is about 70KB.

How Much Bandwidth Is Enough?

It depends (don't you hate that answer). But in truth, it does. Since bandwidth is a significant determinant of hosting plan prices, you should take time to determine just how much is right for you. Almost all hosting plans measure bandwidth per month, so you need to estimate the amount of bandwidth that will be required by your site on a monthly basis.

If you do not intend to provide file download capability from your site, the formula for calculating bandwidth is fairly straightforward:

Average Daily Visitors x Average Page Views x Average Page Size x 31 x Fudge Factor

If you intend to allow people to download files from your site, your bandwidth calculation should be:

[(Average Daily Visitors x Average Page Views x Average Page Size) +
(Average Daily File Downloads x Average File Size)] x 31 x Fudge Factor

Let us examine each item in the formula:

Average Daily Visitors - The number of people you expect to visit your site, on average, each day. Depending upon how you market your site, this number could be from 1 to 1,000,000.

Average Page Views - On average, the number of web pages you expect a person to view. If you have 50 web pages in your web site, an average person may only view 5 of those pages each time they visit.

Average Page Size - The average size of your web pages, in Kilobytes (KB). If you have already designed your site, you can calculate this directly.

Average Daily File Downloads - The number of downloads you expect to occur on your site. This is a function of the numbers of visitors and how many times a visitor downloads a file, on average, each day.

Average File Size - Average file size of files that are downloadable from your site. Similar to your web pages, if you already know which files can be downloaded, you can calculate this directly.

Fudge Factor - A number greater than 1. Using 1.5 would be safe, which assumes that your estimate is off by 50%. However, if you were very unsure, you could use 2 or 3 to ensure that your bandwidth requirements are more than met.

Usually, hosting plans offer bandwidth in terms of Gigabytes (GB) per month. This is why our formula takes daily averages and multiplies them by 31.
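Plugging some hypothetical figures into the first formula makes the arithmetic concrete. All the numbers below are example estimates only; substitute your own:

```shell
# Hypothetical estimates -- substitute your own figures
visitors=500        # average daily visitors
page_views=5        # average page views per visit
page_size_kb=50     # average page size in KB
fudge=1.5           # fudge factor (assume the estimate is off by 50%)

# visitors x page views x page size x 31 days x fudge, converted KB -> GB
monthly_gb=$(awk -v v="$visitors" -v p="$page_views" -v s="$page_size_kb" -v f="$fudge" \
    'BEGIN { printf "%.1f", v * p * s * 31 * f / (1024 * 1024) }')
echo "Estimated bandwidth: $monthly_gb GB per month"
```

With these numbers the estimate comes out at roughly 5.5 GB per month, so a plan offering less than that would risk over-usage fees.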

Summary

Most personal or small business sites will not need more than 1GB of bandwidth per month. If you have a web site that is composed of static web pages and you expect little traffic to your site on a daily basis, go with a low bandwidth plan. If you go over the amount of bandwidth allocated in your plan, your hosting company could charge you over usage fees, so if you think the traffic to your site will be significant, you may want to go through the calculations above to estimate the amount of bandwidth required in a hosting plan.

Thursday, August 14, 2008

SQL command

The basic commands for working with a database are SQL (Structured Query Language). Because I don't remember all the commands, I'm writing them down here, so whenever I need them I can look them up from anywhere in the world (with an internet connection, of course). The list below follows the PostgreSQL command reference.

ABORT -- abort the current transaction
ALTER DATABASE -- change a database
ALTER GROUP -- add users to a group or remove users from a group
ALTER TABLE -- change the definition of a table
ALTER TRIGGER -- change the definition of a trigger
ALTER USER -- change a database user account
ANALYZE -- collect statistics about a database
BEGIN -- start a transaction block
CHECKPOINT -- force a transaction log checkpoint
CLOSE -- close a cursor
CLUSTER -- cluster a table according to an index
COMMENT -- define or change the comment of an object
COMMIT -- commit the current transaction
COPY -- copy data between files and tables
CREATE AGGREGATE -- define a new aggregate function
CREATE CAST -- define a user-defined cast
CREATE CONSTRAINT TRIGGER -- define a new constraint trigger
CREATE CONVERSION -- define a user-defined conversion
CREATE DATABASE -- create a new database
CREATE DOMAIN -- define a new domain
CREATE FUNCTION -- define a new function
CREATE GROUP -- define a new user group
CREATE INDEX -- define a new index
CREATE LANGUAGE -- define a new procedural language
CREATE OPERATOR -- define a new operator
CREATE OPERATOR CLASS -- define a new operator class for indexes
CREATE RULE -- define a new rewrite rule
CREATE SCHEMA -- define a new schema
CREATE SEQUENCE -- define a new sequence generator
CREATE TABLE -- define a new table
CREATE TABLE AS -- create a new table from the results of a query
CREATE TRIGGER -- define a new trigger
CREATE TYPE -- define a new data type
CREATE USER -- define a new database user account
CREATE VIEW -- define a new view
DEALLOCATE -- remove a prepared query
DECLARE -- define a cursor
DELETE -- delete rows of a table
DROP AGGREGATE -- remove a user-defined aggregate function
DROP CAST -- remove a user-defined cast
DROP CONVERSION -- remove a user-defined conversion
DROP DATABASE -- remove a database
DROP DOMAIN -- remove a user-defined domain
DROP FUNCTION -- remove a user-defined function
DROP GROUP -- remove a user group
DROP INDEX -- remove an index
DROP LANGUAGE -- remove a user-defined procedural language
DROP OPERATOR -- remove a user-defined operator
DROP OPERATOR CLASS -- remove a user-defined operator class
DROP RULE -- remove a rewrite rule
DROP SCHEMA -- remove a schema
DROP SEQUENCE -- remove a sequence
DROP TABLE -- remove a table
DROP TRIGGER -- remove a trigger
DROP TYPE -- remove a user-defined data type
DROP USER -- remove a database user account
DROP VIEW -- remove a view
END -- commit the current transaction
EXECUTE -- execute a prepared query
EXPLAIN -- show the execution plan of a statement
FETCH -- retrieve rows from a table using a cursor
GRANT -- define access privileges
INSERT -- create new rows in a table
LISTEN -- listen for a notification
LOAD -- load or reload a shared library file
LOCK -- explicitly lock a table
MOVE -- position a cursor on a specified row of a table
NOTIFY -- generate a notification
PREPARE -- create a prepared query
REINDEX -- rebuild corrupted indexes
RESET -- restore the value of a run-time parameter to a default value
REVOKE -- remove access privileges
ROLLBACK -- abort the current transaction
SELECT -- retrieve rows from a table or view
SELECT INTO -- create a new table from the results of a query
SET -- change a run-time parameter
SET CONSTRAINTS -- set the constraint mode of the current transaction
SET SESSION AUTHORIZATION -- set the session user identifier and the current user identifier of the current session
SET TRANSACTION -- set the characteristics of the current transaction
SHOW -- show the value of a run-time parameter
START TRANSACTION -- start a transaction block
TRUNCATE -- empty a table
UNLISTEN -- stop listening for a notification
UPDATE -- update rows of a table
VACUUM -- garbage-collect and optionally analyze a database

Wednesday, August 13, 2008

Basic Command in Linux

The basic commands used in Linux are common to every distro:

ifconfig - Configures and displays the IP parameters of a network interface
route - Used to set static routes and view the routing table
hostname - Necessary for viewing and setting the hostname of the system
netstat - Flexible command for viewing information about network statistics, current connections, listening ports
arp - Shows and manages the arp table
mii-tool - Used to set the interface parameters at data link layer (half/full duplex, interface speed, autonegotiation...)
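As a quick sketch of how these classic tools are typically invoked (read-only queries only; output varies per machine, and on recent distros ifconfig/route/netstat may be absent in favour of the iproute2 tools, so fallbacks are included):

```shell
# Print this machine's hostname (falls back to uname -n if hostname is absent)
hostname 2>/dev/null || uname -n

# Show all interfaces and their IP parameters (newer distros may ship only ip)
ifconfig -a 2>/dev/null || ip addr show

# View the kernel routing table
route -n 2>/dev/null || ip route show

# List listening TCP ports
netstat -tln 2>/dev/null || ss -tln
```

Changing parameters (e.g. ifconfig eth0 192.168.1.10) requires root, so the sketch sticks to viewing.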

Many distros now include the iproute2 tools with enhanced routing and networking features:

ip - Multi purpose command for viewing and setting TCP/IP parameters and routes.
tc - Traffic control command, used for classifying, prioritizing, sharing, and limiting both inbound and outbound traffic.
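A minimal sketch of the iproute2 equivalents (viewing only; changing routes or tc settings would require root):

```shell
# Addresses on all interfaces (replaces ifconfig -a)
ip addr show

# Routing table (replaces route -n)
ip route show

# Queueing disciplines currently attached to interfaces (traffic control)
tc qdisc show 2>/dev/null || echo "tc not installed"
```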

Every distro has its own configuration tools that operate on variously defined configuration files.
Some of them are common: /etc/resolv.conf, /etc/nsswitch.conf, /etc/hosts, /etc/services, /etc/protocols
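For example, a typical /etc/resolv.conf is just a few lines (the addresses and domain below are made-up examples):

```
# DNS servers to query, in order
nameserver 192.168.1.1
nameserver 192.168.1.2
# domain appended to unqualified hostnames
search example.home
```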

Others, typically the ones where IP addresses and routes are defined, differ between distros. Here are some relevant files for various distros; their syntax may vary according to the scripts used to handle them:



Debian
/etc/network/interfaces - Interfaces and network parameters

RedHat
Graphical interface: redhat-config-network
/etc/sysconfig/network-scripts/ifcfg-* - Configuration files for each interface.

The same file can be found, divided per profile, in /etc/sysconfig/networking/devices/*
/etc/sysconfig/network - Hostname, default gateway, general configuration
/etc/sysconfig/static-routes - Static routes (if any)
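As an illustration, a static-IP /etc/sysconfig/network-scripts/ifcfg-eth0 might look like the following sketch (all values are made-up examples; the available keys vary with the initscripts version):

```
DEVICE=eth0
BOOTPROTO=static
IPADDR=192.168.1.10
NETMASK=255.255.255.0
GATEWAY=192.168.1.1
ONBOOT=yes
```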

Slackware
Graphical interface: Netconfig
/etc/rc.d/rc.inet1 - IP and network parameters
/etc/rc.d/rc.inet2 - Network Services configuration

Mandrake
Graphical interface: Drakconnect
/etc/sysconfig/network-scripts/ifcfg-* - Configuration files for each interface. The same file can be found, divided per profile, in /etc/sysconfig/networking/devices/*
/etc/sysconfig/network - Hostname, default gateway, general configuration
/etc/sysconfig/static-routes - Static routes (if any)

Gentoo
/etc/conf.d/net - IP and network interface parameters
/etc/conf.d/routes - Static routes

SUSE
Graphical interface: Yast2
/etc/sysconfig/network/ifcfg-* - Configuration files for each interface.
/etc/sysconfig/network/config - General network configuration.

Fedora - Network configuration

Network configuration on Fedora 2 is quite similar to that of other versions of Red Hat Linux.
Besides the standard files, the main configuration is done in /etc/sysconfig/network, where the hostname is defined and the default gateway can be placed, and in the files of the /etc/sysconfig/network-scripts/ directory.

The TCP/IP network setup is done with the script /etc/init.d/network, which obviously must be started before other network services on a connected machine.
The official graphical configuration tool is system-config-network (Menu System Settings - Network). From here it is possible to define the IP parameters for all the interfaces found on the system (Devices tab; modifies the /etc/sysconfig/network-scripts/ifcfg-interface and /etc/sysconfig/networking/devices/ifcfg-interface files), the IPs of the DNS servers (DNS tab; modifies /etc/resolv.conf), and static host IP assignments (Hosts tab; modifies /etc/hosts).

Fedora also supports per-user profiles with different network settings.
The Network Configuration tool easily lets the user define a profile and its parameters; the relevant system files are placed in the directory /etc/sysconfig/networking/profiles/profilename/. Currently Fedora does not allow the selection of a profile at boot time: when the machine is started the default "Common" profile is used. To switch to a custom one, either launch the system-config-network graphical tool and select your profile, or type system-config-network-cmd -p profilename --activate.
Red Hat provides other network configuration tools:
netconfig is an old text-mode configuration tool; it is obsolete but may still be used for a quick configuration.
system-config-network-tui is the text version of the graphical Network Configuration Tool.
system-config-network-druid (Menu System tools - Internet configuration wizard) is a guided wizard that eases the configuration of Ethernet, modem, ISDN, DSL, and wireless connections.

Firewall configuration
Red Hat stores the firewall configuration in the /etc/sysconfig/iptables file, which is formatted so that it can be used by the iptables-restore command. Firewalling is managed with the /etc/init.d/iptables script, which can be followed by arguments like start to activate firewalling, stop to disable it, panic to shut down any Internet access, and status to view the current iptables rules.
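A minimal /etc/sysconfig/iptables in iptables-restore format might look like this sketch (a default-drop INPUT policy that allows loopback, established traffic, and SSH; the rules are illustrative only):

```
*filter
:INPUT DROP [0:0]
:FORWARD DROP [0:0]
:OUTPUT ACCEPT [0:0]
-A INPUT -i lo -j ACCEPT
-A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
-A INPUT -p tcp --dport 22 -j ACCEPT
COMMIT
```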
A simple and not extremely flexible configuration tool is system-config-firewall, which is adequate for a desktop machine but surely not for a router/firewall.

Tuesday, August 12, 2008

How do spammers harvest email addresses

This post is actually taken from http://www.private.org.il/harvest.html

I'm copying it here because I'm bad at remembering the original address, so I keep it on my blog for whenever I run into this problem again.

Credit to Uri Raz


There are many ways in which spammers can get your email address. The ones I know of are:

  1. From posts to UseNet with your email address.

    Spammers regularly scan UseNet for email addresses, using ready-made programs designed to do so. Some programs just look at article headers which contain email addresses (From:, Reply-To:, etc), while other programs check the articles' bodies, starting with programs that look at signatures, through programs that take everything that contains a '@' character and attempt to demunge munged email addresses.

    There have been reports of spammers demunging email addresses on occasion, ranging from demunging a single address for purposes of revenge spamming to automatic methods that try to unmunge email addresses that were munged in some common ways, e.g. by removing such strings as 'nospam' from email addresses.

    As people who were spammed frequently report that the spam frequency to their mailbox dropped sharply after a period in which they did not post to UseNet, and given the evidence of spammers' chase after 'fresh' and 'live' addresses, this technique seems to be the primary source of email addresses for spammers.
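    As a toy illustration of the automatic demunging described above (the address and the 'nospam' token are made up; real harvesters handle many more patterns):

```shell
# Strip a common munging token from an address; prints john@example.com
echo "john.nospam@example.nospam.com" | sed 's/\.nospam//g'
```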

  2. From mailing lists.

    Spammers regularly attempt to get the lists of subscribers to mailing lists [some mail servers will give those upon request], knowing that the email addresses are unmunged and that only a few of the addresses are invalid.

    When mail servers are configured to refuse such requests, another trick might be used - spammers might send an email to the mailing list with the headers Return-Receipt-To: or X-Confirm-Reading-To: . Those headers would cause some mail transfer agents and reading programs to send email back saying that the email was delivered to / read at a given email address, divulging it to spammers.

    A different technique used by spammers is to request that a mailing list server give them the list of all mailing lists it carries (an option implemented by some mailing list servers for the convenience of legitimate users), and then send the spam to the mailing list's address, leaving the server to do the hard work of forwarding a copy to each subscribed email address.

    [I know spammers use this trick from bad experience - some spammer used this trick on the list server of the company for which I work, easily covering most of the employees, including employees working well under a month and whose email addresses would be hard to find in other ways.]

  3. From web pages.

    Spammers have programs which spider through web pages, looking for email addresses, e.g. email addresses contained in mailto: HTML tags [those you can click on and get a mail window opened]

    Some spammers even target their mail based on web pages. I discovered that a web page of mine had appeared in Yahoo when some spammer, harvesting email addresses from each new page appearing in Yahoo, sent me a spam regarding that web page.

    A widely used technique to fight this technique is the 'poison' CGI script. The script creates a page with several bogus email addresses and a link to itself. Spammers' software visiting the page would harvest the bogus email addresses and follow the link, entering an infinite loop that pollutes their lists with bogus email addresses.

    For more information about the poison script, see http://www.monkeys.com/wpoison/

  4. From various web and paper forms.

    Some sites request various details via forms, e.g. guest books & registration forms. Spammers can get email addresses from those either because the form becomes available on the world wide web, or because the site sells / gives the email list to others.

    Some companies would sell / give email lists filled in on paper forms, e.g. organizers of conventions would make a list of participants' email addresses, and sell it when it's no longer needed.

    Some spammers would actually type E-mail addresses from printed material, e.g. professional directories & conference proceedings.

    Domain name registration forms are a favourite as well - addresses are most usually correct and updated, and people read the emails sent to them expecting important messages.

  5. Via an Ident daemon.

    Many unix computers run a daemon (a program which runs in the background, initiated by the system administrator), intended to allow other computers to identify people who connect to them.

    When a person surfing from such a computer connects to a web site or news server, the site or server can connect back to the person's computer and ask that daemon for the person's email address.

    Some chat clients on PCs behave similarly, so using IRC can cause an email address to be given out to spammers.

  6. From a web browser.

    Some sites use various tricks to extract a surfer's email address from the web browser, sometimes without the surfer noticing it. Those techniques include :

    1. Making the browser fetch one of the page's images through an anonymous FTP connection to the site.

      Some browsers would give the email address the user has configured into the browser as the password for the anonymous FTP account. A surfer not aware of this technique will not notice that the email address has leaked.

    2. Using JavaScript to make the browser send an email to a chosen email address with the email address configured into the browser.

      Some browsers would allow email to be sent when the mouse passes over some part of a page. Unless the browser is properly configured, no warning will be issued.

    3. Using the HTTP_FROM header that browsers send to the server.

      Some browsers pass a header with your email address to every web server you visit. To check if your browser simply gives your email address to everybody this way, visit http://www.cs.rochester.edu/u/ferguson/BrowserCheck.cgi

    It's worth noting here that when one reads E-mail with a browser (or any mail reader that understands HTML), the reader should be aware of active content (Java applets, Javascript, VB, etc) as well as web bugs.

    An E-mail containing HTML may contain a script that upon being read (or even the subject being highlighted) automatically sends E-mail to any E-mail addresses. A good example of this case is the Melissa virus. Such a script could send the spammer not only the reader's E-mail address but all the addresses on the reader's address book.
    http://www.cert.org/advisories/CA-99-04-Melissa-Macro-Virus.html

    A web bugs FAQ by Richard M. Smith can be read at http://www.tiac.net/users/smiths/privacy/wbfaq.htm

  7. From IRC and chat rooms.

    Some IRC clients will give a user's email address to anyone who cares to ask it. Many spammers harvest email addresses from IRC, knowing that those are 'live' addresses and send spam to those email addresses.

    This method is used alongside the annoying IRCbots that send messages interactively to IRC and chat rooms without attempting to recognize who is participating in the first place.

    This is another major source of email addresses for spammers, especially as this is one of the first public activities newbies join, making it easy for spammers to harvest 'fresh' addresses of people who might have very little experience dealing with spam.

    AOL chat rooms are the most popular of those - according to reports there's a utility that can get the screen names of participants in AOL chat rooms. The utility is reported to be specialized for AOL due to two main reasons - AOL makes the list of the actively participating users' screen names available and AOL users are considered prime targets by spammers due to the reputation of AOL as being the ISP of choice by newbies.

  8. From finger daemons.

    Some finger daemons are set to be very friendly - a finger query asking for john@host will produce list info including login names for all people named John on that host. A query for @host will produce a list of all currently logged-on users.

    Spammers use this information to get extensive user lists from hosts, and lists of active accounts - ones which are 'live' and will read their mail soon enough to be really attractive spam targets.

  9. AOL profiles.

    Spammers harvest AOL names from user profile lists, as it allows them to 'target' their mailing lists. Also, AOL has a reputation for being the ISP of choice for newbies, who might not know how to recognize scams or how to handle spam.

  10. From domain contact points.

    Every domain has one to three contact points - administration, technical, and billing. The contact point includes the email address of the contact person.

    As the contact points are freely available, e.g. using the 'whois' command, spammers harvest the email addresses from the contact points for lists of domains (the list of domains is usually made available to the public by the domain registries). This is a tempting method for spammers, as those email addresses are usually valid and mail sent to them is read regularly.

  11. By guessing & cleaning.

    Some spammers guess email addresses and send a test message (or a real spam) to a list which includes the guessed addresses. They then wait for either an error message to come back by email, indicating that the guessed address is invalid, or for a confirmation. A confirmation could be solicited by inserting non-standard but commonly used mail headers requesting that the delivery system and/or mail client send a confirmation of delivery or reading. No news is, of course, good news for the spammer.

    Specifically, the headers are -
    Return-Receipt-To: which causes a delivery confirmation to be sent, and
    X-Confirm-Reading-To: which causes a reading confirmation to be sent.

    Another method of confirming valid email addresses is sending HTML in the email's body (that is sending a web page as the email's content), and embedding in the HTML an image. Mail clients that decode HTML, e.g. as Outlook and Eudora do in the preview pane, will attempt fetching the image - and some spammers put the recipient's email address in the image's URL, and check the web server's log for the email addresses of recipients who viewed the spam.

    So it's good advice to set the mail client to *not* preview rich media emails, which would protect the recipient from both accidentally confirming their email address to spammers and viruses.
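    The tracking image described above boils down to a snippet like this hypothetical one embedded in the HTML mail (the host and parameter name are invented for illustration):

```
<!-- 1x1 "web bug": fetching it logs the recipient's address on the spammer's server -->
<img src="http://spammer.invalid/t.gif?addr=victim@example.com" width="1" height="1">
```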

    Guessing could be done based on the fact that email addresses are based on people's names, usually in commonly used ways (first.last@domain or an initial of one name followed / preceded by the other @domain)

    Also, some email addresses are standard - postmaster is mandated by the RFCs for internet mail. Other common email addresses are hostmaster, root [for unix hosts], etc.
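    The guessing patterns above can be sketched in a few lines of shell (the names and domain are placeholders):

```shell
first=john; last=smith; domain=example.com
ini=$(printf %.1s "$first")   # first initial, POSIX-portable

# Common guessed forms: first.last, first, initial+last, last+initial
printf '%s\n' \
  "$first.$last@$domain" \
  "$first@$domain" \
  "$ini$last@$domain" \
  "$last$ini@$domain"
```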

  12. From white & yellow pages.

    There are various sites that serve as white pages, sometimes named people finders web sites. Yellow pages now have an email directory on the web.

    Those white/yellow pages contain addresses from various sources, e.g. from UseNet, but sometimes your E-mail address will be registered for you. Example - HotMail will add E-mail addresses to BigFoot by default, making new addresses available to the public.

    Spammers go through those directories in order to get email addresses. Most directories prohibit email address harvesting by spammers, but as they hold large databases of email addresses plus names, they are a tempting target for spammers.

  13. By having access to the same computer.

    If a spammer has access to a computer, he can usually get a list of valid usernames (and therefore email addresses) on that computer.

    On unix computers the users file (/etc/passwd) is commonly world readable, and the list of currently logged-in users is listed via the 'who' command.

  14. From a previous owner of the email address.

    An email address might have been owned by someone else, who disposed of it. This might happen with dialup usernames at ISPs - somebody signs up with an ISP, has his/her email address harvested by spammers, and cancels the account. When somebody else signs up with the same ISP under the same username, the spammers already know of it.

    Similar things can happen with AOL screen names - somebody uses a screen name, gets tired of it, releases it. Later on somebody else might take the same screen name.

  15. Using social engineering.

    This method means the spammer uses a hoax to convince people into giving him valid E-mail addresses.

  16. A good example is Richard Douche's "Free CD's" chain letter. The letter promises a free CD for every person to whom the letter is forwarded, as long as it is CC'ed to Richard.

    Richard claimed to be associated with Amazon and Music blvd, among other companies, who authorized him to make this offer. Yet he supplied no references to web pages and used a free E-mail address.

    All Richard wanted was to get people to send him valid E-mail addresses in order to build a list of addresses to spam and/or sell.

  17. From the address book and emails on other people's computers.

    Some viruses & worms spread by emailing themselves to all the email addresses they can find in the email address book. As some people forward jokes and other material by email to their friends, putting their friends' email addresses in either the To: or Cc: fields rather than the BCc: field, some viruses and worms scan the mail folders for email addresses that are not in the address book, in the hope of hitting the addresses of the computer owner's friends' friends, friends' friends' friends, etc.

    If it wasn't already done, it's just a matter of time before such malware will not only spam copies of itself, but also send the extracted list of email addresses to its creator.

    As invisible email addresses can't be harvested, it's good advice to put the email addresses of recipients of jokes & the like on BCc:, and, if the material was forwarded from somebody else, to remove from the email's body all the email addresses inserted by the previous sender.

  18. Buying lists from others.

    This one covers three types of trades. The first type consists of buying a list of email addresses (often on CD) that were harvested via other methods, e.g. someone harvests email addresses from UseNet and sells the list either to a company that wishes to advertise via email (sometimes passing off the list as one of people who opted in for emailed advertisements) or to others who resell the list.

    The second type consists of a company that got the email addresses legitimately (e.g. a magazine that asks subscribers for their email in order to keep in touch over the Internet) and sells the list for the extra income. This extends to the selling of email addresses a company got via other means, e.g. from people who just emailed the company with inquiries in any context.

    The third type consists of technical staff selling email addresses to spammers for money. There was a news story about an AOL employee who sold AOL email addresses to a spammer.

  19. By hacking into sites.

    I've heard rumours that sites that supply free email addresses were hacked in order to get the list of email addresses, somewhat like e-commerce sites being hacked to get a list of credit cards.

If your address was harvested and you get spammed, the following pages could assist you in tracking the spammer down:

  1. MindSpring's page explaining how to get an email's headers
    http://help.mindspring.com/features/emailheaders/extended.htm

  2. The spam FAQ, maintained by Ken Hollis.
    http://digital.net/~gandalf/spamfaq.html
    http://www.cs.ruu.nl/wais/html/na-dir/net-abuse-faq/spam-faq.html

  3. The Reporting Spam page, an excellent resource.
    http://www.ao.net/waytosuccess/

  4. Reading Mail headers.
    http://www.stopspam.org/email/headers/headers.html

  5. Julian Haight's Spam Cop page.
    http://spamcop.net/

  6. Chris Hibbert's Junk Mail FAQ.
    http://www.fortnet.org/WidowNet/faqs/junkmail.htm

  7. Sam Spade, Spam hunter.
    http://samspade.org/

  8. Penn's Page of Spam.
    http://home.att.net/~penn/spam.htm

  9. WD Baseley's Address Munging FAQ
    http://members.aol.com/emailfaq/mungfaq.html

  10. Fight Spam on the Internet site
    http://spam.abuse.net/

  11. The Spam Recycling Center
    http://www.spamrecycle.com/

  12. The Junk Busters Site
    http://www.junkbusters.com/

  13. The Junk Email site
    http://www.junkemail.org/

  14. BCP 30: Anti-Spam Recommendations for SMTP MTAs
    ftp://ftp.isi.edu/in-notes/bcp/bcp30.txt

  15. FYI 28: Netiquette Guidelines
    ftp://ftp.isi.edu/in-notes/fyi/fyi28.txt

    FYI 35: DON'T SPEW
    A Set of Guidelines for Mass Unsolicited Mailings and Postings
    ftp://ftp.isi.edu/in-notes/fyi/fyi35.txt

Several sites on the web will help in tracing spam:

  1. Pete Bowden's list of traceroute gateways
    http://www.missing.com/traceroute.html
    To find traceroute gateways in any country, visit here.
    http://www.traceroute.org/

  2. Allwhois.com gates to whois on any domain world-wide
    http://www.allwhois.com/

  3. A list of whois servers, collected by Matt Power
    ftp://sipb.mit.edu/pub/whois/whois-servers.list

  4. Alldomains.com site - links to NICs worldwide.
    http://www.alldomains.com/
    A similar page can be found at
    http://www.forumnett.no/domreg.html

  5. The Coalition Against Unsolicited Commercial E-mail.
    http://www.cauce.org/
    The European CAUCE.
    http://www.euro.cauce.org/en/index.html
    The Coalition Against Unsolicited Bulk Email, Australia.
    http://www.caube.org.au/
    The Russian Anti-Spam organization.
    http://www.antispam.ru/

  6. No More Spam - ISP Spam-Blocking Interferes With Business
    http://www.byte.com/columns/digitalbiz/1999/04/0405coombs.html

  7. Removing the Spam, By Geoff Mulligan, Published by Addison Wesley, ISBN 0-201-37957-0
    A good book about handling spam.

Legal resources :

  1. FTC Consumer Alert - FTC Names Its Dirty Dozen: 12 Scams Most Likely to Arrive Via Bulk email
    http://www.ftc.gov/bcp/conline/pubs/alerts/doznalrt.htm

  2. Report to the Federal Trade Commission of the Ad-Hoc Working Group on Unsolicited Commercial Mail. http://www.cdt.org/spam/

  3. Pyramid Schemes, Ponzi Schemes, and Related Frauds
    http://www.impulse.net/~thebob/Pyramid.html

  4. The AOL vs. Cyberpromo case
    http://legal.web.aol.com/decisions/dljunk/cyber.html

    Nine New Lawsuits Press Release.
    http://legal.web.aol.com/decisions/dljunk/ninepress.html

  5. "Intel scores in email suit", by Jim Hu, CNET News.com.
    http://www.news.com/News/Item/0,4,29574,00.html?st.ne.ni.lh

  6. The John Marshall Law School spam page
    http://www.jmls.edu/cyber/index/spam.html

  7. First amendment issues related to UBE, by Paul L. Schmehl.
    http://www.utdallas.edu/~pauls/spam_law.html

  8. U.S. Anti-Spam Laws
    http://www.the-dma.org/antispam/statespamlaws.shtml

  9. The UK Data Protection Law
    http://www.dataprotection.gov.uk/

  10. The Italian Anti-Spam Law
    http://www.parlamento.it/parlam/leggi/deleghe/99185dl.htm

  11. The Austrian Telecom Law
    http://www.parlament.gv.at/pd/pm/XX/I/texte/020/I02064_.html
    http://www.bmv.gv.at/tk/3telecom/recht/tkg/inhalt.htm

  12. The Norwegian Marketing Control Act
    http://www.forbrukerombudet.no/html/engelsk/themcact.htm