Tuesday, 30 June 2015

Load Balancing Scheduling Algorithms


Round Robin
  • Essentially this is a simple mechanism in which content access requests are handled by the load balancer on a rotational basis: the first request is answered with the IP address of the first available content server, the second request with the second server's IP address, and so on. 
  • The moment a server's IP address has been handed out, that address is moved to the back of the list of available IP addresses; it gradually works its way back to the top of the list and becomes available again. 
  • How quickly it returns to the top depends on the number of available servers in the round robin cluster being used. 
  • A good way to think of this is server allocation in a continuous looping fashion.
  • With this method incoming requests are distributed sequentially across the server farm (cluster),  i.e. the available servers. 
  • If this method is selected, all the servers assigned to a Virtual Service should have similar resource capacity and host identical applications. 
  • Choose round robin if all servers have the same or similar performance and are running the same load. 
Weighted Round Robin
  • This method balances out the weakness of the simple round robin: incoming requests are distributed across the cluster in a sequential manner, while taking account of a static "weighting" that can be pre-assigned per server. 
  • The administrator simply defines the capacities of the available servers by weighting them. 
  • The most efficient server A, for example, is given a weighting of 100, whilst a much less powerful server B is weighted at 50. 
  • This means that Server A would always receive two consecutive requests before Server B receives its first one, and so on (see the sketch below). 
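
A minimal sketch of this ordering, assuming two hypothetical servers with a 2:1 weighting (this only illustrates the rotation, not any particular load balancer's implementation):

#!/bin/bash
# Weighted round robin sketch: serverA (weight 2) appears twice in the
# rotation, serverB (weight 1) appears once, so serverA gets two requests
# for every one that serverB gets.
servers=(serverA serverA serverB)
i=0
for request in req1 req2 req3 req4 req5 req6; do
    echo "$request -> ${servers[$((i % ${#servers[@]}))]}"
    i=$((i + 1))
done

Running it sends req1 and req2 to serverA, req3 to serverB, and so on, matching the behaviour described above.
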
Least Connection
  • Neither round robin method takes into account how many connections each server is maintaining over a given time. 
  • It could therefore happen that Server B is overloaded, even though it receives fewer connections than Server A, because the users of Server B maintain their connections longer. 
  • This means that the connections, and thus the load on the server, accumulate. 
  • This potential problem can be avoided with the "least connections" method: 
    • Requests are  distributed on the basis of the connections that every server is currently maintaining. 
    • The server  in the cluster with the least number of active connections automatically receives the next  request. 
    • Basically, the same principle applies here as for the simple round robin: the servers assigned to a Virtual Service should ideally have similar resource capacities. 
    • Please note that in configurations with low traffic rates, the traffic will not balance out and the  first server will be preferred. 
    • This is because if all the servers are equal, then the first server is  preferred. 
    • Until the traffic reaches a level where the first server continually has active traffic, the  first server will always be selected.

Fixed Weighted
  • Only the Real Server with the highest weight receives traffic; the other Real Servers act as standbys with lower weight values. 
  • However, if the highest weight server fails, the Real Server with the next highest weight becomes available to serve clients. 
  • The weight for each Real Server should therefore be assigned based on its priority among the Real Servers. 

How to install a package in Linux

Command-line process:
  • Compiling and Installing software from source/manual
  • Installing RPMs using the Red Hat Package Manager
  • Installing using Debian's apt-get
  • Installing with Fedora / yum

Compiling and Installing software from source/manual
  • Generally, when you download a package for installation that ends with .tgz, .gz, .bz2, or .zip, it will be a source installation.
  • If your file ends with "bz2" you will first have to uncompress the file with the command bunzip2 APPLICATION.tar.bz2. This will result in a new file like APPLICATION.tar. Tar is an archive system that rolls up directories into a file. To unpack the archive you would issue a command similar to tar xvf APPLICATION.tar. Unpacking the archive would then result in a directory (in our example) APPLICATION. 
  • If the downloaded file ended in tgz or gz then you have a compressed archive and you simply have to add the "z" switch to the tar command to both uncompress and unpack the archive. This command would look like tar xvfz APPLICATION.tgz, which would result in the directory APPLICATION.
  • Once you have your directory unpacked you need to change into that directory (with the command cd APPLICATION). Once inside this directory issue the ls command. You will most likely see either a README file or an INSTALL file. Open those up and see if there are any special instructions for installation. If there are no special instructions then the standard compilation steps will most likely work. Here's how this works:
    • su to the root user
    • From within the APPLICATION directory issue the command ./configure. This will generate a make file for the compilation.
    • Issue the command make.
    • Issue the command make install
  • That's it. If all went as planned, the application should be installed (a full example session is sketched below).
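
For instance, assuming a hypothetical download named APPLICATION.tar.gz (options and prerequisites vary per project, so always check the README/INSTALL first):

$ tar xvfz APPLICATION.tar.gz
$ cd APPLICATION
$ less README INSTALL          # check for project-specific instructions
$ ./configure                  # generates the Makefile
$ make                         # compiles the source
$ su -c "make install"         # installs as root (or: sudo make install)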

Installing RPMs using the Red Hat Package Manager
  • Installing via RPM is actually quite simple. Here's how this works. 
  • Once you have downloaded the rpm file you want to install, open up a terminal window and issue the following commands:
    • su (you will be prompted to enter the root password)
    • rpm -ivh filename.rpm (where filename is the actual name of the file you downloaded)
  • That's it. If all went well your package should now be installed.
  • If you want to make sure your package was installed you can issue the command rpm -q package_name (note: query by the package name, not the .rpm file name) and you should see the name of the package and the version that is installed.
  • If you want to remove that package you just installed (or another package) issue the command:
    • rpm -e package_name
  • and the package will disappear.
Installing software with Apt-get
  • This is one of the best installation systems available. With apt-get you do not have to download a package, you just have to know the name. Here's how apt-get works (I am going to assume Ubuntu is the distribution, so you'll make use of sudo). Open up a terminal window and issue the following:
    • sudo apt-get install package_name
    • to install the needed package.
  • To remove a package with apt-get you would issue the command:
    • sudo apt-get remove package_name
    • to remove the package from your system.

Installing with Fedora / yum
  • yum install package_name
  • yum remove package_name
  • yum update
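  • For example, assuming the Apache web server package (httpd) as a stand-in for any package name:
    • yum install httpd
    • yum info httpd (shows the installed/available version and a description)
    • yum remove httpd
    • yum update (updates all installed packages)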

Active FTP vs. Passive FTP

FTP

  • FTP is a TCP based service exclusively. 
  • There is no UDP component to FTP. 
  • FTP is an unusual service in that it utilizes two ports, a 'data' port and a 'command' port (also known as the control port). 
  • Traditionally these are port 21 for the command port and port 20 for the data port. 
  • The confusion begins however, when we find that depending on the mode, the data port is not always on port 20.


Active FTP
  • In active mode FTP the client connects from a random unprivileged port (N > 1023) to the FTP server's command port, port 21. 
  • Then, the client starts listening to port N+1 and sends the FTP command PORT N+1 to the FTP server. 
  • The server will then connect back to the client's specified data port from its local data port, which is port 20.

From the server-side firewall's standpoint, to support active mode FTP the following communication channels need to be opened:
  • FTP server's port 21 from anywhere (Client initiates connection)
  • FTP server's port 21 to ports > 1023 (Server responds to client's control port)
  • FTP server's port 20 to ports > 1023 (Server initiates data connection to client's data port)
  • FTP server's port 20 from ports > 1023 (Client sends ACKs to server's data port)
When drawn out, the connection appears as follows:
  • In step 1, the client's command port contacts the server's command port and sends the command PORT 1027.
  • The server then sends an ACK back to the client's command port in step 2. 
  • In step 3 the server initiates a connection on its local data port to the data port the client specified earlier. 
  • Finally, the client sends an ACK back as shown in step 4.
  • The main problem with active mode FTP actually falls on the client side. The FTP client doesn't make the actual connection to the data port of the server--it simply tells the server what port it is listening on and the server connects back to the specified port on the client. From the client side firewall this appears to be an outside system initiating a connection to an internal client--something that is usually blocked.

Passive FTP
In order to resolve the issue of the server initiating the connection to the client a different method for FTP connections was developed. This was known as passive mode, or PASV, after the command used by the client to tell the server it is in passive mode.
  • In passive mode FTP the client initiates both connections to the server, solving the problem of firewalls filtering the incoming data port connection to the client from the server. 
  • When opening an FTP connection, the client opens two random unprivileged ports locally  (N > 1023 and N+1). 
  • The first port contacts the server on port 21, but instead of then issuing a PORT command and allowing the server to connect back to its data port, the client will issue the PASV command.
  • The result of this is that the server then opens a random unprivileged port (P > 1023) and sends P back to the client in response to the PASV command. 
  • The client then initiates the connection from port N+1 to port P on the server to transfer data.

From the server-side firewall's standpoint, to support passive mode FTP the following communication channels need to be opened:
  • FTP server's port 21 from anywhere (Client initiates connection)
  • FTP server's port 21 to ports > 1023 (Server responds to client's control port)
  • FTP server's ports > 1023 from anywhere (Client initiates data connection to random port specified by server)
  • FTP server's ports > 1023 to remote ports > 1023 (Server sends ACKs (and data) to client's data port)
When drawn, a passive mode FTP connection looks like this:
  • In step 1, the client contacts the server on the command port and issues the PASV command.
  • The server then replies in step 2 with PORT 2024, telling the client which port it is listening to for the data connection. 
  • In step 3 the client then initiates the data connection from its data port to the specified server data port. 
  • Finally, the server sends back an ACK in step 4 to the client's data port.
  • While passive mode FTP solves many of the problems from the client side, it opens up a whole range of problems on the server side. The biggest issue is the need to allow any remote connection to high numbered ports on the server. Fortunately, many FTP daemons, including the popular WU-FTPD allow the administrator to specify a range of ports which the FTP server will use.
  • The second issue involves supporting and troubleshooting clients which do (or do not) support passive mode. As an example, the command line FTP utility provided with Solaris does not support passive mode, necessitating a third-party FTP client, such as ncftp. 
  • NOTE: This is no longer the case--use the -p option with the Solaris FTP client to enable passive mode!
  • With the massive popularity of the World Wide Web, many people prefer to use their web browser as an FTP client. Most browsers only support passive mode when accessing ftp:// URLs. This can either be good or bad depending on what the servers and firewalls are configured to support.
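  • Because the FTP control channel is plain text, you can watch this exchange yourself with tcpdump (a minimal sketch; eth0 is an assumed interface name, adjust for your system):
    • tcpdump -i eth0 -n -A 'tcp port 21'
  • In active mode you will see the client send a PORT command; in passive mode you will see the PASV command and the server's 227 reply announcing the high port it has opened for the data connection.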

DORA process in DHCP

DORA Process
1) D - Discover: Client makes a UDP Broadcast to the server with a DHCPDiscover, or Discover packet.

2) O - Offer: The server offers an IP address to the client. 
The server sends a DHCPOffer including other configuration parameters (DHCP Options) for the client, per the server's configuration file.

3) R - Request: In response to the offer, the client requests the offered address from the server. 
The client replies with a DHCPRequest (broadcast during the initial exchange, unicast when renewing), requesting the offered address.


4) A - Acknowledgement: The server sends a DHCPAck acknowledging the request, which is the client's final permission to take the address as offered. Before sending the ack, the server double checks that the offered address is still available and that the parameters match the client's request, and (if so) marks the address taken. 
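
To watch the DORA exchange on the wire, a minimal sketch using tcpdump and dhclient (eth0 is an assumed interface name; run as root):

# tcpdump -i eth0 -n 'udp port 67 or udp port 68'
# dhclient -r eth0        (release the current lease)
# dhclient -v eth0        (request a new lease; -v prints each DHCP message)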

DHCP
  • Dynamic Host Configuration Protocol, a protocol for assigning dynamic IP addresses to devices on a network. 
  • With dynamic addressing, a device can have a different IP address every time it connects to the network. 
  • In some systems, the device's IP address can even change while it is still connected. DHCP also supports a mix of static and dynamic IP addresses. 
  • Dynamic addressing simplifies network administration because the software keeps track of IP addresses rather than requiring an administrator to manage the task. 
  • This means that a new computer can be added to a network without the hassle of manually assigning it a unique IP address. 
  • Many ISPs use dynamic IP addressing for dial-up users. 
  • The DHCP server keeps all the information about its DHCP clients in a database.
  • The default DHCP server port is 67: the server listens on UDP port 67 for requests and responds to the client on UDP port 68.


The Concept of Lease
  • With all the necessary information on how DHCP works, one should also know that the IP  address assigned by DHCP server to DHCP client is on a lease. 
  • After the lease expires, the DHCP server is free to assign the same IP address to any other host or device requesting one. 
  • For example, keeping the lease time at 8-10 hours is helpful in the case of PCs that are shut down at the end of the day.  
  • So, lease has to be renewed from time to time. The DHCP client tries to renew the lease after half of the lease time has expired. 
  • This is done by the exchange of DHCPREQUEST and DHCPACK messages. 
  • While doing all this, the client enters the renewing stage.

How to send mail using telnet and mail command


MX Record
An MX record comprises an FQDN and a priority. The priority is simply a number which is used to choose which mail server to use if multiple MX records exist for a domain name. A mail server trying to send an email to you will always try the lowest priority number first.

e.g. <priority> <hostname>

How to find the MX (mail exchanges) of a domain/host name
#dig tel.example.com MX
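
The answer section of the dig output contains lines of the form below (the values here are purely illustrative):

tel.example.com.    3600    IN    MX    10 mail1.example.com.
tel.example.com.    3600    IN    MX    20 mail2.example.com.

Here 10 and 20 are the priorities, so a sending mail server would try mail1 first.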

How to send mail via telnet
#telnet tel.example.com 25
HELO client.example.com                          <identify yourself with a hostname; this starts the session>
MAIL from: <sender@example.com>
RCPT to: <recipient@example.com>
DATA                                             <enter the contents of the email after DATA>
Hi how are you??
.                                                <END the message with a period (.)>
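
A typical full exchange, including the numeric replies a server normally sends (the exact wording varies by server), looks like this:

220 tel.example.com ESMTP ready
HELO client.example.com
250 tel.example.com
MAIL from: <sender@example.com>
250 Ok
RCPT to: <recipient@example.com>
250 Ok
DATA
354 End data with <CR><LF>.<CR><LF>
Hi how are you??
.
250 Ok: queued
QUIT
221 Bye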

Attachments can be embedded using base64 encoding

Sending mails using mailx/mail command.

Install mailx command. 
#sudo yum install mailx

1: Simple mail
Run the following command, and then mailx would wait for you to enter the message of the email. You can hit enter for new lines. When done typing the message, press Ctrl+D and mailx would display EOT. After that mailx automatically delivers the email to the destination.
$ mail -s "This is the subject" someone@example.com
Hi someone
How are you
I am fine
Bye
EOT

2: Take message from a file
The message body of the email can be taken from a file as well.
$ mail -s "This is Subject" someone@example.com < /path/to/file
The message can also be piped using the echo command.
$ echo "This is message body" | mail -s "This is Subject" someone@example.com

3: Multiple recipients
To send the mail to multiple recipients, specify all the emails separated by a comma
$ echo "This is message body" | mail -s "This is Subject" someone@example.com,someone2@example.com

4: CC and BCC
The "-c" and "-b" options can be used to add CC and BCC addresses respectively.
$ echo "This is message body" | mail -s "This is Subject" -c ccuser@example.com someone@example.com

5: Specify From name and address
To specify a "FROM" name and address, use the "-r" option. The name should be followed by the address wrapped in "<>".
$ echo "This is message body" | mail -s "This is Subject" -r "Harry<harry@gmail.com>" someone@example.com

6: Specify "Reply-To" address
The reply to address is set with the internal option variable "replyto" using the "-S" option.
# replyto email
$ echo "This is message" | mail -s "Testing replyto" -S replyto="mark@gmail.com" someone@example.com

# replyto email with a name
$ echo "This is message" | mail -s "Testing replyto" -S replyto="Mark<mark@gmail.com>" someone@example.com

7: Attachments
Attachments can be added with the "-a" option.
$ echo "This is message body" | mail -s "This is Subject" -r "Harry<harry@gmail.com>" -a /path/to/file someone@example.com

8: Verbose - watch smtp communication
  • Use the -v option with mailx to watch the SMTP communication as the mail is delivered.
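For example:
$ echo "This is message body" | mail -v -s "This is Subject" someone@example.com
The -v flag prints the SMTP dialogue between mailx and the mail server as the message is delivered.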




Monday, 29 June 2015

Inode number

An inode is a “database” of all information about a file except the file contents and the file name.
In a file system, inodes consume roughly 1% of the total disk space, whether it is a whole storage unit (hard disk, thumb drive, etc.) or a partition on a storage unit. The inode space is used to “track” the files stored on the hard disk. The inode entries store metadata about each file, directory or object, but only point to these structures rather than storing the data. Each entry is 128 bytes in size. The metadata contained about each structure can include the following:
  • Inode number
  • Access Control List (ACL)
  • Extended attribute
  • Direct/indirect disk blocks
  • Number of blocks
  • File access, change and modification time
  • File deletion time
  • File generation number
  • File size
  • File type
  • Group
  • Number of links
  • Owner
  • Permissions
  • Status flags
To find the inode numbers of the directories, you can use the command “tree -a -L 1 --inodes /”.
To delete the file using the inode number, use the following command:
find ./ -inum number -exec rm -i {} \;
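To see how many inodes each mounted filesystem has, and how many are already in use, you can run:
df -i
A filesystem can run out of free inodes (for example, because of millions of tiny files) even while df -h still shows free disk space.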


ls -i Journal.rtf
buse@Buse-PC:/media/buse/Norton$ ls -i ./Journal.rtf
160 ./Journal.rtf
stat Journal.rtf
buse@Buse-PC:/media/buse/Norton$ stat ./Journal.rtf
File: ‘./Journal.rtf’
Size: 22661 Blocks: 48 IO Block: 4096 regular file
Device: 811h/2065d Inode: 160 Links: 1
Access: (0644/-rw-r--r--) Uid: ( 1000/ buse) Gid: ( 1000/ buse)
Access: 2013-05-26 00:00:00.000000000 -0400
Modify: 2013-05-26 17:58:04.000000000 -0400
Change: 2013-05-26 17:58:02.180000000 -0400
Birth: -

How traceroute works


TTL
The time to live value can be thought of as an upper bound on the time that an IP datagram can exist in an internet system.
The TTL field is set by the sender of the datagram, and reduced by every host on the route to its destination. If the TTL field reaches zero before the datagram arrives at its destination, then the datagram is discarded and an ICMP error datagram (Time Exceeded) is sent back to the sender.
The purpose of the TTL field is to avoid a situation in which an undeliverable datagram keeps circulating on an internet system, eventually swamping the system with such immortal datagrams.
In IPv4, time to live (TTL) is an 8-bit field in the IP header.

TRACEROUTE
Traceroute works by sending packets with gradually increasing TTL value, starting with TTL value of one. The first router receives the packet, decrements the TTL value and drops the packet because it then has TTL value zero. The router sends an ICMP Time Exceeded message back to the source.

     +--------+                                          +--------+   
     | SENDER |                                          | TARGET |   
     +--------+                                          +--------+   
         |                                                   ^|     
      [============( Router )=====( Router )=====( Router )==|====]
                  ^              ^              ^            |  
                  | TTL=1        | TTL=2        | TTL=3      | TTL=4  
 Traceroute       |              |              |            |        
 shows these -----+--------------+--------------+------------/       

Traceroute works by increasing the "time-to-live" value of each successive batch of packets sent.
1. The first three packets sent have a time-to-live (TTL) value of one (implying that they are not forwarded by the next router and make only a single hop).
2. The next three packets have a TTL value of 2, and so on. When a packet passes through a host, normally the host decrements the TTL value by one, and forwards the packet to the next host. When a packet with a TTL of one reaches a host, the host discards the packet and sends an "ICMP time exceeded" packet to the sender.
3. The traceroute utility uses these returning packets to produce a list of hosts that the packets have traversed en route to the destination. The three timestamp values returned for each host along the path are the delay (latency) values for each packet in the batch.
4. If a packet does not return within the expected timeout window, a star (asterisk) is traditionally printed. Traceroute may not list the real hosts. It indicates that the first host is at one hop, the second host at two hops, etc. IP does not guarantee that all the packets take the same route. Also note that if the host at hop number N does not reply, the hop will be skipped in the output.
5. On Linux, the traceroute utility by default uses UDP datagrams with destination port numbers from 33434 to 33534.
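
Basic usage examples (example.com is a placeholder destination):
$ traceroute example.com           (default UDP probes on Linux)
$ traceroute -n example.com        (skip reverse DNS lookups for faster output)
# traceroute -I example.com        (use ICMP Echo probes instead of UDP; usually needs root)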

How ping command works

    • The Internet Ping program works much like a sonar echo-location, sending a small packet of information containing an ICMP ECHO_REQUEST to a specified computer, which then sends an ECHO_REPLY packet in return. 
    • The IP address 127.0.0.1 is set by convention to always indicate your own computer.
    • Therefore, a ping to that address will always ping yourself and the delay should be very short. This provides the most basic test of your local communications.

    The ping command is a very common method for troubleshooting the accessibility of devices. It uses a series of Internet Control Message Protocol (ICMP) Echo messages to determine:
    • Whether a remote host is active or inactive.
    • The round-trip delay in communicating with the host.
    • Packet loss.
    The ping command first sends an echo request packet to an address, then waits for a reply. The ping is successful only if:
    • the echo request gets to the destination, and
    • the destination is able to get an echo reply back to the source within a predetermined time called a timeout. The default value of this timeout is two seconds on Cisco routers.
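
    For example:
    • ping -c 4 127.0.0.1 (sends exactly four echo requests to your own machine and then stops)
    • ping -c 4 example.com (tests reachability of, and round-trip delay to, a remote host)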


Friday, 26 June 2015

How SSH (Secure Shell) works

How SSH Works

When you connect through SSH, you will be dropped into a shell session, which is a text-based interface where you can interact with your server. For the duration of your SSH session, any commands that you type into your local terminal are sent through an encrypted SSH tunnel and executed on your server.
The SSH connection is implemented using a client-server model. This means that for an SSH connection to be established, the remote machine must be running a piece of software called an SSH daemon. This software listens for connections on a specific network port, authenticates connection requests, and spawns the appropriate environment if the user provides the correct credentials.
The user's computer must have an SSH client. This is a piece of software that knows how to communicate using the SSH protocol and can be given information about the remote host to connect to, the username to use, and the credentials that should be passed to authenticate. The client can also specify certain details about the connection type they would like to establish.

How SSH Authenticates Users

Clients generally authenticate either using passwords (less secure and not recommended) or SSH keys, which are very secure.
Password logins are encrypted and are easy to understand for new users. However, automated bots and malicious users will often repeatedly try to authenticate to accounts that allow password-based logins, which can lead to security compromises. For this reason, we recommend always setting up SSH key-based authentication for most configurations.
SSH keys are a matching set of cryptographic keys which can be used for authentication. Each set contains a public and a private key. The public key can be shared freely without concern, while the private key must be vigilantly guarded and never exposed to anyone.
To authenticate using SSH keys, a user must have an SSH key pair on their local computer. On the remote server, the public key must be copied to a file within the user's home directory at ~/.ssh/authorized_keys. This file contains a list of public keys, one per line, that are authorized to log into this account.
When a client connects to the host, wishing to use SSH key authentication, it will inform the server of this intent and will tell the server which public key to use. The server then checks its authorized_keys file for the public key, generates a random string, and encrypts it using the public key. This encrypted message can only be decrypted with the associated private key. The server will send this encrypted message to the client to test whether they actually have the associated private key.
Upon receipt of this message, the client will decrypt it using the private key and combine the random string that is revealed with a previously negotiated session ID. It then generates an MD5 hash of this value and transmits it back to the server. The server already had the original message and the session ID, so it can compare an MD5 hash generated by those values and determine that the client must have the private key.
Now that you know how SSH works, we can begin to discuss some examples to demonstrate different ways of working with SSH
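
The simplest example is just opening a session on a remote machine (username and remote_host are placeholders):

ssh username@remote_host
ssh -p 2222 username@remote_host        (connect to an SSH daemon listening on a non-default port)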

Generating and Working with SSH Keys

This section will cover how to generate SSH keys on a client machine and distribute the public key to servers where they should be used. This is a good section to start with if you have not previously generated keys due to the increased security that it allows for future connections.

Generating an SSH Key Pair

Generating a new SSH public and private key pair on your local computer is the first step towards authenticating with a remote server without a password. Unless there is a good reason not to, you should always authenticate using SSH keys.
A number of cryptographic algorithms can be used to generate SSH keys, including RSA, DSA, and ECDSA. RSA keys are generally preferred and are the default key type.
To generate an RSA key pair on your local computer, type:
ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/home/demo/.ssh/id_rsa):
This prompt allows you to choose the location to store your RSA private key. Press ENTER to leave this as the default, which will store them in the .ssh hidden directory in your user's home directory. Leaving the default location selected will allow your SSH client to find the keys automatically.

Copying your Public SSH Key to a Server Manually

If you do not have password-based SSH access available, you will have to add your public key to the remote server manually.
On your local machine, you can find the contents of your public key file by typing:
cat ~/.ssh/id_rsa.pub
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCqql6MzstZYh1TmWWv11q5O3pISj2ZFl9HgH1JLknLLx44+tXfJ7mIrKNxOOwxIxvcBF8PXSYvobFYEZjGIVCEAjrUzLiIxbyCoxVyle7Q+bqgZ8SeeM8wzytsY+dVGcBxF6N4JS+zVk5eMcV385gG3Y6ON3EG112n6d+SMXY0OEBIcO6x+PnUSGHrSgpBgX7Ks1r7xqFa7heJLLt2wWwkARptX7udSq05paBhcpB0pHtA1Rfz3K2B+ZVIpSDfki9UVKzT8JUmwW6NNzSgxUfQHGwnW7kj4jp4AT0VZk3ADw497M2G/12N0PPB5CnhHf7ovgy6nL1ikrygTKRFmNZISvAcywB9GVqNAVE+ZHDSCuURNsAInVzgYo9xgJDW8wUw2o8U77+xiFxgI5QSZX3Iq7YLMgeksaO4rBJEa54k8m5wEiEE1nUhLuJ0X/vh2xPff6SQ1BL/zkOhvJCACK6Vb15mDOeCSq54Cr7kvS46itMosi/uS66+PujOO+xt/2FWYepz6ZlN70bRly57Q06J+ZJoc9FfBCbCyYH7U/ASsmY095ywPsBo1XQ9PqhnN1/YOorJ068foQDNVpm146mUpILVxmq41Cj55YKHEazXGsdBIbXWhcrRf4G2fJLRcGUr9q8/lERo9oxRm5JFX6TCmj6kmiFqv+Ow9gI0x8GvaQ== demo@test
You can copy this value, and manually paste it into the appropriate location on the remote server. You will have to log into the remote server.
On the remote server, create the ~/.ssh directory if it does not already exist:

mkdir -p ~/.ssh

Afterwards, you can create or append the ~/.ssh/authorized_keys file by typing:
echo public_key_string >> ~/.ssh/authorized_keys
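
If you do have password-based SSH access to the server, the ssh-copy-id utility shipped with most OpenSSH client packages automates the steps above:

ssh-copy-id username@remote_host

It appends your local public key to ~/.ssh/authorized_keys on the remote host for you.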

How SSL (Secure Sockets Layer) Works

  • An SSL certificate is a file that binds a cryptographic key to a hostname (ex: google.com).
  • Data sent using an SSL certificate is scrambled and can only be deciphered with a matching decryption key.
  • An SSL certificate file contains two different keys; a private key, and a public key. (Called keys because they act like a key for a door) Public keys are used to encrypt files, like locking a digital door, and private keys let you decipher the encryption, like unlocking the same door.
  • An “SSL handshake” authenticates the website and the web browser.  This is an exchange of data that lets the client (your browser) and a server establish trust to share information.
  • SSL is a protocol that provides privacy and integrity between two communicating applications using TCP/IP. 
  • The Hypertext Transfer Protocol (HTTP) for the World Wide Web uses SSL for secure communications.
  • The data going back and forth between client and server is encrypted using a symmetric algorithm such as DES or RC4. 
  • A public-key algorithm, usually RSA, is used for the exchange of the encryption keys and for digital signatures. 
  • The algorithm uses the public key in the server's digital certificate. With the server's digital certificate, the client can also verify the server's identity. 
  • Versions 1 and 2 of the SSL protocol provide only server authentication. Version 3 adds client authentication, using both client and server digital certificates. 
  • SSL session always begins with an exchange of messages called the SSL handshake.
A simplified overview of how the SSL handshake is processed is given in the steps below.



Explanation
  1. The client sends a client "hello" message that lists the cryptographic capabilities of the client (sorted in client preference order), such as the version of SSL, the cipher suites supported by the client, and the data compression methods supported by the client. The message also contains a 28-byte random number.
  2. The server responds with a server "hello" message that contains the cryptographic method (cipher suite) and the data compression method selected by the server, the session ID, and another random number.
    Note:
    The client and the server must support at least one common cipher suite, or else the handshake fails. The server generally chooses the strongest common cipher suite.
  3. The server sends its digital certificate. (The server uses X.509 V3 digital certificates with SSL.)
    If the server uses SSL V3, and if the server application (for example, the Web server) requires a digital certificate for client authentication, the server sends a "digital certificate request" message. In the "digital certificate request" message, the server sends a list of the types of digital certificates supported and the distinguished names of acceptable certificate authorities.
  4. The server sends a server "hello done" message and waits for a client response.
  5. Upon receipt of the server "hello done" message, the client (the Web browser) verifies the validity of the server's digital certificate and checks that the server's "hello" parameters are acceptable.
    If the server requested a client digital certificate, the client sends a digital certificate, or if no suitable digital certificate is available, the client sends a "no digital certificate" alert. This alert is only a warning, but the server application can fail the session if client authentication is mandatory.
  6. The client sends a "client key exchange" message. This message contains the pre-master secret, a 46-byte random number used in the generation of the symmetric encryption keys and the message authentication code (MAC) keys, encrypted with the public key of the server.
    If the client sent a digital certificate to the server, the client sends a "digital certificate verify" message signed with the client's private key. By verifying the signature of this message, the server can explicitly verify the ownership of the client digital certificate.
    Note:
    An additional process to verify the server digital certificate is not necessary. If the server does not have the private key that belongs to the digital certificate, it cannot decrypt the pre-master secret and create the correct keys for the symmetric encryption algorithm, and the handshake fails.
  7. The client uses a series of cryptographic operations to convert the pre-master secret into a master secret, from which all key material required for encryption and message authentication is derived. Then the client sends a "change cipher spec" message to make the server switch to the newly negotiated cipher suite. The next message sent by the client (the "finished" message) is the first message encrypted with this cipher method and keys.
  8. The server responds with a "change cipher spec" and a "finished" message of its own.
  9. The SSL handshake ends, and encrypted application data can be sent.
  10. An SSL certificate setup typically includes the following:
    1. CA Certificate
    2. Domain certificate
    3. Private key
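
You can watch this handshake and inspect the certificate a server presents with the openssl command-line tool (example.com is a placeholder):

openssl s_client -connect example.com:443 -servername example.com

The output shows the certificate chain sent by the server and the protocol version and cipher suite that were negotiated.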


Difference between ssh and telnet


  • SSH and Telnet commonly serve the same purpose
  • SSH is more secure compared to Telnet
  • SSH encrypts the data while Telnet sends data in plain text
  • SSH supports public key authentication, while Telnet sends any credentials in plain text
  • SSH adds a bit more overhead to the bandwidth compared to Telnet
  • Telnet has been all but replaced by SSH in almost all uses