CO 405
INTERNET SECURITY LAB
DELHI TECHNOLOGICAL UNIVERSITY
Submitted by:
SHALINI
2K15/CO/113
INDEX
1. Case study on Ethical Hacking and Social Engineering
2. Case study on Denial of Service attacks
3. Case study on Sniffing and Spoofing attacks
4. Case study on web-based password capturing techniques
5. Case study on Virus and Trojan attacks
6. Case study on Intrusion Detection and Honey Pots
7. Implementation of the RC4 stream cipher
8. Implementation of S-DES and 3-DES
9. Implementation of the RSA algorithm
10. Case study on IP based authentication
EXPERIMENT : 01
AIM:
To present a case study on network security fundamentals: Ethical Hacking and Social Engineering.
DESCRIPTION:
Hacking generally refers to unauthorized intrusion into a computer or a network. The person engaged in
hacking activities is known as a hacker. This hacker may alter system or security features to accomplish a
goal that differs from the original purpose of the system.
Hacking can also refer to non-malicious activities, usually involving unusual or improvised alterations to
equipment or processes.
CASE STUDY:
ETHICAL HACKING
Ethical hacking and ethical hacker are terms that describe hacking performed to help a company or
individual identify potential threats on the computer or network. An ethical hacker attempts to hack their
way past the system security, finding any weak points in the security that could be exploited by other
hackers. The organization uses what the ethical hacker finds to improve the system security, in an effort
to minimize, if not eliminate, any potential hacker attacks.
In order for hacking to be deemed ethical, the hacker must obey the rules below.
1. You have permission to probe the network and attempt to identify potential security risks. If you are the person performing the tests, it is recommended that you obtain written consent.
2. You respect the individual's or company's privacy and only go looking for security issues.
3. You report all security vulnerabilities you detect to the company, not leaving anything open for you or
someone else to come in at a later time.
4. You let the software developer or hardware manufacturer know of any security vulnerabilities you
locate in their software or hardware if not already known by the company.
The term "ethical hacker" has received criticism at times from people who say that there is no such thing
as an "ethical" hacker. Hacking is hacking, no matter how you look at it and those who do the hacking are
commonly referred to as computer criminals. However, the work that ethical hackers do for organizations
has helped improve system security and can be said to be quite successful. Individuals interested in
becoming an ethical hacker can work towards a certification to become a Certified Ethical Hacker. This
certification is provided by the International Council of E-Commerce Consultants (EC-Council).
Social engineering attacks are based on one thing – information. Without information about your customers, social engineers aren’t able to use the elicitation and pretexting tactics that are described below. This information is relatively simple to obtain. A good social engineer can spend a few hours researching a target online and have enough information to make even the most seasoned contact center agent believe the social engineer is someone they are not. The increasing amount of personal information that’s available through search engines, WHOIS databases, social media (Facebook, LinkedIn, MySpace, Twitter, etc.), blogs, wikis, and photo sharing sites makes it very simple for them to find or determine personal details about a target. Even social security numbers are available from some paid research services.
Once the social engineer has relevant information, they use it in these highly effective human hacking
tactics:
• Elicitation
• Pretexting
Certain corporations employ hackers as part of their support staff. These legitimate hackers use their
skills to find flaws in the company security system, thus preventing identity theft and other
computer-related crimes.
CONCLUSION:
We successfully presented a case study on network security fundamentals: Ethical Hacking and Social Engineering practices.
EXPERIMENT : 02
AIM:
To present a case study on system threat attacks: Denial of Service (DoS) attacks.
DESCRIPTION:
A denial-of-service attack is a security event that occurs when an attacker prevents legitimate users from
accessing specific computer systems, devices, services or other IT resources. Denial-of-service (DoS)
attacks typically flood servers, systems or networks with traffic in order to overwhelm the victim's
resources and make it difficult or impossible for legitimate users to access them.
While an attack that crashes a server can often be dealt with successfully by simply rebooting the system,
flooding attacks can be more difficult to recover from. Recovering from a distributed denial-of-service
(DDoS) attack, in which attack traffic comes from a large number of sources, can be even more difficult.
DoS and DDoS attacks often use vulnerabilities in the way networking protocols handle network traffic; for
example, by transmitting a large number of packets to a vulnerable network service from different Internet
Protocol (IP) addresses in order to overwhelm the service and make it unavailable to legitimate users.
CASE STUDY:
DENIAL OF SERVICE
The goal of a denial of service attack is to deny legitimate users access to a particular resource. An
incident is considered an attack if a malicious user intentionally disrupts service to a computer or network
resource. Denial of service (DoS) attacks have become a major threat to current computer networks. To provide a better understanding of DoS attacks, we describe network-based and host-based DoS attack techniques to illustrate attack principles. DoS attacks are classified according to their major attack
characteristics. Current counterattack technologies are also reviewed, including major defense products
in deployment and representative defense approaches in research. Finally, DoS attacks and defenses in
802.11 based wireless networks are explored at physical, MAC and network layers.
In this section, we give an overview of common DoS and DDoS attack techniques and discuss why these attacks fundamentally succeed.
Attack Techniques
Many attack techniques can be used for DoS purposes as long as they can disable a service, or degrade service performance by exhausting the resources used to provide it. Although it is impossible to enumerate all existing attack techniques, we describe several representative network-based and host-based attacks in this section to illustrate attack principles. Readers can also find complementary
information on DoS attacks in Handley et al. 2006 and Mirkovic et al. 2005.
UDP Flooding
By patching or redesigning the implementation of TCP and ICMP protocols, current networks and
systems have incorporated new security features to prevent TCP and ICMP attacks. Nevertheless,
attackers may simply send a large amount of UDP packets towards a victim. Since an intermediate
network can deliver higher traffic volume than the victim network can handle, the flooding traffic can
exhaust the victim's connection resources. Pure flooding can be done with any type of packets. Attackers
can also choose to flood service requests so that the victim cannot handle all requests with its
constrained resources (i.e., service memory or CPU cycles). Note that UDP flooding is similar to flash
crowds that occur when a large number of users try to access the same server simultaneously. However,
the intent and the triggering mechanisms for DDoS attacks and flash crowds are different.
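To make the flooding idea concrete, the following is a minimal C++ sketch (for lab use only) that generates a small, bounded burst of UDP datagrams toward a local test listener on 127.0.0.1. The target address, the port 9999, the payload size, and the packet count are all assumptions chosen for illustration; a real flooding attack differs mainly in traffic volume and in the number of sending hosts.

    // Minimal UDP traffic generator (lab illustration only).
    // Sends a small, bounded burst of datagrams to a local test listener.
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>
    #include <unistd.h>
    #include <cstdio>
    #include <cstring>

    int main() {
        int sock = socket(AF_INET, SOCK_DGRAM, 0);       // UDP socket
        if (sock < 0) { perror("socket"); return 1; }

        sockaddr_in dst{};
        dst.sin_family = AF_INET;
        dst.sin_port = htons(9999);                      // assumed test port
        inet_pton(AF_INET, "127.0.0.1", &dst.sin_addr);  // local test target only

        char payload[512];
        std::memset(payload, 'A', sizeof(payload));      // dummy payload

        for (int i = 0; i < 1000; ++i) {                 // bounded burst
            sendto(sock, payload, sizeof(payload), 0,
                   (sockaddr*)&dst, sizeof(dst));
        }
        close(sock);
        std::printf("sent 1000 UDP datagrams to 127.0.0.1:9999\n");
        return 0;
    }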
Intermittent Flooding
Attackers can further tune their flooding actions to reduce the average flooding rate to a very low level
while achieving equivalent attack impacts on legitimate TCP connections. In shrew attacks (Kuzmanovic
et al. 2003), attacking hosts can flood packets in a burst to congest and disrupt existing TCP connections.
Since all disrupted TCP connections will wait a specific period (called retransmission-time-out (RTO)) to
retransmit lost packets, attacking hosts can flood packets at the next RTO to disrupt retransmission.
Thereby, attacking hosts can synchronize their flooding at the following RTOs and disable legitimate TCP
connections as depicted in Figure 2. Such collaboration among attacking hosts not only reduces overall
flooding traffic, but also helps avoid detection. Similar attack techniques targeting services with
congestion control mechanisms for Quality of Service (QoS) have been discovered by Guirguis et al.
(2005). When a QoS enabled server receives a burst of service requests, it will temporarily throttle
incoming requests for a period until previous requests have been processed. Thus, attackers can flood
requests at a pace to keep the server throttling the incoming requests and achieve the DoS effect.
Guirguis’s study showed that a burst of 800 requests can bring down a web server for 200 seconds, and thereby the average flooding rate could be as low as 4 requests per second.
FINDING AND LEARNING:
The United States Computer Emergency Readiness Team (US-CERT) provides some guidelines to
determine when a DoS attack may be underway. US-CERT states that the following may indicate such an
attack:
● degradation in network performance, especially when attempting to open files stored on the
network or when accessing websites;
● an inability to reach a particular website;
● difficulty accessing a website; and
● a higher than usual volume of spam email.
CONCLUSION:
We successfully presented a case study on system threat attacks - Denial of Service Attacks.
EXPERIMENT : 03
AIM:
To present a case study on sniffing and spoofing attacks.
DESCRIPTION:
Sniffing and snooping are essentially synonyms; both refer to listening in on a conversation. For example, if you
login to a website that uses no encryption, your username and password can be sniffed off the network by
someone who can capture the network traffic between you and the web site.
Spoofing refers to actively introducing network traffic pretending to be someone else. For example,
spoofing is sending a command to computer A pretending to be computer B. It is typically used in a
scenario where you generate network packets that claim to have originated from computer B while they really originated from computer C. Spoofing in an email context means sending an email pretending to be
someone else.
CASE STUDY:
Packet sniffing and spoofing are the two important concepts in network security; they are two major
threats in network communication. Being able to understand these two threats is essential for
understanding security measures in networking. There are many packet sniffing and spoofing tools, such
as Wireshark, Tcpdump, Netwox, etc. Some of these tools are widely used by security experts, as well as
by attackers. Being able to use these tools is important for students, but what is more important for
students in a network security course is to understand how these tools work, i.e., how packet sniffing and
spoofing are implemented in software. The objective of this lab is for students to master the technologies
underlying most of the sniffing and spoofing tools. Students will play with some simple sniffer and
spoofing programs, read their source code, modify them, and eventually gain an in-depth understanding of the technical aspects of these programs.
Spoofing is an active attack by one machine on another. A dishonest person with less-than-honorable motives represents himself as being someone else or coming from somewhere else. The spoofer appears
to be familiar. It’s a way of gaining access that is otherwise denied to the individual. Perhaps the person
intends to cause problems or perhaps the individual just wants to have a look around where he’s not
supposed to be.
Sniffing refers to the use of software or hardware to watch data as it travels over the Internet. There are
some legitimate uses for the process. It is then called network analysis and helps network administrators
diagnose problems. In the hands of the wrong person, however, a sniffing program can collect passwords
and read email. Sniffing is considered a passive security attack, according to TechiWarehouse.
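As a rough illustration of how sniffing tools such as Tcpdump work underneath, here is a minimal libpcap-based capture sketch in C++ (compile with -lpcap). The interface name "eth0" and the packet count are assumptions; capturing normally requires root privileges and should only be done on networks you are authorized to monitor.

    // Minimal packet capture sketch using libpcap (the library behind tcpdump).
    #include <pcap.h>
    #include <cstdio>

    // Called once per captured packet: print timestamp and length only.
    static void on_packet(u_char *, const struct pcap_pkthdr *hdr, const u_char *) {
        std::printf("captured packet: %ld.%06ld  %u bytes\n",
                    (long)hdr->ts.tv_sec, (long)hdr->ts.tv_usec, hdr->len);
    }

    int main() {
        char errbuf[PCAP_ERRBUF_SIZE];
        // Open "eth0" (assumed interface name) in promiscuous mode.
        pcap_t *handle = pcap_open_live("eth0", BUFSIZ, 1, 1000, errbuf);
        if (!handle) { std::fprintf(stderr, "pcap_open_live: %s\n", errbuf); return 1; }
        pcap_loop(handle, 10, on_packet, nullptr);   // capture 10 packets, then stop
        pcap_close(handle);
        return 0;
    }

A real sniffer would additionally parse the Ethernet, IP, and TCP/UDP headers out of the raw bytes handed to the callback, which is exactly what tools like Wireshark automate.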
Prevention
Security experts suggest that there is no way to reliably detect when your computer has been sniffed. They also advise
that while people can take measures to make sniffing difficult, it may be almost impossible to totally
prevent being sniffed.
Encryption helps. Replacing the hub with a switch may also add protection. Taking care when using
public Wi-Fi may also help reduce exposure.
Consumer Fraud Reporting adds that you can help protect against spoofing by following these
suggestions:
● Don’t click on an email link that requests personal information, even if it looks like a legitimate
site.
● Be suspicious of anyone asking for personal information.
● Don’t send personal information or financial information through a Web site.
If you’ve been caught in a moment of carelessness and provided information you should not have, such
as passwords or personal identification, notify the companies you do business with right away to put a
fraud alert on your account. Also contact Consumer Fraud Reporting, a free service that helps protect
consumers against fraud.
CONCLUSION:
We successfully presented a case study on sniffing and spoofing attacks.
EXPERIMENT : 04
AIM:
To present a case study on techniques used for web-based password capturing.
DESCRIPTION:
Many people don’t understand how easy it is for attackers to take advantage of weak passwords, and
therefore don’t use a password manager or other means to make their passwords stronger. This post
describes 9 common ways passwords get captured, roughly ordered from most to least common. Proper
use of a password manager can thwart some of these attacks and limit damages from most other types of
attacks.
CASE STUDY:
4: Brute Force
Brute Force refers to discovering passwords through trial and error, similar to trying every possible
combination on a lock. The most well known form of brute force attack is for password cracking software
to methodically try millions of passwords against one specific user name on a specific account. A typical weak password can be cracked in less than a day using this method (a minimal sketch of the idea appears after this item). Security-conscious online vendors
like banks or e-mail services provide some protection against such brute force attempts by denying
access if there are too many attempts per hour. However, different forms of brute force can be used to get
around these safeguards. A common example is software which automatically logs in to millions of
different accounts per day by combining popular user names, passwords, and web sites (e.g. try
password1 at Jsmith@gmail.com, 123456 at dj@facebook.com, qwerty at Mrodriguez@yahoo.com, etc.).
As such methods become more widely adopted, it would not be surprising if nearly all accounts with short
user names and short passwords get compromised. Brute force is also used as a supplementary attack
after a first password is captured. For example, if the password badpassword1 was captured by phishing,
brute force can be used to try similar passwords on other accounts.
Protection: Brute force attacks are highly unlikely to crack very strong passwords. So just use strong
passwords. I suggest randomized 15 character jumbles.
Damage Control: Your damages are limited to one account if you have a unique password for each
account. Immediately change the password of the affected account.
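To make the trial-and-error idea in item 4 concrete, the following self-contained C++ sketch brute-forces a 4-digit PIN by trying every candidate against a check function. The stored PIN and the check function are stand-ins invented for this example; a real attack would test candidates against a password hash or a login endpoint.

    // Brute-force illustration: exhaustively try every 4-digit PIN.
    #include <cstdio>
    #include <string>

    // Stand-in for "the system's check" (e.g. a hash comparison or login attempt).
    static bool check_pin(const std::string &guess) {
        const std::string secret = "4279";   // hypothetical stored PIN
        return guess == secret;
    }

    int main() {
        char buf[8];
        for (int candidate = 0; candidate <= 9999; ++candidate) {
            std::snprintf(buf, sizeof(buf), "%04d", candidate);   // "0000" .. "9999"
            if (check_pin(buf)) {
                std::printf("PIN found after %d attempts: %s\n", candidate + 1, buf);
                return 0;
            }
        }
        std::printf("PIN not found\n");
        return 0;
    }

The 4-digit space has only 10,000 candidates; a randomized 15-character password, by contrast, has a search space so large that this kind of exhaustive search becomes hopeless, which is exactly the point of the protection advice above.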
5: Eavesdropping: Keystroke Logger on Your Browser
Many people believe that nothing bad can happen to people who only
visit safe, well respected sites. They are wrong. Malicious JavaScript can be injected into any browser on
any system, visiting any web site. Keystroke logging is something that is done by some of these
JavaScript injections. In most browsers, malicious JavaScript can log keystrokes in all open tabs, until the
browser is closed. Usernames and passwords entered during the session can be captured this way.
Protection: Keystroke logging via browser is growing more common but is unfortunately one of the more
difficult threats to defend against. Defenses include: Use Firefox in conjunction with the NoScript extension. While this is a strong defense, the overall complication of using NoScript (popups, whitelists, and blacklists) is more of a hassle than the average Joe wants to deal with. Some security suites attempt
to defend against this threat with browser plug-ins, but these can dramatically slow down browsing. A
simpler option is to only access the internet using the Google Chrome browser, which is designed so that
malicious JavaScript can be theoretically contained to a single tab. At least other tabs will be safe. Some
password managers such as RoboForm enter passwords and usernames in a way which most JavaScript
keystroke loggers cannot intercept. None of these suggestions are sure to stop browser-based keystroke
loggers, but if you implement one or more of these suggestions you’ll at least reduce your chances of
getting your usernames and passwords logged by malicious JavaScript. The only perfect defense is to not
connect to the internet at all.
Damage Control: Your damages are limited to logins captured while browsing, so long as you have a
unique password for each account. Immediately change the password of the affected accounts. If using a
browser-based or web-based password manager, you should also change your master password.
6: Eavesdropping: Public Wi-Fi Monitoring
Passwords are frequently stolen on public computers and over public Wi-Fi
connections, using free Wi-Fi traffic monitoring software that is simple to operate.
Protection: Never log in to online accounts using a public computer. When using open Wi-Fi hot spots,
you should only log in with your own notebook with services that enforce secure logins and sessions
(HTTPS), perhaps using the Firefox Add-on HTTPS Everywhere to help. It is far safer to access email
and other accounts using your phone data service, if you have one.
Damage Control: If you discover that this type of attack has occurred, then you will need to change the
password for all of your accounts as well as your master password. If you know exactly when the attack
occurred, you can change passwords only for the accounts you used during that session.
CONCLUSION:
We successfully presented a case study on techniques used for web-based password capturing.
EXPERIMENT : 05
AIM:
To present a case study on attacks caused by viruses and trojans.
DESCRIPTION:
A computer virus attaches itself to a program or file enabling it to spread from one computer to another,
leaving infections as it travels. Like a human virus, a computer virus can range in severity: some may
cause only mildly annoying effects while others can damage your hardware, software or files. Almost all
viruses are attached to an executable file, which means the virus may exist on your computer but it
actually cannot infect your computer unless you run or open the malicious program.
It is important to note that a virus cannot spread without human action (such as running an infected program) to keep it going. Because a virus is spread by human action, people will unknowingly continue the spread of a computer virus by sharing infected files or sending emails with viruses as attachments.
CASE STUDY:
Virus: One of the most potent threats faced by computer users is virus attacks, which hamper important work involving data and documents. It is imperative for every computer user to be aware of the software and programs that can help protect personal computers from such attacks. One must
take every possible measure in order to keep the computer systems free from virus attacks. The top
sources of virus attacks are highlighted below:
● Downloadable Programs
● Cracked Software
● Email Attachments
● Internet
● Booting From CD
Trojans: Trojan horse attacks pose one of the most serious threats to computer security. If you were
referred here, you may have not only been attacked but may also be attacking others unknowingly. This
page will teach you how to avoid falling prey to them, and how to repair the damage if you already did.
According to legend, the Greeks won the Trojan war by hiding in a huge, hollow wooden horse to sneak
into the fortified city of Troy. In today’s computer world, a Trojan horse is defined as a “malicious,
security-breaking program that is disguised as something benign”. For example, you download what
appears to be a movie or music file, but when you click on it, you unleash a dangerous program that
erases your disk, sends your credit card numbers and passwords to a stranger, or lets that stranger hijack
your computer to commit illegal denial of service attacks.
The following general information applies to all operating systems, but by far most of the damage is done
to/with Windows users due to its vast popularity and many weaknesses. Linux, MacOS X, and other
operating systems are not as frequently infected, but they are far from immune.
1. Anti-Virus Software: Some of these can handle most of the well known trojans, but none are perfect,
no matter what their advertising claims. You absolutely MUST make sure you have the very latest update
files for your programs, or else they will miss the latest trojans. Compared to traditional viruses, today’s
trojans evolve much quicker and come in many seemingly innocuous forms, so anti-virus software is
always going to be playing catch up. Also, if they fail to find every trojan, anti-virus software can give you
a false sense of security, such that you go about your business not realizing that you are still dangerously
compromised. There are many products to choose from, but the following are generally effective: AVP,
PC-cillin, and McAfee Virus Scan. All are available for immediate downloading typically with a 30 day free
trial. For a more complete review of all major anti-virus programs, including specific configuration
suggestions for each, see the Hack Fix Project’s anti-virus software page. When you are done, make sure you’ve updated Windows with all security patches.
2. Anti-Trojan Programs: These programs are the most effective against trojan horse attacks, because
they specialize in trojans instead of general viruses. A popular choice is The Cleaner, $30 commercial
software with a 30-day free trial. After using it, make sure you’ve updated Windows with all security patches, then change all your passwords because they may have been seen by
every “hacker” in the world.
CONCLUSION:
We successfully presented a case study on different attacks caused by viruses and trojans.
EXPERIMENT : 06
AIM:
To present a case study on intrusion detection systems and the anti-intrusion technique of honey pots.
DESCRIPTION:
An intrusion detection system (IDS) is a device or software application that monitors a network or
systems for malicious activity or policy violations. Any malicious activity or violation is typically reported
either to an administrator or collected centrally using a security information and event management (SIEM)
system. A SIEM system combines outputs from multiple sources, and uses alarm filtering techniques to
distinguish malicious activity from false alarms.
IDS types range in scope from single computers to large networks. The most common classifications are
network intrusion detection systems (NIDS) and host-based intrusion detection systems (HIDS). A
system that monitors important operating system files is an example of an HIDS, while a system that
analyzes incoming network traffic is an example of an NIDS. It is also possible to classify IDS by detection
approach: the most well-known variants are signature-based detection (recognizing bad patterns, such as
malware); and anomaly-based detection (detecting deviations from a model of "good" traffic, which often
relies on machine learning). Some IDS products have the ability to respond to detected intrusions.
Systems with response capabilities are typically referred to as an intrusion prevention system.
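As a toy illustration of signature-based detection, the sketch below scans packet payloads (here, hard-coded sample strings) for known-bad byte patterns and raises an alert on a match. The signature strings and sample payloads are invented for this example; real NIDS engines such as Snort operate on live traffic with far richer rule languages.

    // Toy signature-based detection: flag payloads containing known-bad patterns.
    #include <iostream>
    #include <string>
    #include <vector>

    int main() {
        // Hypothetical signature strings (real IDS rules are far more expressive).
        std::vector<std::string> signatures = {"/etc/passwd", "cmd.exe", "<script>"};

        // Sample "payloads" standing in for captured packet contents.
        std::vector<std::string> payloads = {
            "GET /index.html HTTP/1.1",
            "GET /../../etc/passwd HTTP/1.1",
            "POST /search?q=<script>alert(1)</script>"
        };

        for (const auto &p : payloads) {
            for (const auto &sig : signatures) {
                if (p.find(sig) != std::string::npos) {
                    std::cout << "ALERT: signature \"" << sig
                              << "\" matched in payload: " << p << std::endl;
                }
            }
        }
        return 0;
    }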
CASE STUDY:
Anti-Intrusion Technique: This case study examines the basic underlying principles of intrusion control and distills the universe of anti-intrusion techniques into six high-level, mutually supportive approaches. System and network
intrusions may be prevented, preempted, deflected, deterred, detected, and/or autonomously countered.
This Anti-Intrusion Taxonomy (AINT) of anti-intrusion techniques considers less explored approaches on
the periphery of "intrusion detection" which are independent of the availability of a rich audit trail, as well
as better known intrusion detection techniques. Much like the Open Systems Reference Model supports
understanding of communications protocols by identifying their layer and purpose, the authors believe this
anti-intrusion taxonomy and associated methods and techniques help clarify the relationship between
anti-intrusion techniques described in the literature and those implemented by commercially available
products. The taxonomy may be used to assess computing environments which perhaps already support
Intrusion Detection System (IDS) implementations to help identify useful complementary intrusion defense
approaches.
Honey pot: In computer terminology, a honey pot is a trap set to detect, deflect, or, in some manner,
counteract attempts at unauthorized use of information systems. Generally, a honey pot consists of a
computer, data, or a network site that appears to be part of a network, but is actually isolated and
monitored, and which seems to contain information or a resource of value to attackers. This is similar to
the police baiting a criminal and then conducting undercover surveillance.
Honeypots can be classified based on their deployment and based on their level of involvement.
Based on deployment, honeypots may be classified as:
1. Production honeypots
2. Research honeypots
Production honeypots are easy to use, capture only limited information, and are used primarily by
companies or corporations. They are placed inside the production network with other
production servers by an organization to improve their overall state of security. Normally, production
honeypots are low-interaction honeypots, which are easier to deploy. They give less information about the
attacks or attackers than research honeypots do.
Research honeypots are run to gather information about the motives and tactics of the Blackhat
community targeting different networks. These honeypots do not add direct value to a specific
organization; instead, they are used to research the threats organizations face and to learn how to better
protect against those threats. Research honeypots are complex to deploy and maintain, capture extensive
information, and are used primarily by research, military, or government organizations.
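A low-interaction honeypot can be as simple as a decoy service that accepts connections and records who knocked. The sketch below is a minimal C++ example along those lines: it listens on an unused port (2323 is an arbitrary choice), logs the source address of every connection attempt, sends a fake banner, and closes. Production honeypots do considerably more (protocol emulation, containment, centralized logging).

    // Minimal low-interaction honeypot: log connection attempts to a decoy port.
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>
    #include <unistd.h>
    #include <cstdio>
    #include <cstring>

    int main() {
        int srv = socket(AF_INET, SOCK_STREAM, 0);
        if (srv < 0) { perror("socket"); return 1; }
        int on = 1;
        setsockopt(srv, SOL_SOCKET, SO_REUSEADDR, &on, sizeof(on));

        sockaddr_in addr{};
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port = htons(2323);                     // arbitrary decoy port
        if (bind(srv, (sockaddr*)&addr, sizeof(addr)) < 0) { perror("bind"); return 1; }
        listen(srv, 5);
        std::printf("decoy service listening on port 2323\n");

        const char banner[] = "login: ";                 // fake prompt
        while (true) {
            sockaddr_in peer{};
            socklen_t len = sizeof(peer);
            int c = accept(srv, (sockaddr*)&peer, &len);
            if (c < 0) continue;
            char ip[INET_ADDRSTRLEN];
            inet_ntop(AF_INET, &peer.sin_addr, ip, sizeof(ip));
            std::printf("connection attempt from %s:%d\n", ip, ntohs(peer.sin_port));
            send(c, banner, sizeof(banner) - 1, 0);      // pretend to be a service
            close(c);                                    // then drop the connection
        }
        return 0;
    }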
The metaphor of a bear being attracted to and stealing honey is common in many traditions, including
Germanic and Slavic. A common Slavic word for the bear is medved "honeyeater". The tradition of bears
stealing honey has been passed down through stories and folklore, especially the well-known Winnie the Pooh. The Brazilian folk tale "Boneca de pixe" tells of a thieving monkey being trapped by a puppet made
of pitch.
The earliest honeypot techniques are described in Clifford Stoll's 1989 book The Cuckoo's Egg.
In 2017, Dutch police used honeypot techniques to track down users of the darknet market Hansa.
CONCLUSION:
We successfully presented a case study on the anti-intrusion technique of honey pots.
EXPERIMENT : 07
AIM:
To study and implement the RC4 stream cipher algorithm.
DESCRIPTION:
In cryptography, RC4 (Rivest Cipher 4, also known as ARC4 or ARCFOUR, meaning Alleged RC4) is a stream cipher. While remarkable for its simplicity and speed in software, multiple
vulnerabilities have been discovered in RC4, rendering it insecure. It is especially vulnerable when the
beginning of the output keystream is not discarded, or when nonrandom or related keys are used.
Particularly problematic uses of RC4 have led to very insecure protocols such as WEP.
As of 2015, there is speculation that some state cryptologic agencies may possess the capability to break
RC4 when used in the TLS protocol.[6]
IETF has published RFC 7465 to prohibit the use of RC4 in TLS;
Mozilla and Microsoft have issued similar recommendations.
A number of attempts have been made to strengthen RC4, notably Spritz, RC4A, VMPC, and RC4+.
ALGORITHM:
Initialization of S ---
To begin, the entries of S are set equal to the values from 0 through 255 in ascending order; that is, S[0]
= 0, S[1] = 1, …, S[255] = 255. A temporary vector, T, is also created. If the length of the key K is 256
bytes, then K is transferred to T. Otherwise, for a key of length keylen bytes, the first keylen elements of T
are copied from K and then K is repeated as many times as necessary to fill out T. These preliminary
operations can be summarized as follows:
/* Initialization */
for i = 0 to 255 do
S[i] = i;
T[i] = K[i mod keylen];
Next we use T to produce the initial permutation of S. This involves starting with S[0] and going through to
S[255], and, for each S[i], swapping S[i] with another byte in S according to a scheme dictated by T[i]:
/* Initial Permutation of S */
j = 0;
for i = 0 to 255 do
j = (j + S[i] + T[i]) mod 256;
Swap (S[i], S[j]);
Because the only operation on S is a swap, the only effect is a permutation. S still contains all the
numbers from 0 through 255.
Stream Generation : Once the S vector is initialized, the input key is no longer used. Stream generation
involves starting with S[0] and going through to S[255], and, for each S[i], swapping S[i] with another byte
in S according to a scheme dictated by the current configuration of S. After S[255] is reached, the process
continues, starting over again at S[0]:
/* Stream Generation */
i, j = 0;
while (true)
i = (i + 1) mod 256;
j = (j + S[i]) mod 256;
Swap (S[i], S[j]);
t = (S[i] + S[j]) mod 256;
k = S[t];
To encrypt, XOR the value k with the next byte of plaintext. To decrypt, XOR the value k with the next
byte of ciphertext.
Many stream ciphers are based on linear-feedback shift registers (LFSRs), which, while efficient in
hardware, are less so in software. The design of RC4 avoids the use of LFSRs and is ideal for software
implementation, as it requires only byte manipulations. It uses 256 bytes of memory for the state array,
S[0] through S[255], k bytes of memory for the key, key[0] through key[k-1], and integer variables i, j, and k. Performing a modular reduction of some value modulo 256 can be done with a bitwise AND with 255
(which is equivalent to taking the low-order byte of the value in question).
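The pseudocode above translates almost directly into C++. The sketch below folds the temporary vector T into the key-scheduling loop (using K[i mod keylen] on the fly) and XORs the generated keystream with the data, so the same routine both encrypts and decrypts. The key "Key" and the message "Plaintext" are sample values chosen only so the result can be checked against the widely published RC4 test vector for that pair.

    // RC4: key-scheduling (KSA) plus keystream generation (PRGA) in C++.
    #include <cstdio>
    #include <cstring>
    #include <algorithm>

    // Key-scheduling algorithm: initialize and permute S using key K.
    void rc4_init(unsigned char S[256], const unsigned char *K, int keylen) {
        for (int i = 0; i < 256; i++) S[i] = i;
        int j = 0;
        for (int i = 0; i < 256; i++) {
            j = (j + S[i] + K[i % keylen]) % 256;   // T[i] = K[i mod keylen], folded in
            std::swap(S[i], S[j]);
        }
    }

    // Keystream generation + XOR: the same routine encrypts and decrypts.
    void rc4_crypt(unsigned char S[256], unsigned char *data, int len) {
        int i = 0, j = 0;
        for (int idx = 0; idx < len; idx++) {
            i = (i + 1) % 256;
            j = (j + S[i]) % 256;
            std::swap(S[i], S[j]);
            int t = (S[i] + S[j]) % 256;
            data[idx] ^= S[t];                      // keystream byte k = S[t]
        }
    }

    int main() {
        unsigned char key[] = "Key";                // sample key (assumption)
        unsigned char msg[] = "Plaintext";          // sample message (assumption)
        int len = std::strlen((char*)msg);

        unsigned char S[256];
        rc4_init(S, key, std::strlen((char*)key));
        rc4_crypt(S, msg, len);                     // encrypt in place
        std::printf("Ciphertext: ");
        for (int i = 0; i < len; i++) std::printf("%02X", msg[i]);

        rc4_init(S, key, std::strlen((char*)key));
        rc4_crypt(S, msg, len);                     // decrypt: same operation again
        std::printf("\nRecovered:  %s\n", msg);
        return 0;
    }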
CONCLUSION:
We successfully implemented the RC4 stream cipher algorithm.
EXPERIMENT : 08
AIM:
To implement the S-DES and Triple DES (3-DES) encryption algorithms.
DESCRIPTION:
S-DES
S-DES (Simplified DES) illustrates the overall structure of DES in a reduced form. The S-DES encryption algorithm takes an 8-bit block of
plaintext (example: 10111101) and a 10-bit key as input and produces an 8-bit block of ciphertext as
output. The S-DES decryption algorithm takes an 8-bit block of ciphertext and the same 10-bit key used to
produce that ciphertext as input and produces the original 8-bit block of plaintext.
3-DES
In cryptography, Triple DES (3DES), officially the Triple Data Encryption Algorithm (TDEA or Triple DEA), is a symmetric-key block cipher which applies the DES cipher algorithm three times to each data block. While the government and industry standards abbreviate the algorithm's name as TDES (Triple DES) and TDEA (Triple Data Encryption Algorithm), RFC 1851 referred to it as 3DES from the time it first
promulgated the idea, and this namesake has since come into wide use by most vendors, users, and
cryptographers.
ALGORITHM:
S-DES
KEY GENERATION:
As there are two rounds, we have to generate two keys from the given 10-bit key.
1: Apply permutation function P10 to the 10-bit key
2: Divide the result into two 5-bit halves, L0 and L1
3: Apply a circular left shift (by 1) to both L0 and L1
4: Combine L0 and L1, which forms a 10-bit number
5: Apply permutation function P8 to the result to select 8 of the 10 bits as key K1 (for the first round)
6: Apply a second circular left shift (by 2) to L0 and L1
7: Combine the result, which forms a 10-bit number
8: Apply permutation function P8 to the result to select 8 of the 10 bits as key K2 (for the second round)
ENCRYPTION:
1: Take the 8-bit message text (M) and apply the initial permutation function (IP) to it
2: Divide IP(M) into two nibbles, M0 and M1
3: Apply the function Fk to M0
4: XOR the result with M1 (M1 (+) Fk(M0))
5: Swap the halves (i.e. make M1 the lower nibble and the result of step 4 the higher nibble)
6: Repeat steps 3 and 4 using K2 (the second round)
7: Apply IP-1 to the result to get the encrypted data
FUNCTION Fk
1: Give the 4-bit input to EP (expansion permutation); the result is 8-bit expanded data
2: XOR the 8-bit expanded data with the 8-bit key (K1 for the first round, K2 for the second round)
3: Divide the result into an upper nibble (P0) and a lower nibble (P1)
4: Apply S-box S0 to P0 and S1 to P1, each of which compresses its 4-bit input to a 2-bit output
5: Combine the 2-bit outputs from S0 and S1 to form a 4-bit value
6: Apply permutation function P4 to the 4-bit result
Functions
P10 = 3 5 2 7 4 10 1 9 8 6
P8 = 6 3 7 4 8 5 10 9
P4 = 2 4 3 1
IP = 2 6 3 1 4 8 5 7
IP-1 = 4 1 3 5 7 2 8 6
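Using the P10 and P8 tables above, the key-generation steps can be sketched in C++ as follows. The 10-bit key in main() is a sample value, and the shift amounts (1 for K1, a further 2 for K2) follow the standard S-DES description.

    // S-DES key generation: derive K1 and K2 from a 10-bit key.
    #include <iostream>
    #include <vector>
    using namespace std;

    // Apply a permutation table (1-indexed positions) to a bit vector.
    vector<int> permute(const vector<int>& bits, const vector<int>& table) {
        vector<int> out;
        for (int pos : table) out.push_back(bits[pos - 1]);
        return out;
    }

    // Circular left shift of a half by n positions.
    vector<int> leftShift(const vector<int>& half, int n) {
        vector<int> out(half.size());
        for (size_t i = 0; i < half.size(); ++i)
            out[i] = half[(i + n) % half.size()];
        return out;
    }

    int main() {
        vector<int> key = {1,0,1,0,0,0,0,0,1,0};         // sample 10-bit key
        vector<int> P10 = {3,5,2,7,4,10,1,9,8,6};
        vector<int> P8  = {6,3,7,4,8,5,10,9};

        // Step 1: P10, Steps 2-3: split into halves and shift each by 1.
        vector<int> p10 = permute(key, P10);
        vector<int> L(p10.begin(), p10.begin()+5), R(p10.begin()+5, p10.end());
        vector<int> L1 = leftShift(L, 1), R1 = leftShift(R, 1);

        // Steps 4-5: combine and apply P8 -> K1.
        vector<int> comb1 = L1; comb1.insert(comb1.end(), R1.begin(), R1.end());
        vector<int> K1 = permute(comb1, P8);

        // Steps 6-8: shift the halves by 2 more, combine, apply P8 -> K2.
        vector<int> L2 = leftShift(L1, 2), R2 = leftShift(R1, 2);
        vector<int> comb2 = L2; comb2.insert(comb2.end(), R2.begin(), R2.end());
        vector<int> K2 = permute(comb2, P8);

        cout << "K1 = "; for (int b : K1) cout << b; cout << endl;
        cout << "K2 = "; for (int b : K2) cout << b; cout << endl;
        return 0;
    }

For the sample key 1010000010, this should produce K1 = 10100100 and K2 = 01000011, matching the standard S-DES worked example.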
3-DES
1. Encrypt the plaintext blocks using single DES with key K1.
2. Now decrypt the output of step 1 using single DES with key K2.
3. Finally, encrypt the output of step 2 using single DES with key K3.
4. The output of step 3 is the ciphertext.
5. Decryption of a ciphertext is the reverse process: the user first decrypts using K3, then encrypts with K2, and finally decrypts with K1.
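The five steps above amount to the encrypt-decrypt-encrypt (EDE) composition C = E_K3(D_K2(E_K1(P))). The sketch below shows only that key ordering; the block primitives are deliberately trivial placeholders (XOR with the key), not real DES, so the example stays self-contained and runnable.

    // EDE (encrypt-decrypt-encrypt) structure of 3-DES, with placeholder primitives.
    #include <cstdint>
    #include <cstdio>

    // Placeholder "block cipher": XOR with the key (NOT real DES; used only so the
    // key ordering of the EDE construction can be demonstrated end to end).
    static uint64_t toy_encrypt(uint64_t block, uint64_t key) { return block ^ key; }
    static uint64_t toy_decrypt(uint64_t block, uint64_t key) { return block ^ key; }

    // C = E_K3( D_K2( E_K1( P ) ) )
    static uint64_t tdes_encrypt(uint64_t p, uint64_t k1, uint64_t k2, uint64_t k3) {
        return toy_encrypt(toy_decrypt(toy_encrypt(p, k1), k2), k3);
    }

    // P = D_K1( E_K2( D_K3( C ) ) )  -- the reverse process from step 5
    static uint64_t tdes_decrypt(uint64_t c, uint64_t k1, uint64_t k2, uint64_t k3) {
        return toy_decrypt(toy_encrypt(toy_decrypt(c, k3), k2), k1);
    }

    int main() {
        uint64_t p = 0x0123456789ABCDEFULL;               // sample block
        uint64_t k1 = 0x1111, k2 = 0x2222, k3 = 0x3333;   // sample keys
        uint64_t c = tdes_encrypt(p, k1, k2, k3);
        uint64_t r = tdes_decrypt(c, k1, k2, k3);
        std::printf("ciphertext = %016llx\nrecovered  = %016llx\n",
                    (unsigned long long)c, (unsigned long long)r);
        return 0;
    }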
The electronic payment industry uses Triple DES and continues to develop and promulgate standards
based upon it (e.g. EMV).
Microsoft OneNote, Microsoft Outlook 2007 and Microsoft System Center Configuration Manager 2012
use Triple DES to password protect user content and system data. Firefox and Mozilla Thunderbird use
Triple DES in CBC mode to encrypt website authentication login credentials when using a master
password.
CONCLUSION:
We successfully implemented the S-DES and Triple DES (3-DES) encryption algorithms.
EXPERIMENT : 09
AIM:
To implement the RSA public-key encryption and decryption algorithm.
DESCRIPTION:
The RSA algorithm was invented by Ronald L. Rivest, Adi Shamir, and Leonard Adleman in 1977 and
released into the public domain on September 6, 2000. Public-key systems–or asymmetric
cryptography–use two different keys with a mathematical relationship to each other. Their protection relies
on the premise that knowing one key will not help you figure out the other. The RSA algorithm uses the
fact that it’s easy to multiply two large prime numbers together to get a product, but computationally infeasible to take that product and recover the two original primes. The public key and private key are carefully generated using the RSA algorithm; they
can be used to encrypt information or sign it.
ALGORITHM:
Key generation
1) Pick two large prime numbers p and q, p != q;
2) Calculate n = p × q;
3) Calculate ø (n) = (p − 1)(q − 1);
4) Pick e, so that gcd(e, ø (n)) = 1, 1 < e < ø (n);
5) Calculate d, so that d · e mod ø (n) = 1, i.e., d is the multiplicative inverse of e in mod ø (n);
6) Get public key as KU = {e, n};
7) Get private key as KR = {d, n}.
Encryption
For plaintext block P < n, its ciphertext C = P^e (mod n).
Decryption
For ciphertext block C, its plaintext is P = C^d (mod n).
CODE:
#include<stdio.h>
#include<math.h>
#include <algorithm>
#include <iostream>
using namespace std;

// Returns true if x has a fractional part, false if it is a whole number
bool checkFraction(double x)
{
    if(x == (long long int)x) return 0;
    else return 1;
}

int main()
{
    // Small primes chosen for illustration only; real RSA uses very large primes
    long long int p = 3, q = 11;
    long long int n = p * q;                 // modulus n = p * q
    long long int phi = (p - 1) * (q - 1);   // ø(n) = (p - 1)(q - 1)
    long long int e = 7;                     // public exponent with gcd(e, ø(n)) = 1

    // Find d with d*e mod ø(n) = 1 by searching k such that d = (1 + k*ø(n))/e is an integer
    long long int k = 0;
    double d = (1 + ((double)k*(double)phi))/(double)e;
    while(checkFraction(d))
    {
        k++;
        d = (1 + ((double)k*(double)phi))/(double)e;
    }
    cout<<"d = "<<d<<endl;

    // Message to be encrypted (must be smaller than n)
    long long int msg;
    cout<<"Enter Message to be sent\n";
    cin>>msg;
    cout<<"Message data = "<<msg;

    // Encryption c = (msg ^ e) % n
    double c = pow((double)msg, (double)e);
    c = fmod(c, (double)n);
    cout<<"\nEncrypted data = "<<c;

    // Decryption m = (c ^ d) % n
    double m = pow(c, d);
    m = fmod(m, (double)n);
    cout<<"\nOriginal Message Sent = "<<m<<endl;
    return 0;
}
OUTPUT:
The idea of RSA is based on the fact that it is difficult to factorize a large integer. The public key consists
of two numbers where one number is multiplication of two large prime numbers. And private key is also
derived from the same two prime numbers. So if somebody can factorize the large number, the private
key is compromised. Therefore encryption strength totally lies on the key size and if we double or triple
the key size, the strength of encryption increases exponentially. RSA keys are typically 1024 or 2048 bits long. Experts believe that 1024-bit keys could be broken in the near future, but so far this remains an infeasible task.
CONCLUSION:
We successfully implemented the RSA algorithm for encryption and decryption.
EXPERIMENT : 10
AIM:
To present a case study on IP based authentication.
DESCRIPTION:
IP address authentication is the traditional method of identifying users requesting access to vendor
databases. Users gain access based on their computer or site IP address (numerical address),
eliminating the need for user IDs and passwords. Since only the library's EBSCO host administrator can
add a user's IP address using EBSCO admin, this ensures that access is limited. Proxy servers are being
increasingly deployed at organizations for performance benefits; however, there still exist drawbacks in the ease of client authentication in interception proxy mode, mainly for open source proxy servers. Technically, interception mode is not designed for client authentication, but implementations in certain organizations do require this feature. In this case study, we focus on the World Wide Web, highlight the existing transparent proxy authentication mechanisms and their drawbacks, and describe an authentication scheme for transparent proxy users that uses external scripts based on the client's Internet Protocol address. This authentication mechanism has been implemented and verified on Squid, one of the most widely used open source HTTP proxy servers. The reach of internet connectivity has grown
tremendously over the last decade, and with such a growing demand, there has been an increase in the
access and response time of the World Wide Web. Increase in bandwidth has not necessarily helped in
decreasing the access time. This has prompted organizations to deploy proxy servers which would cache
Internet resources for re-use by the set of clients connected to a network.
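As a rough sketch of IP address based authentication of the kind described above, the program below reads client IP addresses one per line (as an external authentication helper typically would receive them from the proxy) and answers OK or ERR against a hard-coded allow list. The allow-list entries and the line-based OK/ERR convention are illustrative assumptions; wiring such a helper into Squid would go through Squid's external ACL helper interface and its configuration directives.

    // Illustrative IP-based access check: read an IP per line, reply OK or ERR.
    #include <iostream>
    #include <set>
    #include <string>

    int main() {
        // Hypothetical allow list of authorized client IP addresses.
        const std::set<std::string> allowed = {
            "10.0.0.5", "10.0.0.17", "192.168.1.42"
        };

        std::string ip;
        while (std::getline(std::cin, ip)) {          // one client IP per request line
            if (allowed.count(ip))
                std::cout << "OK" << std::endl;       // grant access
            else
                std::cout << "ERR" << std::endl;      // deny access
        }
        return 0;
    }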
CASE STUDY:
IP security refers to security mechanisms implemented at the IP (Internet Protocol) Layer to ensure
integrity, authentication and confidentiality of data during transmission in the open Internet environment.
The primary objective of recent work in this area, mainly by members in the IETF IP Security (IPsec)
working group is to improve the robustness of the cryptographic key-based security mechanisms at IP
layer for users who request security.
CONCLUSION:
We successfully presented a case study on IP based authentication.