Which of the following techniques is used to verify the integrity of the message?


    A message digest is used to ensure the integrity of a message transmitted over an insecure channel (one on which the content of the message can be changed). The message is passed through a cryptographic hash function, which creates a compressed image of the message called a digest.

    Let's assume Alice sends a message-and-digest pair to Bob. To check the integrity of the message, Bob runs the cryptographic hash function on the received message and gets a new digest. Bob then compares the new digest with the digest Alice sent. If both are the same, Bob can be sure the original message has not been changed.
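The comparison Bob performs can be sketched with Python's standard hashlib (SHA-256 is used here as the hash function; the messages are made up for the example, and the scheme works the same with any cryptographic hash):

```python
import hashlib

def make_digest(message: bytes) -> str:
    # Compress the message into a fixed-length digest.
    return hashlib.sha256(message).hexdigest()

# Alice sends the message together with its digest.
message = b"Transfer 100 USD to Bob"
digest = make_digest(message)

# Bob recomputes the digest over what he received and compares.
received = b"Transfer 100 USD to Bob"
assert make_digest(received) == digest  # digests match: message unchanged

# If an attacker altered the message in transit, the digests differ.
tampered = b"Transfer 900 USD to Eve"
assert make_digest(tampered) != digest  # mismatch reveals the change
```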



    This message and digest pair is equivalent to a physical document and fingerprint of a person on that document. Unlike the physical document and the fingerprint, the message and the digest can be sent separately.

    • Most importantly, the digest itself must remain unchanged during transmission.
    • The cryptographic hash function is a one-way function, that is, one that is practically infeasible to invert. It takes a message of variable length as input and creates a digest / hash / fingerprint of fixed length, which is used to verify the integrity of the message.
    • A message digest ensures the integrity of the document. To provide authenticity as well, the digest is encrypted with the sender's private key; the result is called a digital signature, which can be decrypted by any receiver who has the sender's public key. The receiver can then authenticate the sender and also verify the integrity of the sent message.

    Example:
    The hash algorithm MD5 was widely used to check the integrity of messages. MD5 divides the message into blocks of 512 bits and creates a 128-bit digest (typically written as 32 hexadecimal digits). It is no longer considered reliable, as researchers have demonstrated techniques for generating MD5 collisions easily on commodity computers; its weaknesses were exploited by the Flame malware in 2012.

    Because of the insecurities of the MD5 hash algorithm, the Secure Hash Algorithm (SHA) family is recommended in its place.
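The digest sizes mentioned above are easy to confirm with Python's standard hashlib (the input string is arbitrary; MD5 is shown only for illustration, not for new designs):

```python
import hashlib

msg = b"The quick brown fox jumps over the lazy dog"

md5 = hashlib.md5(msg).hexdigest()
sha256 = hashlib.sha256(msg).hexdigest()

print(len(md5))     # 32 hex digits = 128 bits
print(len(sha256))  # 64 hex digits = 256 bits
```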

    Implementation:
    MD5 hash in Java

    Related GATE Questions:
    GATE-CS-2014-(Set-1)
    GATE-CS-2016 (Set 1)

    Encrypting Private Data

    In Hacking the Code, 2004

    Working with Hashing Algorithms

    Summary: Hashing algorithms are one-way functions used to verify the integrity of data
    Threats: Information leakage, data corruption, man-in-the-middle attacks, brute-force attacks

    Even though encryption is important for protecting data, sometimes it is important to be able to prove that no one has modified the data. This you can do with hashing algorithms. A hash is a one-way function that transforms data in such a way that, given a hash result (sometimes called a digest), it is computationally infeasible to produce the original message. Besides being one-way, hash functions have some other basic properties:

    They take an input of any length and produce an output of a fixed length.

    They should be efficient and fast to compute.

    They should be computationally infeasible to invert.

    They should be strongly collision free.

    A hash function takes input of any length and produces a fixed-length string. That means you can use hashes on something as small as a password or as large as an entire document. The hashing algorithms the .NET Framework provides are very efficient and fast, making them useful for many applications. The most important property of a hash function is the size of its hash: a larger hash makes it more difficult to invert the function and makes collisions far less likely (no fixed-length hash can be literally collision free).

    Because hash functions have a fixed output but unlimited inputs, multiple values can produce the same hash. However, because there are so many possible hash values, it is extremely difficult to find two inputs that do produce hashes that match. For that reason, hashes are like a fingerprint for the original data. If the data changes, the fingerprint will no longer match, and it is unlikely that any other useful data will produce the same fingerprint. Therefore, you can store these small fingerprints, or hashes, to later verify your data's integrity.
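The fingerprint behavior is easy to demonstrate: changing even one character of the input produces a completely different digest (a sketch using SHA-256 from Python's hashlib; the inputs are made up):

```python
import hashlib

a = hashlib.sha256(b"Pay 100 dollars").hexdigest()
b = hashlib.sha256(b"Pay 900 dollars").hexdigest()

assert a != b  # one changed character, entirely different fingerprint
# Roughly half of the output bits flip (the avalanche effect),
# so the two digests share no useful structure.
matching = sum(x == y for x, y in zip(a, b))
print(matching, "of", len(a), "hex digits match")
```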

    Another common use for a hash is to demonstrate knowledge of a piece of information without actually disclosing that information. For example, to prove you know a password, you could send the actual password, or you could produce and send the hash of that password. This is useful for Web site authentication, because the server does not have to store the actual password; it needs only the hash.

    The .NET Framework supports the hashing algorithms shown in Table 4.3.

    Table 4.3. Hashing Algorithms Available in the .NET Framework

    Name     | Class                                  | Hash Length
    MD5      | MD5CryptoServiceProvider               | 128 bits
    SHA-1    | SHA1CryptoServiceProvider, SHA1Managed | 160 bits
    SHA-256  | SHA256Managed                          | 256 bits
    SHA-384  | SHA384Managed                          | 384 bits
    SHA-512  | SHA512Managed                          | 512 bits
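The hash lengths in Table 4.3 are properties of the algorithms themselves, not of the .NET classes; the same sizes can be confirmed with Python's hashlib:

```python
import hashlib

# digest_size is in bytes; multiply by 8 for the bit lengths in Table 4.3.
for name in ("md5", "sha1", "sha256", "sha384", "sha512"):
    h = hashlib.new(name)
    print(name, h.digest_size * 8, "bits")
# md5 128, sha1 160, sha256 256, sha384 384, sha512 512
```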

    The MD5 algorithm, defined in RFC 1321, is probably the most well-known and widely used hash function. It is the fastest of all the .NET hashing algorithms, but it uses a smaller 128-bit hash value, making it the most vulnerable to attack over the long term. MD5 has been shown to have some partial collisions and is not likely to be able to withstand future attacks as hardware capabilities increase. Nevertheless, for now it is the most commonly used hashing algorithm.

    SHA is an algorithm designed by the National Security Agency (NSA) and published by NIST as FIPS PUB 180. Designed for use with the Digital Signature Standard (DSS), SHA produces a 160-bit hash value.

    The original SHA specification published in 1993 was withdrawn shortly thereafter by the NSA and superseded by the revised FIPS PUB 180-1, commonly referred to as SHA-1. The NSA's reason for withdrawing the original specification was to correct a flaw in the original algorithm that reduced its cryptographic security. However, the NSA never gave details of this flaw, prompting researchers to closely examine both algorithms. Because of this close scrutiny, SHA-1 is widely considered to be quite secure.

    The NIST has since published three variants of SHA-1 that produce larger hashes: SHA-256, SHA-384, and SHA-512. Although with the larger hash sizes these algorithms should be more secure, they have not undergone as much analysis as SHA-1. Nevertheless, the hash length is important to protect from brute-force and birthday attacks.

    Hacking the Code …

    About Birthday Attacks

    Birthday attacks exploit a property of hashing algorithms based on a concept called the Birthday Paradox. In a room of 253 people, there would be about a 50 percent chance of one of them sharing your birthday. However, if you wanted a 50 percent chance of finding any two people with matching birthdays, you would surprisingly need only 23 people in the room. For hashing functions, this means that it is much easier to find any two inputs that collide if you don't care which two they are. It is possible to precompute hashes for a given password length to determine whether any collisions occur.
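Both birthday figures can be checked directly with a few lines of arithmetic (ignoring leap years, as the paradox usually does):

```python
from math import prod

# Chance that at least one of n other people shares *your* birthday.
def shares_yours(n: int) -> float:
    return 1 - (364 / 365) ** n

# Chance that *any* two of n people share a birthday.
def any_pair(n: int) -> float:
    return 1 - prod((365 - i) / 365 for i in range(n))

print(shares_yours(253))  # first crosses 50% at n = 253
print(any_pair(23))       # already about 0.507 at n = 23
```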

    Verifying Integrity

    You can use hashes to verify integrity, but many developers use them incorrectly, undoing their effectiveness. For example, many Web sites allow you to download a file as well as the MD5 checksum for that file. They do this so that you can verify the integrity of the file, but you are downloading the checksum from the same location and over the same connection as the file itself. If you don't trust the file enough to actually need to verify the hash, how can you trust the hash that came from the same location? If someone is able to modify the file, they could just as easily compute and save a new hash.

    TIP

    To verify the integrity of file downloads, many Web sites provide an MD5 sum as well as a PGP signature of the sum. The MD5 sum verifies integrity, and the PGP signature proves that the MD5 sum is authentic.

    Hashes are useful if you keep them private to verify data such as a cookie. For example, suppose you write a cookie to the client's browser and store the hash of that cookie in your database. When the client returns that cookie at a later time, you can compute the hash and compare that to the one stored in the database to verify that it has not changed. Since ASP.NET stores session and authentication tokens entirely in the cookie and not on the server, it computes a hash of the cookie data and encrypts both the data and the hash. This encrypted result is encoded and saved in a cookie on the client side. When the client returns the cookie data, the server decrypts the string and verifies the hash. In this way, ASP.NET protects the hash and protects the privacy of the data.

    Another way to make hashes more secure is to use a keyed hash algorithm. Keyed hashes are similar to regular hashes except that the hash is based on a secret key. To verify the hash or to create a fake hash, you need to know that key. The .NET Framework provides two keyed hashing algorithms:

    HMACSHA1: This function produces a hash-based message authentication code based on the SHA-1 hashing algorithm. HMACSHA1 combines the original message and the secret key and uses SHA-1 to create a hash. It then combines that hash again with the secret key and creates a second SHA-1 hash. Like SHA-1, the HMACSHA1 algorithm produces a 160-bit hash.

    MACTripleDES: This algorithm uses TripleDES to encrypt the message, discarding all but the final 64 bits of the ciphertext.

    With keyed hashing algorithms, you can send the hash with the data, but you must keep the key secret. Note that this method does have limitations similar to the key exchange issues of symmetric cryptography. Figures 4.17 and 4.18 demonstrate using the HMACSHA1 function.
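The same pattern as Figures 4.17 and 4.18 can be sketched with Python's standard hmac module (HMAC-SHA1, matching HMACSHA1; the key and message here are made up for the example):

```python
import hashlib
import hmac

key = b"a secret key shared by both parties"
message = b"important data whose integrity matters"

# Sender: derive a 160-bit keyed hash over the message.
tag = hmac.new(key, message, hashlib.sha1).hexdigest()
print(len(tag) * 4, "bit tag")  # 160, like SHA-1 itself

# Receiver: recompute with the shared key and compare in constant time.
check = hmac.new(key, message, hashlib.sha1).hexdigest()
assert hmac.compare_digest(tag, check)

# Without the key, an attacker cannot produce a valid tag,
# even for the unmodified message.
forged = hmac.new(b"wrong key", message, hashlib.sha1).hexdigest()
assert not hmac.compare_digest(tag, forged)
```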


    Figure 4.17. Keyed Hashing Using HMACSHA1: C#


    Figure 4.18. Keyed Hashing Using HMACSHA1: VB.NET

    Hashing Passwords

    Another important use for hashes is storing passwords. As described in Chapter 1, you should not store actual passwords in your database. Using hashing algorithms, you can store the hash and use that to authenticate the user. Because it is highly unlikely that two passwords would produce the same hash, you can compare the stored hash with a hash of the password submitted by the user. If the two match, you can be sure that the user has the correct password.

    Protecting passwords with hashes has some unique problems. First, although hashes are not reversible, they are crackable using brute force. You cannot produce the password from the hash, but you can create hashes of millions of candidate passwords until you find one that matches. For this reason, the hash's strength is based not so much on the output length of the hashing algorithm as on the length and unpredictability of the password itself. And because passwords have such low entropy, are predictable, and are often too short, this usually is not a difficult task.

    Another problem with hashes is that the same data will always produce the same hash. This can be a problem if someone ever obtains the hashes, because they can use a precomputed dictionary of hashes to instantly discover common passwords. To prevent this situation, we can add a salt to the password to ensure a different hash each time. The salt should be a large random number uniquely generated for that purpose. You do not need to keep the salt private, so you can save the salt with the hash itself.

    When you use a salt, there are as many possible hashes for any given piece of data as there are possible salt values (2^n for an n-bit salt). Of course, if intruders have access to the hashes, they also have access to the salts, but the key here is to force the attacker to compute each hash individually and not gain any benefit from passwords he or she has already cracked. Figures 4.19 and 4.20 show hashing algorithms that include salts.
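The salted scheme of Figures 4.19 and 4.20 looks roughly like this in Python (a sketch using PBKDF2 from the standard library, which also adds a many-iteration slowdown that plain salted hashing lacks; the function and password are illustrative):

```python
import hashlib
import secrets

def hash_password(password: str, salt=None):
    # A fresh random salt guarantees a different hash for identical passwords.
    if salt is None:
        salt = secrets.token_bytes(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

# Store (salt, digest); the salt need not be kept secret.
salt, stored = hash_password("correct horse battery staple")

# Verification: rehash the submitted password with the stored salt.
_, candidate = hash_password("correct horse battery staple", salt)
assert candidate == stored

# The same password with a different salt produces a different digest,
# defeating precomputed dictionaries of common-password hashes.
salt2, other = hash_password("correct horse battery staple")
assert other != stored
```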


    Figure 4.19. Hashing with a Salt: C#


    Figure 4.20. Hashing with a Salt: VB.NET

    You might think that a salt is similar to an IV. In fact, it is essentially the same technique that accomplishes the same purpose. Note that it is also similar in function to a keyed hash algorithm, and a keyed function such as HMACSHA1 is an excellent replacement for the code in Figure 4.20. To use a keyed hash, simply use the salt in place of the key, and otherwise follow the sample code in Figure 4.19.

    Security Policy

    Use hashing algorithms to verify integrity and store passwords.

    For data verification, you can allow others to view a hash, but you must protect it from being modified.

    Use keyed hashing algorithms to protect the hash from being modified.

    For password authentication, keep the hashes secret to prevent brute-force attacks.

    Add salt to a hash to ensure randomness.


    URL: https://www.sciencedirect.com/science/article/pii/B9781932266658500370

    Security Controls and Services

    Evan Wheeler, in Security Risk Management, 2011

    Cryptography

    Typically, encryption is the implementation most readily associated with cryptography, but the discipline also encompasses mechanisms to verify integrity, provide nonrepudiation, and strongly validate identity. Seen broadly, cryptography provides many of the essential tools for security controls that preserve the integrity, accountability, and confidentiality of data. The two most basic functions in cryptography are encryption and hashing; all the other complex tools and mechanisms in modern security controls leverage these functions either individually or in combination. For instance, a standard digital certificate in a Public Key Infrastructure (PKI) binds an identity to a key pair: the private key is used to decrypt incoming communications and digitally sign outgoing communications, whereas the public key is used by other parties to encrypt communications to you. There are, of course, additional attributes of certificates beyond these two functions, but the underlying primitives remain encryption and hashing. Hashing is essentially a one-way function, meaning its output cannot feasibly be reversed; it is typically used to protect sensitive data that doesn't need to be viewed in its raw form, such as a password or Social Security number in a database.

    The basic process of encryption begins with a plaintext (or cleartext) value, which is run through an algorithm that renders it unrecognizable from its original form. The result of the encryption function is the cipher text. Depending on the methods used for encryption, the length of the cipher text may vary from the original or it may use a different representation of the values. In a simple example, characters might be converted to numeric digits. The function of decryption would then take the cipher text as input and result in the recovered plaintext value. For both the encryption and decryption functions to work properly, a key value must also be used to differentiate between instances of the encryption algorithm. Access to the key(s) is limited to the parties involved in the communications, and along with the identity of the algorithm being used, the key serves as the unique value needed to unlock the cipher text. Many commonly used methods for achieving encryption systems in enterprise environments rely on symmetric and asymmetric models.
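As a toy illustration of the plaintext → ciphertext → recovered-plaintext process under a key, here is a XOR stream sketch (deliberately simplified and NOT a real cipher; production systems would use a vetted algorithm such as AES):

```python
from itertools import cycle

def xor_cipher(data: bytes, key: bytes) -> bytes:
    # XOR is its own inverse: the same function encrypts and decrypts,
    # so both parties need only share the key.
    return bytes(b ^ k for b, k in zip(data, cycle(key)))

plaintext = b"attack at dawn"
key = b"\x13\x37\x42"

ciphertext = xor_cipher(plaintext, key)
recovered = xor_cipher(ciphertext, key)

assert ciphertext != plaintext   # unrecognizable without the key
assert recovered == plaintext    # decryption restores the original
```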

    The symmetric model predated the asymmetric model, and in some ways, it is simpler and faster because the same key is used for both encryption and decryption. However, distribution of the secret key can be complicated because each end of the communication must be given the key without interception. There are easy solutions between two parties; but as information is shared with three (or 50!) people, the challenge becomes evident. In fact, key distribution and management are the primary challenges for any encryption system. This model is best used when there are large volumes of data or high performance is needed.

    The asymmetric model requires two distinct keys: the public and private keys. The public key is freely distributed, whereas the private key must be highly protected and is the essence of identity assertion. For the most part, the mutual creation of keys is easier with asymmetric encryption; however, the distribution systems themselves, like a PKI environment, can be just as complicated. This model is commonly used in e-mail and SSL technology.


    URL: https://www.sciencedirect.com/science/article/pii/B9781597496155000074

    Security component fundamentals for assessment

    Leighton Johnson, in Security Controls Evaluation, Testing, and Assessment Handbook (Second Edition), 2020

    Audit and accounting

    Most, if not all, of the guidance for the Audit and Accountability family of controls can be found in SP 800-92, Guide to Computer Security Log Management.

    Log management

    A log is a record of the events occurring within an organization's systems and networks. Logs are composed of log entries; each entry contains information related to a specific event that has occurred within a system or network. Many logs within an organization contain records related to computer security. These computer security logs are generated by many sources, including security software such as antivirus software, firewalls, and intrusion detection and prevention systems; operating systems on servers, workstations, and networking equipment; and applications. Logs are emitted by network devices, operating systems, applications, and all manner of intelligent or programmable devices. The entries in a log often comprise a time-sequenced stream of messages. Logs may be directed to files and stored on disk, or directed as a network stream to a log collector. Log messages must usually be interpreted with respect to the internal state of their source (e.g., an application), and they announce security-relevant or operations-relevant events (e.g., a user login or a system error).

    A fundamental problem with log management that occurs in many organizations is effectively balancing a limited quantity of log management resources with a continuous supply of log data. Log generation and storage can be complicated by several factors, including a high number of log sources; inconsistent log content, formats, and timestamps among sources; and increasingly large volumes of log data. Log management also involves protecting the confidentiality, integrity, and availability of logs. Another problem with log management is ensuring that security, system, and network administrators regularly perform effective analysis of log data. This publication provides guidance for meeting these log management challenges.

    Originally, logs were used primarily for troubleshooting problems, but logs now serve many functions within most organizations, such as optimizing system and network performance, recording the actions of users, and providing data useful for investigating malicious activity. Logs have evolved to contain information related to many different types of events occurring within networks and systems. Within an organization, many logs contain records related to computer security; common examples of these computer security logs are audit logs that track user authentication attempts and security device logs that record possible attacks.

    The Special Publication 800-92 defines the criteria for logs, log management, and log maintenance in the following control areas:

    Auditable Events

    Content of Audit Records

    Audit Storage Capacity

    Response to Audit Processing Failures

    Audit Review, Analysis, and Reporting

    Audit Reduction and Report Generation

    Time Stamps

    Audit Record Retention

    Audit Generation

    The SP defines the four parts of log management as:

    1. Log Management
       a. Log Sources
       b. Analyze Log Data
       c. Respond to Identified Events
       d. Manage Long-Term Log Data Storage
    2. Log Sources
       a. Log Generation
       b. Log Storage and Disposal
       c. Log Security
    3. Analyzing Log Data
       a. Gaining an Understanding of Logs
       b. Prioritizing Log Entries
       c. Comparing System-Level and Infrastructure-Level Analysis
       d. Respond to Identified Events
    4. Manage Long-Term Log Data Storage
       a. Choose Log Format for Data to Be Archived
       b. Archive the Log Data
       c. Verify Integrity of Transferred Logs
       d. Store Media Securely
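Verifying the integrity of transferred logs is commonly implemented with a digest chain, where each entry's hash covers the previous hash, so an edit or truncation anywhere invalidates every later link (a minimal sketch; the scheme and sample entries are illustrative, not prescribed by SP 800-92):

```python
import hashlib

def chain_logs(entries):
    # Each link hashes the previous link plus the current entry,
    # so modifying or removing any entry changes all later links.
    links = []
    prev = b"\x00" * 32
    for entry in entries:
        prev = hashlib.sha256(prev + entry).digest()
        links.append(prev)
    return links

entries = [b"user alice logged in",
           b"file /etc/passwd read",
           b"user alice logged out"]
links = chain_logs(entries)

# Verification recomputes the chain and compares the final link.
assert chain_logs(entries)[-1] == links[-1]

# Tampering with an early entry changes the final link.
tampered = [b"user mallory logged in"] + entries[1:]
assert chain_logs(tampered)[-1] != links[-1]
```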

    To address AU-10, nonrepudiation, the information system protects against an individual falsely denying having performed a particular action. Nonrepudiation protects individuals against later claims by an author of not having authored a particular document, a sender of not having transmitted a message, a receiver of not having received a message, or a signatory of not having signed a document. Nonrepudiation services are obtained by employing various techniques or mechanisms (e.g., digital signatures, digital message receipts).

    The Digital Signature Standard defines methods for digital signature generation that can be used for the protection of binary data (commonly called a message) and for the verification and validation of those digital signatures.

    There are three techniques approved for this process:

    1. The Digital Signature Algorithm (DSA) is specified in this Standard. The specification includes criteria for the generation of domain parameters, for the generation of public and private key pairs, and for the generation and verification of digital signatures.
    2. The RSA Digital Signature Algorithm is specified in American National Standard (ANS) X9.31 and Public Key Cryptography Standard (PKCS) #1. FIPS 186-3 approves the use of implementations of either or both of these standards but specifies additional requirements.
    3. The Elliptic Curve Digital Signature Algorithm (ECDSA) is specified in ANS X9.62. FIPS 186-3 approves the use of ECDSA but specifies additional requirements.

    When assessing logs, look for the following areas:

    • Connections should be logged and monitored
    • What events are logged: inbound services, outbound services, and access attempts that violate policy
    • How frequently logs are monitored, differentiating between automated and manual procedures
    • Alarming
    • Security breach response, and whether the responsible parties are experienced
    • Monitoring of privileged accounts

    Security information and event management (SIEM) is a term for software products and services combining security information management (SIM) and security event management (SEM). The segment of security management that deals with real-time monitoring, correlation of events, notifications, and console views is commonly known as security event management (SEM). The second area provides long-term storage, analysis, and reporting of log data and is known as security information management (SIM).

    SIEM technology provides real-time analysis of security alerts generated by network hardware and applications. SIEM is sold as software, appliances, or managed services and is also used to log security data and generate reports for compliance purposes. The term security information event management (SIEM), coined by Mark Nicolett and Amrit Williams of Gartner in 2005, describes the product capabilities of gathering, analyzing, and presenting information from network and security devices; identity and access management applications; vulnerability management and policy compliance tools; operating system, database, and application logs; and external threat data. A key focus is to monitor and help manage user and service privileges, directory services, and other system configuration changes; as well as providing log auditing and review and incident response.


    URL: https://www.sciencedirect.com/science/article/pii/B9780128184271000112

    UNIX and Linux Security

    Gerald Beuchelt, in Computer and Information Security Handbook (Third Edition), 2017

    9 Improving the Security of Linux and UNIX Systems

    A security checklist should be structured to follow the life cycle of Linux and UNIX systems, from planning and installation to recovery and maintenance. The checklist is best applied to a system before it is connected to the network for the first time. In addition, the checklist can be reapplied on a regular basis, to audit conformance (see checklist: “An Agenda for Action for Linux and UNIX Security Activities”).

    An Agenda for Action for Linux and UNIX Security Activities

    No two organizations are the same, so in applying the checklist, consideration should be given to the appropriateness of each action to your particular situation. Rather than enforcing a single configuration, the following checklist will identify the specific choices and possible security controls that should be considered at each stage, which includes the following key activities (check all tasks completed):

    Determine appropriate security:

    _____1. computer role
    _____2. assess security needs of each kind of data handled
    _____3. trust relationships
    _____4. uptime requirements and impact if these are not met
    _____5. minimal software packages required for role
    _____6. minimal net access required for role

    Installation:

    _____7. install from trusted media
    _____8. install while not connected to the Internet
    _____9. use separate partitions
    _____10. install minimal software

    Apply all patches and updates:

    _____11. initially apply patches while offline
    _____12. verify integrity of all patches and updates
    _____13. subscribe to mailing lists to keep up to date

    Minimize:

    _____14. network services
    _____15. disable all unnecessary startup scripts
    _____16. SetUID/SetGID programs
    _____17. other

    Secure base OS:

    _____18. physical, console, and boot security
    _____19. user logons
    _____20. authentication
    _____21. access control
    _____22. other: include vendor configuration settings and industry best practices such as CIS Benchmarks

    Secure major services:

    _____23. confinement
    _____24. tcp_wrappers
    _____25. other general advice for services
    _____26. SSH
    _____27. printing
    _____28. RPC/portmapper
    _____29. file services NFS/AFS/Samba
    _____30. the X Window system
    _____31. DNS service
    _____32. WWW service
    _____33. Squid proxy
    _____34. Concurrent Versions System
    _____35. Web browsers
    _____36. FTP service

    Add monitoring capability:

    _____37. syslog configuration
    _____40. monitoring of logs
    _____41. enable trusted audit subsystem if available
    _____42. monitor running processes
    _____43. host-based intrusion detection
    _____44. network intrusion detection

    Connect to the net:

    _____45. first put in place a host firewall
    _____46. position the computer behind a border firewall
    _____47. network stack hardening/sysctls
    _____48. connect to network for the first time

    Test backup/rebuild strategy:

    _____49. backup/rebuild strategy
    _____50. test backup and restore
    _____51. allow separate restore of software and data
    _____52. repatch after restoring
    _____53. process for intrusion response

    Maintain:

    _____54. mailing lists
    _____55. software inventory
    _____56. rapid patching
    _____57. secure administrative access
    _____58. log book for all sysadmin work
    _____59. configuration change control
    _____60. regular audit


    URL: https://www.sciencedirect.com/science/article/pii/B9780128038437000119

    A Systematic Review of Quality of Service in Wireless Sensor Networks using Machine Learning: Recent Trend and Future Vision

    Meena Pundir, Jasminder Kaur Sandhu, in Journal of Network and Computer Applications, 2021

    3.3.3 Integrity

    Integrity is a significant factor in terms of QoS and is associated with the security of a WSN. It is defined as the ability to ensure that the data transmitted in a network is not modified by an intruder or an unauthorized user. Protection of integrity means protecting the data, software applications, hardware, and operating system from unauthorized users. A Cyclic Redundancy Check (CRC) mechanism can protect data against accidental bit errors when data is transmitted from sender to receiver, but it offers no protection against intentional modification of data. For sensitive information, a cryptographic checksum technique is used instead to verify integrity.
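The distinction drawn above is easy to see with Python's stdlib: a CRC catches random bit errors but can be trivially recomputed by an attacker, whereas a keyed cryptographic checksum (HMAC is used here as one such technique) cannot be forged without the key (a sketch; the packet contents and key are made up):

```python
import hashlib
import hmac
import zlib

packet = b"sensor-7 temp=21.4"
key = b"shared network key"

# CRC detects accidental corruption in transit...
crc = zlib.crc32(packet)
corrupted = b"sensor-7 temp=91.4"
assert zlib.crc32(corrupted) != crc

# ...but an intruder who modifies the data simply recomputes the CRC,
# and the altered packet then passes the check.
forged_crc = zlib.crc32(corrupted)
assert zlib.crc32(corrupted) == forged_crc

# A keyed checksum over the packet cannot be recomputed without the key.
tag = hmac.new(key, packet, hashlib.sha256).digest()
bad = hmac.new(b"attacker guess", corrupted, hashlib.sha256).digest()
assert not hmac.compare_digest(tag, bad)
```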

    In paper (Haseeb et al., 2020), the SASC approach is proposed for IoT applications to maintain data integrity. A sensor-cloud infrastructure protects the data from malicious nodes using a mathematical encryption scheme based on the unbreakable One-Time Pad (OTP); it is used to store the data and hence keep it consistent. An unsupervised ML approach is used as the framework in SASC. SASC is compared with existing algorithms such as SEER and SecLEACH; it outperforms them and provides the best integrity for the WSN. In paper (Elhoseny and Hassanien, 2019a), real-time data is acquired using the SCADA system, which is again a critical issue for WSNs: the data changes continuously, and there is also a risk of various threats or attacks on the system. This novel approach maintains file integrity and acquires data with minimum threat. In paper (Singh and Kaur, 2017), link cost estimation is done with various ML techniques such as Naive Bayes, Decision Tree, Multilayer Perceptron, Neural Network, and Bayes Net. This cost estimation of a link is helpful for optimizing routing paths in the network. The C4.5 Decision Tree yields the maximum accuracy of 94% compared to the other ML algorithms, whereas the Multilayer Perceptron shows the minimum accuracy of 56%.


    URL: https://www.sciencedirect.com/science/article/pii/S1084804521001065

    Cybersecurity challenges in vehicular communications

    Zeinab El-Rewini, ... Prakash Ranganathan, in Vehicular Communications, 2020

    5.2 Secure communication in VANETS using Blockchain

    Traditional cyber-security mitigation approaches are not adequate or robust enough to offer reliable solutions in vehicular networks. Increased connectedness, wide variation in the random arrival and departure times of vehicles, and mobility in wireless networks make VANETs more vulnerable to cyber-attacks. To realize secure and efficient communication in vehicular networks, the following challenges need to be addressed:

    1. Centralized communication models: all vehicles are identified and connected through central cloud servers, so a failure at the central point can throw the entire network into disarray.

    2. Lack of privacy: most existing communication networks reveal user data to the requester, raising privacy concerns.

    3. Safety: a security breach in the data or processes of any vehicular functionality can result in fatal accidents.

    5.2.1 Blockchain

    Blockchain is an emerging technology with the potential to overcome the security challenges of existing VANETs and help combat cyber-attacks. A blockchain is a distributed data structure containing blocks that are chained together cryptographically in chronological order [294], [295]. Each block holds time-stamped transactions with associated data that are encrypted for secure data transport. More importantly, blockchains execute smart contracts in peer-to-peer (P2P) networks, and data is updated across the members (nodes) using a consensus mechanism. Here, a smart contract is defined as a "collection of code and data that is deployed using cryptographically signed transaction on the blockchain network" and is executed by nodes within the network [296]. All nodes in a blockchain rely on consensus (e.g., rule-based learning) to ensure the consistency of data storage. Commonly used consensus algorithms include Proof of Work (PoW), Proof of Stake (PoS) and Byzantine Fault Tolerance (BFT). Thus, a blockchain relies on four major components: a distributed ledger platform, an encryption algorithm, a consensus mechanism and smart contracts. The important features of a blockchain network are presented in Table 11 [295], [297].
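The "chained together cryptographically" property can be illustrated with a minimal sketch: each block stores the hash of its predecessor, so altering any earlier block invalidates every later link. This toy structure is for illustration only; it omits consensus, signatures and smart contracts.

```python
import hashlib
import json
import time

def block_hash(block: dict) -> str:
    # Hash the block's canonical JSON encoding.
    encoded = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(encoded).hexdigest()

def make_block(prev_hash: str, transactions: list) -> dict:
    return {
        "timestamp": time.time(),
        "transactions": transactions,
        "prev_hash": prev_hash,  # cryptographic link to the previous block
    }

def chain_is_valid(chain: list) -> bool:
    # Tampering with any earlier block breaks every later prev_hash link.
    return all(
        chain[i]["prev_hash"] == block_hash(chain[i - 1])
        for i in range(1, len(chain))
    )

genesis = make_block("0" * 64, ["genesis"])
second = make_block(block_hash(genesis), ["tx1", "tx2"])
assert chain_is_valid([genesis, second])

genesis["transactions"] = ["forged"]       # tamper with history
assert not chain_is_valid([genesis, second])  # the chain detects it
```

In a real network, the consensus mechanism additionally ensures that all nodes agree on which valid chain is the authoritative one.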

    Table 11. Features of a blockchain network.

    Feature | Significance
    Immutability | Data can never be tampered with after validation and storage.
    Distributed environment | Operates on a peer-to-peer basis; no single point of failure, as there is no central control.
    Security | Data is secure and tamper-proof; uses asymmetric cryptographic algorithms and a consensus mechanism; defends against cyber-attacks and prevents fraudulent transactions.
    Transparency | Stores information about every transaction (or event) in the network; all members can access the information.
    Privacy and anonymity | Identities of the parties involved in a transaction are not revealed; information is private and secure.

    5.2.1.1 Types of blockchain

    The blockchain can be broadly classified into two categories based on its construction, access, and verification methods, namely: Permissionless (Public) and Permissioned blockchain (Consortium/Private).

    Permissionless blockchains allow anyone to add a new block to the network, while permissioned blockchains are deployed for a particular group of users, typically referred to as a consortium or an organization [298]. Permissioned blockchains can be decentralized or centralized and have an authority that authorizes the publishing of blocks, while permissionless blockchains are fully decentralized [296]. A comparison of these blockchain technologies is presented in Table 12 [295], [298].

    Table 12. Types of blockchain.

    Operational characteristic | Permissionless (Public) | Permissioned (Consortium) | Permissioned (Private)
    Read transaction | Any member | Any member | Any member
    Write transaction | Any member | Only pre-selected members | Only one member (or organization)
    Number of untrusted writers | High | Low | Low
    Throughput | Low | High | High
    Latency | Slow | Medium | Medium
    Consensus mechanism | PoW and PoS | BFT | BFT
    Scenarios | Global decentralized scenarios | Among selected organizations | Information sharing within an organization
    Example | Bitcoin and Ethereum | Quorum | Hyperledger Fabric

    5.2.2 Implementation of blockchain in intra-vehicular networks

    In [299], the authors propose a blockchain for secure data communication between intra-vehicular ECUs. In this scheme, the blockchain is implemented on identity-based access controllers called MECUs (Mother ECUs): every other ECU must relay its data to an MECU before that data is broadcast on the blockchain network. A vehicle has multiple MECUs, which all relay their data to a leader MECU; the leader can add blocks to the network once a consensus mechanism and certain security checks pass. Each MECU verifies the data from each ECU (integrity and authenticity checks) before relaying it to the leader. The authors note, however, that some blockchain properties are resource-intensive and beyond what ordinary on-board hardware can handle; for this reason the blockchain runs on the MECUs rather than the ECUs, since MECUs are significantly more powerful in processing, storage and data speed. The scheme's limitations are its limited storage, its susceptibility to replay attacks (the ECUs/MECUs do not check the timestamp of a transaction before sending it to the leader), and the possibility of corrupted data being committed to an immutable blockchain.
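The integrity/authenticity check an MECU performs, and the timestamp check whose absence enables the replay attacks noted above, can be sketched with a keyed MAC. This is a hypothetical illustration, not the scheme from [299]: the key name, message fields and freshness window are all invented.

```python
import hashlib
import hmac
import time

SHARED_KEY = b"ecu-mecu-shared-key"  # hypothetical pre-shared ECU/MECU key
MAX_AGE_SECONDS = 5.0                # reject stale (replayed) messages

def ecu_send(payload: bytes) -> dict:
    # The ECU tags its message with an HMAC over timestamp + payload.
    ts = str(time.time()).encode()
    tag = hmac.new(SHARED_KEY, ts + payload, hashlib.sha256).hexdigest()
    return {"timestamp": ts, "payload": payload, "tag": tag}

def mecu_verify(msg: dict) -> bool:
    # Integrity/authenticity check: recompute the MAC and compare.
    expected = hmac.new(SHARED_KEY, msg["timestamp"] + msg["payload"],
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, msg["tag"]):
        return False
    # Freshness check: this is the step [299] omits, enabling replays.
    return time.time() - float(msg["timestamp"]) <= MAX_AGE_SECONDS

msg = ecu_send(b"wheel_speed=42")
assert mecu_verify(msg)

msg["payload"] = b"wheel_speed=99"   # tampered in transit
assert not mecu_verify(msg)
```

Because the timestamp is covered by the MAC, an attacker can neither alter the payload nor refresh the timestamp of a captured message without the shared key.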

    5.2.3 Implementation of blockchain in V2V/V2X communications

    Blockchain technology can be realized in V2V and V2X communication systems to facilitate the secure distribution of basic safety messages or cooperative awareness messages between vehicles and RSUs and/or the cloud platform [297]. In [300], the authors propose a blockchain framework for an Intelligent Transport Systems (ITS) infrastructure that contains a wireless module following the Wireless Access in Vehicular Environments (WAVE) / IEEE 802.11p standard. On the hardware side, the OBUs (Onboard Units) are equipped for two-way communication between infrastructure and vehicle and/or vehicle and vehicle. The connected vehicles periodically transfer safety messages, such as speed (s), position (p) and direction (d), to the network. The ITS infrastructure contains Security Managers (SMs), which aid message broadcast between vehicles and the associated units in the blockchain network. These SMs typically sit at the upper layer of the system and are responsible for the timely transfer of data to neighboring SMs when a vehicle crosses a domain border. The significance of blockchain in VANETs is perhaps most evident at this step, as the nodes (e.g., vehicles, RSUs) can share information securely without the need for a central party. In a traditional communication structure, by contrast, a trusted third-party authority manages all the cryptographic data sent by the participating nodes, which necessitates a complex series of handshakes during handover. The resulting delay causes latency issues and is thus considered inefficient for real-time applications. Such delay is easily mitigated in the blockchain via "transport keys", as every SM is connected to the other SMs in the network.

    In [297], the authors describe a blockchain scheme for secure V2V communication, as shown in Figs. 11 and 12. All vehicles broadcast their position through beacon messages (e.g., driving status and vehicle position), from which a location certificate (LC) is generated as digital proof. The scheme has two modules: 1) a broadcasting module and 2) a mining module. In the broadcasting module, a vehicle broadcasts event messages to its neighboring vehicles when an event occurs. An event message carries attributes such as the event type, a pseudo ID, a proof of location and a trust level. In the mining module, the peer vehicles evaluate the trust level of the sending vehicle by validating the event message against message-verification policies. The use of blockchain in a VANET thus ensures the authenticity of the broadcast messages while preserving conditional anonymity, and its distributed data structure offers faster access to information than central cloud servers.


    Fig. 11. Broadcast module for blockchain in VANET.


    Fig. 12. Mining module for blockchain in VANET.


    URL: https://www.sciencedirect.com/science/article/pii/S221420961930261X

    Which technique is used for verifying the integrity of the message?

    ICSF provides several methods to verify the integrity of transmitted messages and stored data: message authentication codes (MACs) and hash functions, including modification detection code (MDC) processing and one-way hash generation.

    Which of the following can ensure message integrity?

    In the world of secured communications, Message Integrity describes the concept of ensuring that data has not been modified in transit. This is typically accomplished with the use of a Hashing algorithm.

    What are the methods to ensure message integrity in IOT?

    Error-correcting codes are an excellent method to guarantee the integrity of IoT communication links against transmission errors, though cryptographic techniques are still needed against deliberate tampering.

    Which process is used to maintain integrity of message in network?

    A message digest ensures the integrity of the document. To provide authenticity of the message, the digest is encrypted with the sender's private key; the result is called a digital signature, which can be verified by any receiver who holds the sender's public key.
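The digest-comparison step (Alice sends a message/digest pair, Bob recomputes and compares) can be sketched as follows. The sample messages are invented; the asymmetric signing step is not shown, since it needs a public-key library rather than the standard library alone.

```python
import hashlib
import hmac

def digest(message: bytes) -> str:
    # Fixed-length "fingerprint" of an arbitrary-length message.
    return hashlib.sha256(message).hexdigest()

# Alice computes a digest and sends the (message, digest) pair to Bob.
message = b"transfer 100 to Bob"
sent_digest = digest(message)

# Bob recomputes the digest over what he received and compares.
received = b"transfer 100 to Bob"
assert hmac.compare_digest(digest(received), sent_digest)      # intact

tampered = b"transfer 900 to Bob"
assert not hmac.compare_digest(digest(tampered), sent_digest)  # modified
```

Note that the digest alone proves integrity, not authenticity: anyone who alters the message can also recompute the digest, which is why the digest must additionally be signed with the sender's private key.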