The transposition process is needed in cryptography to create a diffusion effect in the Data Encryption Standard (DES) and Advanced Encryption Standard (AES) algorithms, the standard information-security algorithms of the National Institute of Standards and Technology. The problem with the DES and AES algorithms is that their transposition index values form patterns rather than random values. This condition makes it easier for a cryptanalyst to look for relationships between ciphertexts, because some processes are predictable. This research designs a transposition algorithm called square transposition. Each process uses an 8 × 8 square as the place to insert and retrieve 64 bits. Pairing an insertion scheme and a retrieval scheme with unequal flows is an important factor in producing a good transposition. Square transposition can generate random, pattern-free indices, so the transposition can be done better than in DES and AES.
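The core idea can be sketched in a few lines. The following is a minimal illustration only, not the paper's actual insertion and retrieval schemes (which are not reproduced here): 64 bits are written into an 8 × 8 square row by row and read back column by column, so the input and output flows differ.

```python
def square_transpose(bits):
    """Transpose 64 bits through an 8x8 square (row-wise in, column-wise out)."""
    assert len(bits) == 64
    square = [bits[r * 8:(r + 1) * 8] for r in range(8)]        # insert row by row
    return [square[r][c] for c in range(8) for r in range(8)]   # retrieve column by column

def square_transpose_inverse(bits):
    """Undo the transposition (column-wise in, row-wise out)."""
    assert len(bits) == 64
    square = [[None] * 8 for _ in range(8)]
    it = iter(bits)
    for c in range(8):          # refill the square column by column
        for r in range(8):
            square[r][c] = next(it)
    return [square[r][c] for r in range(8) for c in range(8)]   # read row by row

block = [(i * 37) % 2 for i in range(64)]   # arbitrary 64-bit block
scrambled = square_transpose(block)
assert square_transpose_inverse(scrambled) == block
```

Choosing a retrieval traversal with a flow unlike the insertion traversal (column-wise versus row-wise here, or a spiral, zig-zag, etc.) is what scatters the output indices.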
Algorithm and Analysis, Lecture 03 & 04: Time Complexity - Tariq Khan
This document discusses algorithm efficiency and complexity analysis. It defines key terms like algorithms, asymptotic complexity, Big O notation, and different complexity classes. It provides examples of analyzing time complexity for different algorithms like loops, nested loops, and recursive functions. The document explains that Big O notation allows analyzing algorithms independent of machine or input by focusing on the highest order term as the problem size increases. Overall, the document introduces methods for measuring an algorithm's efficiency and analyzing its time and space complexity asymptotically.
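The complexity classes mentioned above can be made concrete by counting basic operations. This small, self-evident example (not taken from the slides) shows that doubling n roughly doubles the work of a single loop, O(n), and quadruples the work of a nested loop, O(n^2).

```python
def linear_ops(n):
    count = 0
    for _ in range(n):        # one pass over the input: O(n)
        count += 1
    return count

def quadratic_ops(n):
    count = 0
    for _ in range(n):        # n iterations...
        for _ in range(n):    # ...each doing n inner steps: O(n^2)
            count += 1
    return count

for n in (100, 200, 400):
    print(n, linear_ops(n), quadratic_ops(n))
```

Big O keeps only the highest-order term, so both the counts and the ratios, not the constants, are what matter as n grows.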
IRJET- Bidirectional Graph Search Techniques for Finding Shortest Path in Ima...
This document presents a study comparing different graph search algorithms for solving mazes represented as images. The paper implements bidirectional versions of breadth-first search (BFS) and A* search and compares their performance on 8x8 and 16x16 mazes to the traditional unidirectional algorithms. For smaller 8x8 mazes, BFS performed best but for larger 16x16 mazes, bidirectional BFS was most efficient at finding the shortest path. Bidirectional search improves results but uses more space. The key aspect is finding the meeting point where the two searches meet, guaranteeing a solution if one exists.
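The meeting-point idea can be sketched as follows. This is a hedged illustration, not the paper's image-processing pipeline: BFS distances are computed from both ends of a small grid maze (0 = open, 1 = wall), and the shortest path length is the minimum of d_start(v) + d_goal(v) over cells reached by both searches. A full bidirectional implementation would stop as soon as the two frontiers first intersect rather than exploring everything.

```python
from collections import deque

def bfs_distances(maze, source):
    dist = {source: 0}
    queue = deque([source])
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < len(maze) and 0 <= nc < len(maze[0])
                    and maze[nr][nc] == 0 and (nr, nc) not in dist):
                dist[(nr, nc)] = dist[(r, c)] + 1
                queue.append((nr, nc))
    return dist

def shortest_path_length(maze, start, goal):
    d_start = bfs_distances(maze, start)
    d_goal = bfs_distances(maze, goal)
    meeting = set(d_start) & set(d_goal)   # cells seen by both searches
    if not meeting:
        return None                        # no solution exists
    return min(d_start[v] + d_goal[v] for v in meeting)

maze = [[0, 0, 0, 1],
        [1, 1, 0, 1],
        [0, 0, 0, 0],
        [0, 1, 1, 0]]
print(shortest_path_length(maze, (0, 0), (3, 3)))
```

Any cell on a shortest path makes the two distances sum to the true path length, which is why the meeting point guarantees a solution whenever one exists.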
The document provides an overview of algorithms, including definitions, types, characteristics, and analysis. It begins with step-by-step algorithms to add two numbers and describes the difference between algorithms and pseudocode. It then covers algorithm design approaches, characteristics, classification based on implementation and logic, and analysis methods like a priori and a posteriori analysis. The document emphasizes that algorithm analysis estimates resource needs like time and space complexity based on input size.
The document discusses analyzing the running time of algorithms using Big-O notation. It begins by introducing Big-O notation and how it is used to generalize the running time of algorithms as input size grows. It then provides examples of calculating the Big-O running time of simple programs and algorithms with loops but no subprogram calls or recursion. Key concepts covered include analyzing worst-case and average-case running times, and rules for analyzing the running time of programs with basic operations and loops.
Bidirectional graph search techniques for finding shortest path in image base...Navin Kumar
The intriguing problem of solving a maze falls within the territory of algorithms and artificial intelligence. Maze solving using computers is of considerable interest to many researchers; hence, there have been many previous attempts to come up with a solution that is optimal in terms of time and space. Some of the best-performing algorithms suitable for the problem are breadth-first search, the A* algorithm, best-first search and many others that are ultimately enhancements of these basic algorithms. The images are converted into graph data structures, after which an algorithm is applied, eventually tracing the solution on the maze image. This paper attempts to do the same by implementing the bidirectional versions of these well-known algorithms and studying their performance against the former. The bidirectional approach is indeed capable of providing improved results at the expense of space. The vital part of the approach is to find the meeting point of the two bidirectional searches, which are guaranteed to meet if any solution exists.
This document discusses algorithm analysis and complexity. It introduces algorithm analysis as a way to predict and compare algorithm performance. Different algorithms for computing factorials and finding the maximum subsequence sum are presented, along with their time complexities. The importance of efficient algorithms for problems involving large datasets is discussed.
This document discusses the complexity of algorithms and the tradeoff between algorithm cost and time. It defines algorithm complexity as a function of input size that measures the time and space used by an algorithm. Different complexity classes are described such as polynomial, sub-linear, and exponential time. Examples are given to find the complexity of bubble sort and linear search algorithms. The concept of space-time tradeoffs is introduced, where using more space can reduce computation time. Genetic algorithms are proposed to efficiently solve large-scale construction time-cost tradeoff problems.
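The bubble sort example mentioned above lends itself to a worked check. In this standalone sketch, counting the comparisons of a straightforward implementation gives exactly n(n-1)/2, i.e. O(n^2), which is the quadratic class the document assigns to it.

```python
def bubble_sort(items):
    items = list(items)
    comparisons = 0
    for i in range(len(items) - 1):
        for j in range(len(items) - 1 - i):   # each pass shrinks by one
            comparisons += 1
            if items[j] > items[j + 1]:
                items[j], items[j + 1] = items[j + 1], items[j]
    return items, comparisons

data = [5, 1, 4, 2, 8, 3, 7, 6]
sorted_data, count = bubble_sort(data)
print(sorted_data, count)   # n = 8 -> 7 + 6 + ... + 1 = 28 comparisons
```

Linear search, by contrast, examines at most n elements, which is the space-time contrast the document draws between complexity classes.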
The document discusses algorithms and their complexity. It defines an algorithm as a well-defined computational procedure that takes inputs and produces outputs. Algorithms have properties like definiteness, correctness, finiteness, and effectiveness. While faster computers make any method viable, analyzing algorithms' complexity is still important because computing resources are finite. Algorithm complexity is analyzed asymptotically for large inputs, focusing on growth rates like constant, logarithmic, linear, quadratic, and exponential. Common notations like Big-O describe upper algorithm complexity bounds.
Unit 1: Fundamentals of the Analysis of Algorithmic Efficiency, covering units for measuring running time, properties of an algorithm, growth of functions, algorithm analysis, asymptotic notations, and recurrence relations and problems.
A Mixed Binary-Real NSGA II Algorithm Ensuring Both Accuracy and Interpretabi...IJECEIAES
In this work, a Neuro-Fuzzy Controller network, called NFC, that implements a Mamdani fuzzy inference system is proposed. This network includes neurons able to perform fundamental fuzzy operations. Connections between neurons are weighted through binary and real weights. A mixed binary-real Non-dominated Sorting Genetic Algorithm II (NSGA-II) is then used to achieve both accuracy and interpretability of the NFC by minimizing two objective functions: one relates to the number of rules, for compactness, while the second is the mean square error, for accuracy. In order to preserve the interpretability of the fuzzy rules during the optimization process, some constraints are imposed. The approach is tested on two control examples: a single-input single-output (SISO) system and a multivariable (MIMO) system.
This slide deck explains the complexity of an algorithm from a theoretical perspective. At the end of the slides, test results are shown to support the theory. Please read these slides to improve your code quality. The slides were exported from Microsoft PowerPoint to PDF.
Breadth-first algorithm for solving image-based maze problem - Navin Kumar
The project includes maze solving using the breadth-first search algorithm. The maze given is of predefined dimensions so that the image processing can be done easily.
The Implementation can be found at http://navin.live/maze
The project is submitted in partial fulfillment of the requirements for the award of the degree of Master of Technology for the following subjects:
1) PROJECT MTCS-308
2) SEMINAR MTCS-307
at I.K. Gujral Punjab Technical University, Kapurthala, by Navin Kumar, Roll No. 1710965.
Enhancement and Analysis of Chaotic Image Encryption Algorithms cscpconf
The focus of this paper is to improve the level of security and secrecy provided by chaotic-map-based image encryption. An encryption algorithm based on the Logistic and Henon maps is proposed. The algorithm uses chaotic iteration to generate the encryption keys, then carries out XOR and cyclic-shift operations on the plaintext to change the values of the image pixels. The Chaotic Map Lattice image encryption algorithm suggested by Pisarchik, which is based on the Logistic map alone, is also examined. In experiments, the results showed that the proposed method is a promising scheme for image encryption in terms of security and secrecy. At the end, we show the results of a security analysis and a comparison of both schemes.
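The XOR step described above can be sketched minimally. This is an illustration of the general logistic-map keystream idea only, not the paper's full Logistic+Henon design: iterate x -> r*x*(1-x) in the chaotic regime, quantize each value to a byte, and XOR the bytes into the plaintext. Decryption regenerates the same keystream from the same parameters (r, x0), which act as the key.

```python
def logistic_keystream(length, x0=0.7, r=3.99):
    x, stream = x0, []
    for _ in range(length):
        x = r * x * (1 - x)                 # chaotic iteration
        stream.append(int(x * 256) % 256)   # quantize the value to a byte
    return stream

def xor_cipher(data, x0=0.7, r=3.99):
    return bytes(b ^ k for b, k in zip(data, logistic_keystream(len(data), x0, r)))

plain = bytes(range(16))              # stand-in for a row of image pixels
cipher = xor_cipher(plain)
assert xor_cipher(cipher) == plain    # XOR with the same keystream decrypts
```

A real design would add the cyclic-shift stage and derive (x0, r) from a key rather than hard-coding them.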
Introduction to data structure and algorithm - Pratik Mota
Introduction to data structure and algorithm
-Basics of Data Structure and Algorithm
-Practical Examples of where Data Structure Algorithms is used
-Asymptotic Notations [ O(n), o(n), θ(n), Ω(n), ω(n) ]
-Calculation of Time and Space Complexity
-GNU gprof basic
This document outlines the units and questions for the Design & Analysis of Algorithms course. It covers topics like asymptotic notation, the master theorem, binary search trees, dynamic programming, graph algorithms including BFS, MST, Dijkstra's algorithm and the Floyd-Warshall algorithm, NP-completeness, approximation algorithms, and algorithm design techniques. The last submission date for the course is November 25th, 2014.
This document discusses various algorithms and problems including:
1) The Towers of Hanoi puzzle and its solution requiring 2^n - 1 moves for n disks.
2) Permutations and two methods for generating all possible permutations.
3) The n-queens problem of placing n queens on an n×n chessboard without any queens threatening each other, solved using backtracking.
4) Backtracking as a general method for constraint satisfaction problems.
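The Towers of Hanoi move count quoted in item 1 follows directly from the recursion: move n-1 disks aside, move the largest disk, move the n-1 disks back. Counting the moves reproduces the closed form 2^n - 1.

```python
def hanoi(n, src="A", dst="C", via="B", moves=None):
    if moves is None:
        moves = []
    if n > 0:
        hanoi(n - 1, src, via, dst, moves)   # clear the top n-1 disks onto the spare peg
        moves.append((src, dst))             # move the largest disk
        hanoi(n - 1, via, dst, src, moves)   # restack the n-1 disks on top of it
    return moves

for n in range(1, 6):
    assert len(hanoi(n)) == 2 ** n - 1       # T(n) = 2*T(n-1) + 1 = 2^n - 1
print(hanoi(3))
```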
Domain Examination of Chaos Logistics Function As A Key Generator in Cryptogr...IJECEIAES
The use of logistic functions as a random number generator in a cryptography algorithm is capable of accommodating the diffusion property of the Shannon principle. The problem that occurs is that the initialization of x_0 was static and not affected by changes in the key, so the algorithm will generate a random sequence that is always the same. This study designs three schemes that provide flexibility in the input keys when examining the value of the domain of the logistic function. The results of each scheme show no pattern that is directly or inversely proportional to the value of x_0 and the relative error of x_0, and they successfully fulfill the butterfly-effect property. Thus, the existence of logistic functions in generating chaos numbers can be accommodated based on key inputs. In addition, the resulting random numbers are distributed evenly over the chaos range, thus reinforcing the algorithm when used as a key in cryptography.
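One way to make x_0 key-dependent can be sketched as follows. This is a hedged illustration, not necessarily one of the paper's three schemes: hash the key and map the digest into (0, 1). A one-character change in the key then yields a completely different seed and hence a completely different chaotic trajectory, which is the butterfly effect the abstract describes.

```python
import hashlib

def seed_from_key(key: str) -> float:
    digest = hashlib.sha256(key.encode()).digest()
    n = int.from_bytes(digest[:8], "big")
    return (n % (10**9 - 1) + 1) / 10**9     # x0 in (0, 1), never exactly 0 or 1

def logistic_sequence(x0, length, r=3.99):
    seq, x = [], x0
    for _ in range(length):
        x = r * x * (1 - x)
        seq.append(x)
    return seq

a = logistic_sequence(seed_from_key("key-A"), 8)
b = logistic_sequence(seed_from_key("key-B"), 8)
print(a[0], b[0])   # different keys lead to different trajectories
```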
An improved spfa algorithm for single source shortest path problem using forw...IJMIT JOURNAL
We present an improved SPFA algorithm for the single-source shortest path problem. For a random graph, the empirical average time complexity is O(|E|), where |E| is the number of edges of the input network. SPFA maintains a queue of candidate vertices and adds a vertex to the queue only if that vertex is relaxed. In the improved SPFA, the MinPoP principle is employed to improve the quality of the queue. We theoretically analyse the advantage of this new algorithm and experimentally demonstrate that it is efficient.
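Plain SPFA (essentially Bellman-Ford driven by a queue) can be sketched as below; the MinPoP refinement from the paper is not reproduced here. The defining rule is the one stated above: a vertex is enqueued only when its tentative distance has just been relaxed.

```python
from collections import deque

def spfa(n, edges, source):
    """Shortest distances from source in a digraph given as (u, v, w) edges."""
    adj = [[] for _ in range(n)]
    for u, v, w in edges:
        adj[u].append((v, w))
    INF = float("inf")
    dist = [INF] * n
    dist[source] = 0
    in_queue = [False] * n
    queue = deque([source])
    in_queue[source] = True
    while queue:
        u = queue.popleft()
        in_queue[u] = False
        for v, w in adj[u]:
            if dist[u] + w < dist[v]:        # relaxation succeeded
                dist[v] = dist[u] + w
                if not in_queue[v]:          # enqueue v only because it was relaxed
                    queue.append(v)
                    in_queue[v] = True
    return dist

edges = [(0, 1, 4), (0, 2, 1), (2, 1, 2), (1, 3, 1), (2, 3, 5)]
print(spfa(4, edges, 0))
```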
EFFICIENT DIGITAL ENCRYPTION ALGORITHM BASED ON MATRIX SCRAMBLING TECHNIQUE - IJNSA Journal
This paper puts forward a safe mechanism of data transmission to tackle the security problem of information transmitted over the Internet. We propose a new matrix scrambling technique based on a random function and the shifting and reversing techniques of a circular queue. We give statistical analysis, sequence randomness analysis, and sensitivity analysis of the proposed scheme with respect to the plaintext and key. The experimental results show that the new scheme has a very fast encryption speed, that the key space is expanded, and that it can resist various cryptanalytic and statistical attacks; in particular, the new method can also be used to address the vulnerability to chosen-plaintext attacks. We give a detailed report on this algorithm and illustrate its characteristics with an example.
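The scrambling idea can be sketched minimally. In this hedged illustration (the paper's exact random function and queue schedule are not reproduced), each matrix row is treated as a circular queue and a key-seeded random shift plus an optional reversal is applied; recording the operations makes the scramble invertible.

```python
import random

def scramble(matrix, key):
    rng = random.Random(key)                 # key-seeded "random function"
    ops = []
    out = [row[:] for row in matrix]
    for i, row in enumerate(out):
        k = rng.randrange(len(row))
        row[:] = row[k:] + row[:k]           # circular left shift by k
        rev = rng.random() < 0.5
        if rev:
            row.reverse()                    # optional reversal
        ops.append((i, k, rev))
    return out, ops

def unscramble(matrix, ops):
    out = [row[:] for row in matrix]
    for i, k, rev in reversed(ops):
        row = out[i]
        if rev:
            row.reverse()                    # undo the reversal first
        row[:] = row[-k:] + row[:-k]         # circular right shift by k
    return out

m = [[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12]]
s, ops = scramble(m, key=42)
assert unscramble(s, ops) == m
```

A real cipher would derive the shift amounts from the secret key stream rather than storing the operation list alongside the data.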
This document describes the design and implementation of a hybrid cryptosystem using the AES and SHA-2 algorithms. The system integrates AES, a symmetric encryption algorithm, with SHA-2, a cryptographic hash function, to improve data security. AES encrypts data using a 128-bit key generated by hashing the input message with SHA-2. The combined system was synthesized using Xilinx ISE software and implemented on a Virtex-5 FPGA, utilizing under 2% of slice registers and 5% of slice LUTs. This provides higher security than AES alone through increased algorithm complexity.
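The key-derivation step described above is straightforward to sketch in software: hash the input message with SHA-256 (a SHA-2 family function) and take the first 128 bits as the AES key. The AES encryption itself, done in FPGA hardware in the paper, is not reproduced here.

```python
import hashlib

def derive_aes128_key(message: bytes) -> bytes:
    digest = hashlib.sha256(message).digest()   # 256-bit SHA-2 digest
    return digest[:16]                          # truncate to a 128-bit AES key

key = derive_aes128_key(b"example message")
print(key.hex(), len(key) * 8, "bits")
```

Because the key depends on the message digest, any change to the input message changes the key, which is the source of the added algorithmic complexity the document credits for the higher security.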
A Cryptographic Hardware Revolution in Communication Systems using Verilog HDL - idescitation
Advanced Encryption Standard (AES) is an advancement of the Federal Information Processing Standards (FIPS), a process standard initiated by NIST. The AES specifies the Rijndael algorithm, a symmetric block cipher that processes fixed 128-bit data blocks using cipher keys with lengths of 128, 192 and 256 bits. The earliest Rijndael algorithm had the advantage of combining data block sizes of 128, 192 and 256 bits with any of these key lengths. AES can be programmed in pure hardware in Verilog HDL, including a multiplexer to make the ciphertext more secure. The results indicate that the hardware implementation proposed in this project reduces resource utilization and power consumption (113 mW) compared to other implementations. Using an FPGA leads to reliability in source modulations. This project presents the AES algorithm with regard to FPGA and Verilog HDL. The software used for simulation is ModelSim-Altera 6.3g_p1 (Quartus II 8.1). Synthesis and implementation of the code are carried out on Xilinx ISE 13.4; the XC6VCX240T device is used for hardware evaluation.
This document summarizes a research paper that proposes implementing the Advanced Encryption Standard (AES) cryptographic algorithm using Verilog HDL for hardware implementation on FPGAs. The paper describes the AES algorithm, its encryption and decryption processes, and a hardware design for AES that was tested on a Xilinx FPGA. The results showed the hardware implementation utilized less resources and had lower power consumption compared to other AES FPGA designs.
A new text encryption algorithm based on a combination of a self-synchronizing stream cipher and chaotic maps is proposed in this paper. The new algorithm encrypts and decrypts text files of different sizes. First, the corresponding ASCII values of the plain text serve as input to the permutation operation, which diffuses the positions of these values using a hyper-chaotic map. Second, the resulting values are input to a substitution operation via a 1D Bernoulli map. Finally, the resultant values are XORed with feedback from the key. The proposed algorithm has been analyzed using a number of tests, and the results show that it has a large key space, a uniform histogram, low correlation, and is very sensitive to any change in the plain text or key.
Symmetric cryptography will always produce the same ciphertext if the plaintext and the given key are the same repeatedly. This condition makes it easier for cryptanalysts to perform cryptanalysis. This research introduces a one-to-many cryptography scheme, which can produce different ciphertexts even if the same input is given repeatedly. The one-to-many encryption scheme can produce several ciphertexts with differences of up to 50%. The avalanche effect test obtained an average of 52.20%, which is 25.46% better than the modern cipher Blowfish and 6% better than the Advanced Encryption Standard (AES). One-to-many can produce n different ciphertexts, which will certainly make it more difficult for cryptanalysts to perform cryptanalysis and require n times longer to break than other symmetric cryptography.
This document discusses a proposed design for secure military communications using AES encryption with Vedic mathematics, OFDM modulation, and QPSK. Specifically, it proposes using AES to encrypt data, applying Vedic math techniques to improve efficiency during the MixColumns step. The encrypted data would then be modulated using OFDM and QPSK to provide high throughput communication. Key aspects of the design include AES encryption/decryption, OFDM using QPSK and an IFFT/FFT, and applying Vedic math during AES encryption to reduce complexity and power consumption for military applications.
1) The document proposes a hybrid 128-bit key AES-DES algorithm to enhance data security and transmission security for next generation networks.
2) It discusses some weaknesses in the AES encryption algorithm against algebraic cryptanalysis and outlines a hybrid approach that combines AES and DES algorithms.
3) The hybrid approach integrates the AES encryption process within the Feistel network structure of DES, using AES transformations like byte substitution and shift rows within each round of the DES Feistel network. This is intended to strengthen security by combining the advantages of both algorithms while reducing individual weaknesses.
The Journal of MC Square Scientific Research is published by MC Square Publication on a monthly basis. It aims to publish original research papers devoted to wide areas in various disciplines of science and engineering and their applications in industry. The journal is devoted to interdisciplinary research in science, engineering and technology that can improve the technology used in industry. Real-life problems involve multi-disciplinary knowledge, and thus a strong interdisciplinary approach is needed in research.
A novel technique for speech encryption based on k-means clustering and quant...journalBEEI
This document proposes a new algorithm for speech encryption that uses quantum chaotic maps, k-means clustering, and two stages of scrambling. The first stage uses a tent map to scramble bits in the binary representation of the signal. The second stage uses k-means clustering to scramble blocks of the signal. A quantum logistic map is used to generate an encryption key. The proposed method is evaluated using statistical quality metrics and is shown to provide secure and efficient speech encryption while maintaining high quality of recovered speech.
This document discusses techniques for optimizing the area usage of a masked Advanced Encryption Standard (AES) engine implemented on an FPGA. It proposes mapping operations from GF(28) to GF(24) to reduce the number of mapping and inverse mapping operations in the SubBytes step. It also describes moving the mapping and inverse mapping operations outside the round function to further reduce area by 20%. The document outlines the key steps of a standard AES implementation and describes how the proposed optimizations are applied to the masked SubBytes, MixColumns, and AddRoundKey transformations to implement them efficiently over GF(24).
FPGA Implementation of A New Chien Search Block for Reed-Solomon Codes RS (25...IJERA Editor
The Reed-Solomon (RS) codes are widely used in communication systems, in particular forming part of the specification for the ETSI digital terrestrial television standard. In this paper a simple algorithm for error detection in the Chien search block is proposed. This algorithm is based on a simple factorization of the error locator polynomial, which reduces the number of components required to implement the proposed algorithm on an FPGA board. Consequently, it reduces the power consumption by a percentage that can reach 50% compared to the basic RS decoder. First, we developed the design of the Chien search block. Second, we generated and simulated the hardware description language source code using Quartus software tools. Finally, we implemented the proposed Chien search block algorithm for the Reed-Solomon codes RS (255, 239) on an FPGA board to show both the reduced hardware resources and the low complexity compared to the basic algorithm.
The document discusses algorithm complexity and data structure efficiency. It explains that algorithm complexity can be measured using asymptotic notation like O(n) or O(n^2) to represent operations scaling linearly or quadratically with input size, and that different data structures have varying time efficiency for operations like add, find, and delete.
Investigation on the Pattern Synthesis of Subarray Weights for Low EMI Applic...IOSRJECE
In modern radar applications, it is frequently required to produce sum and difference patterns sequentially. The sum pattern amplitude coefficients are obtained using the Dolph-Chebyshev synthesis method, whereas the difference pattern excitation coefficients are optimized in the present work. For this purpose, optimal group weights are introduced to the different array elements to obtain any type of beam depending on the application. Optimization of the excitation of the array elements is the main objective, so a subarray configuration is adopted in this process. A Differential Evolution Algorithm is applied as the optimization method. The proposed method is reliable and accurate. It is superior to other methods in terms of convergence speed and robustness. Numerical and simulation results are presented.
The key is the most important part of any security system because it determines whether the system is strong or weak. This paper proposes a new way to generate a keystream based on a combination of the 3D Henon map and the 3D Cat map. The principle of the method consists in generating random numbers using the 3D Henon map; these numbers are transformed into a binary sequence, whose positions are then permuted and XORed using the 3D Cat map. The new keystream generator has successfully passed the NIST statistical test suite. The security analysis shows that it has a large key space and is very sensitive to initial conditions.
Area efficient parallel LFSR for cyclic redundancy check IJECEIAES
Cyclic Redundancy Check (CRC), a code for error detection, finds many applications in the fields of digital communication, data storage, control systems and data compression. The CRC encoding operation is carried out using a Linear Feedback Shift Register (LFSR). A serial implementation of CRC requires a number of clock cycles equal to the message length plus the degree of the generator polynomial, whereas a parallel implementation requires only one clock cycle if the whole data message is applied at a time. In previous work on parallel LFSRs, the hardware complexity of the architecture was reduced using a technique named state-space transformation. This paper presents a detailed explanation of the search-algorithm implementation and a new technique to find the number of XOR gates required for different CRC algorithms. The comparison between the proposed and previous architectures shows that the number of XOR gates is reduced for the CRC algorithms, which improves hardware efficiency. The searching algorithm and all matrix computations have been performed using MATLAB simulations.
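The bit-serial operation described above (one LFSR shift per message bit) can be modeled in software. This sketch uses the common CRC-8 polynomial x^8 + x^2 + x + 1 (0x07, zero initial value, no reflection) as an example; a parallel implementation folds many of these shifts into one clock cycle of combinational XOR logic.

```python
def crc8_serial(data: bytes, poly: int = 0x07) -> int:
    crc = 0
    for byte in data:
        for i in range(7, -1, -1):               # feed bits MSB first, one per "clock"
            bit = (byte >> i) & 1
            feedback = ((crc >> 7) & 1) ^ bit    # tap at the register's MSB
            crc = (crc << 1) & 0xFF              # shift the 8-bit register
            if feedback:
                crc ^= poly                      # XOR in the generator polynomial
    return crc

print(hex(crc8_serial(b"123456789")))
```

A message of m bits through this loop takes m shifts, matching the serial clock-cycle count quoted above.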
Similar to Square transposition: an approach to the transposition process in block cipher (20)
Hyper-parameter optimization of convolutional neural network based on particl...journalBEEI
The document proposes using a particle swarm optimization (PSO) algorithm to optimize the hyperparameters of a convolutional neural network (CNN) for image classification. The PSO algorithm is used to find optimal values for CNN hyperparameters like the number and size of convolutional filters. In experiments on the MNIST handwritten digit dataset, the optimized CNN achieved a testing error rate of 0.87%, which is competitive with state-of-the-art models. The proposed approach finds optimized CNN architectures automatically without requiring manual design or encoding strategies during training.
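The PSO search mechanism itself is compact enough to sketch. The following minimal, single-variable loop is an illustration only (the paper applies PSO to CNN hyper-parameters; here it just minimizes a toy objective f(x) = (x - 3)^2, an assumed stand-in).

```python
import random

def pso(f, lo, hi, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = random.Random(seed)
    xs = [rng.uniform(lo, hi) for _ in range(n_particles)]
    vs = [0.0] * n_particles
    pbest = xs[:]                       # each particle's best position so far
    gbest = min(xs, key=f)              # swarm-wide best position
    for _ in range(iters):
        for i in range(n_particles):
            vs[i] = (w * vs[i]
                     + c1 * rng.random() * (pbest[i] - xs[i])   # pull toward own best
                     + c2 * rng.random() * (gbest - xs[i]))     # pull toward swarm best
            xs[i] += vs[i]
            if f(xs[i]) < f(pbest[i]):
                pbest[i] = xs[i]
                if f(xs[i]) < f(gbest):
                    gbest = xs[i]
    return gbest

best = pso(lambda x: (x - 3) ** 2, -10, 10)
print(best)   # converges near the minimum at x = 3
```

For hyper-parameter search, each particle's position would instead be a vector (filter count, filter size, ...) and f would be the validation error of the trained CNN.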
Supervised machine learning based liver disease prediction approach with LASS...journalBEEI
In this contemporary era, the uses of machine learning techniques are increasing rapidly in the field of medical science for detecting various diseases such as liver disease (LD). Around the globe, a large number of people die because of this deadly disease. By diagnosing the disease in a primary stage, early treatment can be helpful to cure the patient. In this research paper, a method is proposed to diagnose the LD using supervised machine learning classification algorithms, namely logistic regression, decision tree, random forest, AdaBoost, KNN, linear discriminant analysis, gradient boosting and support vector machine (SVM). We also deployed a least absolute shrinkage and selection operator (LASSO) feature selection technique on our taken dataset to suggest the most highly correlated attributes of LD. The predictions with 10 fold cross-validation (CV) made by the algorithms are tested in terms of accuracy, sensitivity, precision and f1-score values to forecast the disease. It is observed that the decision tree algorithm has the best performance score where accuracy, precision, sensitivity and f1-score values are 94.295%, 92%, 99% and 96% respectively with the inclusion of LASSO. Furthermore, a comparison with recent studies is shown to prove the significance of the proposed system.
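The reported decision-tree scores can be cross-checked directly, since the F1 score is the harmonic mean of precision and sensitivity (recall): the quoted 92% precision and 99% sensitivity should reproduce the quoted ~96% F1 after rounding.

```python
def f1_score(precision: float, recall: float) -> float:
    return 2 * precision * recall / (precision + recall)

f1 = f1_score(0.92, 0.99)
print(round(f1, 4))   # about 0.9537, i.e. ~96% after rounding
```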
A secure and energy saving protocol for wireless sensor networksjournalBEEI
The research domain of wireless sensor networks (WSN) has been extensively studied due to innovative technologies and research directions addressing the usability of WSN under various schemes. This domain permits dependable tracking of a diversity of environments for both military and civil applications. The key management mechanism is a primary protocol for keeping the privacy and confidentiality of the data transmitted among different sensor nodes in WSNs. Since the nodes are small, they are intrinsically limited by inadequate resources such as battery lifetime and memory capacity. The proposed secure and energy-saving protocol (SESP) for wireless sensor networks has a significant impact on the overall network lifetime and energy dissipation. To encrypt sent messages, SESP uses the concept of public-key cryptography. It depends on sensor nodes' identities (IDs) to prevent messages from being repeated, making the security goals of authentication, confidentiality, integrity, availability, and freshness achievable. Finally, simulation results show that the proposed approach produced better energy consumption and network lifetime compared to the LEACH protocol: sensors are dead after 900 rounds in the proposed SESP protocol, while in the low-energy adaptive clustering hierarchy (LEACH) scheme they are dead after 750 rounds.
Plant leaf identification system using convolutional neural networkjournalBEEI
This paper proposes a leaf identification system using a convolutional neural network (CNN). The proposed system can identify five types of local Malaysian leaves: acacia, papaya, cherry, mango and rambutan. Using CNN-based deep learning, the network is trained for image classification on a database of leaf images captured with a mobile phone. ResNet-50 was the architecture used for neural-network image classification and for training the network for leaf identification. Recognition of leaf photographs requires several steps, starting with image pre-processing, feature extraction, plant identification, matching and testing, and finally extracting the results, all achieved in MATLAB. The testing sets of the system consist of three types of images: white background, noise added, and random background. Finally, an interface for the leaf identification system was developed as the end software product using MATLAB App Designer. As a result, the accuracy achieved for each training set on the five leaf classes is above 98%; thus the recognition process was successfully implemented.
Customized moodle-based learning management system for socially disadvantaged...journalBEEI
This study aims to develop Moodle-based LMS with customized learning content and modified user interface to facilitate pedagogical processes during covid-19 pandemic and investigate how teachers of socially disadvantaged schools perceived usability and technology acceptance. Co-design process was conducted with two activities: 1) need assessment phase using an online survey and interview session with the teachers and 2) the development phase of the LMS. The system was evaluated by 30 teachers from socially disadvantaged schools for relevance to their distance learning activities. We employed computer software usability questionnaire (CSUQ) to measure perceived usability and the technology acceptance model (TAM) with insertion of 3 original variables (i.e., perceived usefulness, perceived ease of use, and intention to use) and 5 external variables (i.e., attitude toward the system, perceived interaction, self-efficacy, user interface design, and course design). The average CSUQ rating exceeded 5.0 of 7 point-scale, indicated that teachers agreed that the information quality, interaction quality, and user interface quality were clear and easy to understand. TAM results concluded that the LMS design was judged to be usable, interactive, and well-developed. Teachers reported an effective user interface that allows effective teaching operations and lead to the system adoption in immediate time.
Understanding the role of individual learner in adaptive and personalized e-l...journalBEEI
Dynamic learning environment has emerged as a powerful platform in a modern e-learning system. The learning situation that constantly changing has forced the learning platform to adapt and personalize its learning resources for students. Evidence suggested that adaptation and personalization of e-learning systems (APLS) can be achieved by utilizing learner modeling, domain modeling, and instructional modeling. In the literature of APLS, questions have been raised about the role of individual characteristics that are relevant for adaptation. With several options, a new problem has been raised where the attributes of students in APLS often overlap and are not related between studies. Therefore, this study proposed a list of learner model attributes in dynamic learning to support adaptation and personalization. The study was conducted by exploring concepts from the literature selected based on the best criteria. Then, we described the results of important concepts in student modeling and provided definitions and examples of data values that researchers have used. Besides, we also discussed the implementation of the selected learner model in providing adaptation in dynamic learning.
Prototype mobile contactless transaction system in traditional markets to sup...journalBEEI
1) Researchers developed a prototype contactless transaction system using QR codes and digital payments to support physical distancing during the COVID-19 pandemic in traditional markets.
2) The system allows sellers and buyers in traditional markets to conduct fast, secure transactions via smartphones without direct cash exchange. Buyers scan sellers' QR codes to view product details and make e-wallet payments.
3) Testing showed the system's functions worked properly and users found it easy to use and useful for supporting contactless transactions and digital transformation of traditional markets. However, further development is needed to increase trust in digital payments for users unfamiliar with the technology.
Wireless HART stack using multiprocessor technique with laxity algorithmjournalBEEI
The use of a real-time operating system is required for the demarcation of industrial wireless sensor network (IWSN) stacks (RTOS). In the industrial world, a vast number of sensors are utilised to gather various types of data. The data gathered by the sensors cannot be prioritised ahead of time. Because all of the information is equally essential. As a result, a protocol stack is employed to guarantee that data is acquired and processed fairly. In IWSN, the protocol stack is implemented using RTOS. The data collected from IWSN sensor nodes is processed using non-preemptive scheduling and the protocol stack, and then sent in parallel to the IWSN's central controller. The real-time operating system (RTOS) is a process that occurs between hardware and software. Packets must be sent at a certain time. It's possible that some packets may collide during transmission. We're going to undertake this project to get around this collision. As a prototype, this project is divided into two parts. The first uses RTOS and the LPC2148 as a master node, while the second serves as a standard data collection node to which sensors are attached. Any controller may be used in the second part, depending on the situation. Wireless HART allows two nodes to communicate with each other.
Implementation of double-layer loaded on octagon microstrip yagi antennajournalBEEI
This document describes the implementation of a double-layer structure on an octagon microstrip yagi antenna (OMYA) to improve its performance at 5.8 GHz. The double-layer consists of two double positive (DPS) substrates placed above the OMYA. Simulation and experimental results show that the double-layer configuration increases the gain of the OMYA by 2.5 dB compared to without the double-layer. The measured bandwidth of the OMYA with double-layer is 14.6%, indicating the double-layer can increase both the gain and bandwidth of the OMYA.
The calculation of the field of an antenna located near the human headjournalBEEI
In this work, a numerical calculation was carried out in one of the universal programs for automatic electro-dynamic design. The calculation is aimed at obtaining numerical values for specific absorbed power (SAR). It is the SAR value that can be used to determine the effect of the antenna of a wireless device on biological objects; the dipole parameters will be selected for GSM1800. Investigation of the influence of distance to a cell phone on radiation shows that absorbed in the head of a person the effect of electromagnetic radiation on the brain decreases by three times this is a very important result the SAR value has decreased by almost three times it is acceptable results.
Exact secure outage probability performance of uplinkdownlink multiple access...journalBEEI
In this paper, we study uplink-downlink non-orthogonal multiple access (NOMA) systems by considering the secure performance at the physical layer. In the considered system model, the base station acts a relay to allow two users at the left side communicate with two users at the right side. By considering imperfect channel state information (CSI), the secure performance need be studied since an eavesdropper wants to overhear signals processed at the downlink. To provide secure performance metric, we derive exact expressions of secrecy outage probability (SOP) and and evaluating the impacts of main parameters on SOP metric. The important finding is that we can achieve the higher secrecy performance at high signal to noise ratio (SNR). Moreover, the numerical results demonstrate that the SOP tends to a constant at high SNR. Finally, our results show that the power allocation factors, target rates are main factors affecting to the secrecy performance of considered uplink-downlink NOMA systems.
Design of a dual-band antenna for energy harvesting applicationjournalBEEI
This report presents an investigation on how to improve the current dual-band antenna to enhance the better result of the antenna parameters for energy harvesting application. Besides that, to develop a new design and validate the antenna frequencies that will operate at 2.4 GHz and 5.4 GHz. At 5.4 GHz, more data can be transmitted compare to 2.4 GHz. However, 2.4 GHz has long distance of radiation, so it can be used when far away from the antenna module compare to 5 GHz that has short distance in radiation. The development of this project includes the scope of designing and testing of antenna using computer simulation technology (CST) 2018 software and vector network analyzer (VNA) equipment. In the process of designing, fundamental parameters of antenna are being measured and validated, in purpose to identify the better antenna performance.
Transforming data-centric eXtensible markup language into relational database...journalBEEI
eXtensible markup language (XML) appeared internationally as the format for data representation over the web. Yet, most organizations are still utilising relational databases as their database solutions. As such, it is crucial to provide seamless integration via effective transformation between these database infrastructures. In this paper, we propose XML-REG to bridge these two technologies based on node-based and path-based approaches. The node-based approach is good to annotate each positional node uniquely, while the path-based approach provides summarised path information to join the nodes. On top of that, a new range labelling is also proposed to annotate nodes uniquely by ensuring the structural relationships are maintained between nodes. If a new node is to be added to the document, re-labelling is not required as the new label will be assigned to the node via the new proposed labelling scheme. Experimental evaluations indicated that the performance of XML-REG exceeded XMap, XRecursive, XAncestor and Mini-XML concerning storing time, query retrieval time and scalability. This research produces a core framework for XML to relational databases (RDB) mapping, which could be adopted in various industries.
Key performance requirement of future next wireless networks (6G)journalBEEI
The document provides an overview of the key performance indicators (KPIs) for 6G wireless networks compared to 5G networks. Some of the major KPIs discussed for 6G include: achieving data rates of up to 1 Tbps and individual user data rates up to 100 Gbps; reducing latency below 10 milliseconds; supporting up to 10 million connected devices per square kilometer; improving spectral efficiency by up to 100 times through technologies like terahertz communications and smart surfaces; and achieving an energy efficiency of 1 pico-joule per bit transmitted through techniques like wireless power transmission and energy harvesting. The document outlines how 6G aims to integrate terrestrial, aerial and maritime communications into a single network to provide ubiquitous connectivity with higher
Noise resistance territorial intensity-based optical flow using inverse confi...journalBEEI
This paper presents the use of the inverse confidential technique on bilateral function with the territorial intensity-based optical flow to prove the effectiveness in noise resistance environment. In general, the image’s motion vector is coded by the technique called optical flow where the sequences of the image are used to determine the motion vector. But, the accuracy rate of the motion vector is reduced when the source of image sequences is interfered by noises. This work proved that the inverse confidential technique on bilateral function can increase the percentage of accuracy in the motion vector determination by the territorial intensity-based optical flow under the noisy environment. We performed the testing with several kinds of non-Gaussian noises at several patterns of standard image sequences by analyzing the result of the motion vector in a form of the error vector magnitude (EVM) and compared it with several noise resistance techniques in territorial intensity-based optical flow method.
Modeling climate phenomenon with software grids analysis and display system i...journalBEEI
This study aims to model climate change based on rainfall, air temperature, pressure, humidity and wind with grADS software and create a global warming module. This research uses 3D model, define, design, and develop. The results of the modeling of the five climate elements consist of the annual average temperature in Indonesia in 2009-2015 which is between 29oC to 30.1oC, the horizontal distribution of the annual average pressure in Indonesia in 2009-2018 is between 800 mBar to 1000 mBar, the horizontal distribution the average annual humidity in Indonesia in 2009 and 2011 ranged between 27-57, in 2012-2015, 2017 and 2018 it ranged between 30-60, during the East Monsoon, the wind circulation moved from northern Indonesia to the southern region Indonesia. During the west monsoon, the wind circulation moves from the southern part of Indonesia to the northern part of Indonesia. The global warming module for SMA/MA produced is feasible to use, this is in accordance with the value given by the validate of 69 which is in the appropriate category and the response of teachers and students through a 91% questionnaire.
An approach of re-organizing input dataset to enhance the quality of emotion ...journalBEEI
The purpose of this paper is to propose an approach of re-organizing input data to recognize emotion based on short signal segments and increase the quality of emotional recognition using physiological signals. MIT's long physiological signal set was divided into two new datasets, with shorter and overlapped segments. Three different classification methods (support vector machine, random forest, and multilayer perceptron) were implemented to identify eight emotional states based on statistical features of each segment in these two datasets. By re-organizing the input dataset, the quality of recognition results was enhanced. The random forest shows the best classification result among three implemented classification methods, with an accuracy of 97.72% for eight emotional states, on the overlapped dataset. This approach shows that, by re-organizing the input dataset, the high accuracy of recognition results can be achieved without the use of EEG and ECG signals.
Parking detection system using background subtraction and HSV color segmentationjournalBEEI
Manual system vehicle parking makes finding vacant parking lots difficult, so it has to check directly to the vacant space. If many people do parking, then the time needed for it is very much or requires many people to handle it. This research develops a real-time parking system to detect parking. The system is designed using the HSV color segmentation method in determining the background image. In addition, the detection process uses the background subtraction method. Applying these two methods requires image preprocessing using several methods such as grayscaling, blurring (low-pass filter). In addition, it is followed by a thresholding and filtering process to get the best image in the detection process. In the process, there is a determination of the ROI to determine the focus area of the object identified as empty parking. The parking detection process produces the best average accuracy of 95.76%. The minimum threshold value of 255 pixels is 0.4. This value is the best value from 33 test data in several criteria, such as the time of capture, composition and color of the vehicle, the shape of the shadow of the object’s environment, and the intensity of light. This parking detection system can be implemented in real-time to determine the position of an empty place.
Quality of service performances of video and voice transmission in universal ...journalBEEI
The universal mobile telecommunications system (UMTS) has distinct benefits in that it supports a wide range of quality of service (QoS) criteria that users require in order to fulfill their requirements. The transmission of video and audio in real-time applications places a high demand on the cellular network, therefore QoS is a major problem in these applications. The ability to provide QoS in the UMTS backbone network necessitates an active QoS mechanism in order to maintain the necessary level of convenience on UMTS networks. For UMTS networks, investigation models for end-to-end QoS, total transmitted and received data, packet loss, and throughput providing techniques are run and assessed and the simulation results are examined. According to the results, appropriate QoS adaption allows for specific voice and video transmission. Finally, by analyzing existing QoS parameters, the QoS performance of 4G/UMTS networks may be improved.
A multi-task learning based hybrid prediction algorithm for privacy preservin...journalBEEI
There is ever increasing need to use computer vision devices to capture videos as part of many real-world applications. However, invading privacy of people is the cause of concern. There is need for protecting privacy of people while videos are used purposefully based on objective functions. One such use case is human activity recognition without disclosing human identity. In this paper, we proposed a multi-task learning based hybrid prediction algorithm (MTL-HPA) towards realising privacy preserving human activity recognition framework (PPHARF). It serves the purpose by recognizing human activities from videos while preserving identity of humans present in the multimedia object. Face of any person in the video is anonymized to preserve privacy while the actions of the person are exposed to get them extracted. Without losing utility of human activity recognition, anonymization is achieved. Humans and face detection methods file to reveal identity of the persons in video. We experimentally confirm with joint-annotated human motion data base (JHMDB) and daily action localization in YouTube (DALY) datasets that the framework recognises human activities and ensures non-disclosure of privacy information. Our approach is better than many traditional anonymization techniques such as noise adding, blurring, and masking.
A brief introduction to quadcopter (drone) working. It provides an overview of flight stability, dynamics, general control system block diagram, and the electronic hardware.
Development of Chatbot Using AI/ML Technologiesmaisnampibarel
The rapid advancements in artificial intelligence and natural language processing have significantly transformed human-computer interactions. This thesis presents the design, development, and evaluation of an intelligent chatbot capable of engaging in natural and meaningful conversations with users. The chatbot leverages state-of-the-art deep learning techniques, including transformer-based architectures, to understand and generate human-like responses.
Key contributions of this research include the implementation of a context- aware conversational model that can maintain coherent dialogue over extended interactions. The chatbot's performance is evaluated through both automated metrics and user studies, demonstrating its effectiveness in various applications such as customer service, mental health support, and educational assistance. Additionally, ethical considerations and potential biases in chatbot responses are examined to ensure the responsible deployment of this technology.
The findings of this thesis highlight the potential of intelligent chatbots to enhance user experience and provide valuable insights for future developments in conversational AI.
Profiling of Cafe Business in Talavera, Nueva Ecija: A Basis for Development ...IJAEMSJORNAL
This study aimed to profile the coffee shops in Talavera, Nueva Ecija, to develop a standardized checklist for aspiring entrepreneurs. The researchers surveyed 10 coffee shop owners in the municipality of Talavera. Through surveys, the researchers delved into the Owner's Demographic, Business details, Financial Requirements, and other requirements needed to consider starting up a coffee shop. Furthermore, through accurate analysis, the data obtained from the coffee shop owners are arranged to derive key insights. By analyzing this data, the study identifies best practices associated with start-up coffee shops’ profitability in Talavera. These findings were translated into a standardized checklist outlining essential procedures including the lists of equipment needed, financial requirements, and the Traditional and Social Media Marketing techniques. This standardized checklist served as a valuable tool for aspiring and existing coffee shop owners in Talavera, streamlining operations, ensuring consistency, and contributing to business success.
In May 2024, globally renowned natural diamond crafting company Shree Ramkrishna Exports Pvt. Ltd. (SRK) became the first company in the world to achieve GNFZ’s final net zero certification for existing buildings, for its two two flagship crafting facilities SRK House and SRK Empire. Initially targeting 2030 to reach net zero, SRK joined forces with the Global Network for Zero (GNFZ) to accelerate its target to 2024 — a trailblazing achievement toward emissions elimination.
Social media management system project report.pdfKamal Acharya
The project "Social Media Platform in Object-Oriented Modeling" aims to design
and model a robust and scalable social media platform using object-oriented
modeling principles. In the age of digital communication, social media platforms
have become indispensable for connecting people, sharing content, and fostering
online communities. However, their complex nature requires meticulous planning
and organization.This project addresses the challenge of creating a feature-rich and
user-friendly social media platform by applying key object-oriented modeling
concepts. It entails the identification and definition of essential objects such as
"User," "Post," "Comment," and "Notification," each encapsulating specific
attributes and behaviors. Relationships between these objects, such as friendships,
content interactions, and notifications, are meticulously established.The project
emphasizes encapsulation to maintain data integrity, inheritance for shared behaviors
among objects, and polymorphism for flexible content handling. Use case diagrams
depict user interactions, while sequence diagrams showcase the flow of interactions
during critical scenarios. Class diagrams provide an overarching view of the system's
architecture, including classes, attributes, and methods .By undertaking this project,
we aim to create a modular, maintainable, and user-centric social media platform that
adheres to best practices in object-oriented modeling. Such a platform will offer users
a seamless and secure online social experience while facilitating future enhancements
and adaptability to changing user needs.
Exploring Deep Learning Models for Image Recognition: A Comparative Reviewsipij
Image recognition, which comes under Artificial Intelligence (AI) is a critical aspect of computer vision,
enabling computers or other computing devices to identify and categorize objects within images. Among
numerous fields of life, food processing is an important area, in which image processing plays a vital role,
both for producers and consumers. This study focuses on the binary classification of strawberries, where
images are sorted into one of two categories. We Utilized a dataset of strawberry images for this study; we
aim to determine the effectiveness of different models in identifying whether an image contains
strawberries. This research has practical applications in fields such as agriculture and quality control. We
compared various popular deep learning models, including MobileNetV2, Convolutional Neural Networks
(CNN), and DenseNet121, for binary classification of strawberry images. The accuracy achieved by
MobileNetV2 is 96.7%, CNN is 99.8%, and DenseNet121 is 93.6%. Through rigorous testing and analysis,
our results demonstrate that CNN outperforms the other models in this task. In the future, the deep
learning models can be evaluated on a richer and larger number of images (datasets) for better/improved
results.
OCS Training Institute is pleased to co-operate with
a Global provider of Rig Inspection/Audits,
Commission-ing, Compliance & Acceptance as well as
& Engineering for Offshore Drilling Rigs, to deliver
Drilling Rig Inspec-tion Workshops (RIW) which
teaches the inspection & maintenance procedures
required to ensure equipment integrity. Candidates
learn to implement the relevant standards &
understand industry requirements so that they can
verify the condition of a rig’s equipment & improve
safety, thus reducing the number of accidents and
protecting the asset.
Online music portal management system project report.pdfKamal Acharya
The iMMS is a unique application that is synchronizing both user
experience and copyrights while providing services like online music
management, legal downloads, artists’ management. There are several
other applications available in the market that either provides some
specific services or large scale integrated solutions. Our product differs
from the rest in a way that we give more power to the users remaining
within the copyrights circle.
Best Practices of Clothing Businesses in Talavera, Nueva Ecija, A Foundation ...IJAEMSJORNAL
This study primarily aimed to determine the best practices of clothing businesses to use it as a foundation of strategic business advancements. Moreover, the frequency with which the business's best practices are tracked, which best practices are the most targeted of the apparel firms to be retained, and how does best practices can be used as strategic business advancement. The respondents of the study is the owners of clothing businesses in Talavera, Nueva Ecija. Data were collected and analyzed using a quantitative approach and utilizing a descriptive research design. Unveiling best practices of clothing businesses as a foundation for strategic business advancement through statistical analysis: frequency and percentage, and weighted means analyzing the data in terms of identifying the most to the least important performance indicators of the businesses among all of the variables. Based on the survey conducted on clothing businesses in Talavera, Nueva Ecija, several best practices emerge across different areas of business operations. These practices are categorized into three main sections, section one being the Business Profile and Legal Requirements, followed by the tracking of indicators in terms of Product, Place, Promotion, and Price, and Key Performance Indicators (KPIs) covering finance, marketing, production, technical, and distribution aspects. The research study delved into identifying the core best practices of clothing businesses, serving as a strategic guide for their advancement. Through meticulous analysis, several key findings emerged. Firstly, prioritizing product factors, such as maintaining optimal stock levels and maximizing customer satisfaction, was deemed essential for driving sales and fostering loyalty. Additionally, selecting the right store location was crucial for visibility and accessibility, directly impacting footfall and sales. 
Vigilance towards competitors and demographic shifts was highlighted as essential for maintaining relevance. Understanding the relationship between marketing spend and customer acquisition proved pivotal for optimizing budgets and achieving a higher ROI. Strategic analysis of profit margins across clothing items emerged as crucial for maximizing profitability and revenue. Creating a positive customer experience, investing in employee training, and implementing effective inventory management practices were also identified as critical success factors. In essence, these findings underscored the holistic approach needed for sustainable growth in the clothing business, emphasizing the importance of product management, marketing strategies, customer experience, and operational efficiency.
Unblocking The Main Thread - Solving ANRs and Frozen FramesSinan KOZAK
In the realm of Android development, the main thread is our stage, but too often, it becomes a battleground where performance issues arise, leading to ANRS, frozen frames, and sluggish Uls. As we strive for excellence in user experience, understanding and optimizing the main thread becomes essential to prevent these common perforrmance bottlenecks. We have strategies and best practices for keeping the main thread uncluttered. We'll examine the root causes of performance issues and techniques for monitoring and improving main thread health as wel as app performance. In this talk, participants will walk away with practical knowledge on enhancing app performance by mastering the main thread. We'll share proven approaches to eliminate real-life ANRS and frozen frames to build apps that deliver butter smooth experience.
Unblocking The Main Thread - Solving ANRs and Frozen Frames
Bulletin of Electrical Engineering and Informatics
Vol. 10, No. 6, December 2021, pp. 3385-3392
ISSN: 2302-9285, DOI: 10.11591/eei.v10i6.3129
Square transposition: an approach to the transposition
process in block cipher
Magdalena A. Ineke Pekereng, Alz Danny Wowor
Department of Informatics Engineering, Universitas Kristen Satya Wacana, Salatiga, Indonesia
Article Info
Article history:
Received Dec 26, 2020
Revised Mar 30, 2021
Accepted Oct 7, 2021
Keywords:
AES
DES
Input scheme
Retrieval scheme
Square transposition
ABSTRACT
The transposition process is needed in cryptography to create a diffusion effect on
data encryption standard (DES) and advanced encryption standard (AES) algorithms
as standard information security algorithms by the National Institute of Standards and
Technology. The problem with DES and AES algorithms is that their transposition in-
dex values form patterns and do not form random values. This condition will certainly
make it easier for a cryptanalyst to look for a relationship between ciphertexts because
some processes are predictable. This research designs a transposition algorithm called
square transposition. Each process uses square 8 × 8 as a place to insert and retrieve
64-bits. The determination of the pairing of the input scheme and the retrieval scheme
that have unequal flow is an important factor in producing a good transposition. The
square transposition can generate random and non-pattern indices so that transposition
can be done better than DES and AES.
This is an open access article under the CC BY-SA license.
Corresponding Author:
Magdalena A. Ineke Pekereng
Department of Informatics Engineering
Universitas Kristen Satya Wacana
Jl. Notohamidjojo 1-10, Salatiga 50718, Indonesia
Email: ineke.pakereng@uksw.edu
1. INTRODUCTION
The transposition process is useful for creating a diffusion effect, i.e., spreading plaintext redundancy throughout the ciphertext. Modern cryptographic algorithms such as the data encryption standard (DES) and the advanced encryption standard (AES), the information security standards adopted by the National Institute of Standards and Technology (NIST), also contain transposition as one of the important processes in the algorithm [1], [2]. In DES, the initial permutation (IP) and the inverse initial permutation (IP)⁻¹ are essential to the transposition process [3], whereas AES, with its shift-rows operation, performs a simpler transposition [4]. Both algorithms use index values to determine the shift of each object. The strong diffusion in these algorithms is one of the factors that keeps DES and AES attractive and feasible to use, and it is why researchers choose them as their information security methods [5]-[21].
The DES transposition index values in Figure 1 show patterned results. The 64-bit output of DES always forms 8-bit groups, and every group of eight index values produces the same pattern, starting from the highest value and gradually decreasing. For example, taking a_i as an index value in the first group, where i = 1, 2, 3, ..., 8, the value at the same position in the next group always becomes a_i + 1 (mod 64).
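The regularity described above can be checked directly on the well-known DES IP table. The sketch below is illustrative (the `group_differences` helper is not from the paper): it tests whether corresponding positions in consecutive 8-value groups differ by a constant, and for the DES IP table every adjacent pair of groups does, which confirms the patterned structure (the exact constant depends on how the table is read).

```python
# Standard DES initial permutation (IP) table, 1-based input bit indices.
DES_IP = [
    58, 50, 42, 34, 26, 18, 10, 2,
    60, 52, 44, 36, 28, 20, 12, 4,
    62, 54, 46, 38, 30, 22, 14, 6,
    64, 56, 48, 40, 32, 24, 16, 8,
    57, 49, 41, 33, 25, 17, 9, 1,
    59, 51, 43, 35, 27, 19, 11, 3,
    61, 53, 45, 37, 29, 21, 13, 5,
    63, 55, 47, 39, 31, 23, 15, 7,
]

def group_differences(perm, group_size=8):
    """Per-position differences between consecutive groups of index values.

    If every adjacent pair of groups differs by a single constant, the
    permutation is highly patterned, and hence predictable.
    """
    groups = [perm[i:i + group_size] for i in range(0, len(perm), group_size)]
    diffs = []
    for a, b in zip(groups, groups[1:]):
        d = {y - x for x, y in zip(a, b)}
        diffs.append(d.pop() if len(d) == 1 else None)  # None = not constant
    return diffs

print(group_differences(DES_IP))  # -> [2, 2, 2, -7, 2, 2, 2]
```

Every entry is a constant, so each group of eight is a rigid shift of the previous one; nothing about the table is random.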
Journal homepage: http://beei.org

The transposition in AES also has patterned index values, as shown in Figure 2. The AES index values form groups of 4 characters. The first group (4-0) consists of four upward histograms and zero different histograms. The second group (3-1) consists of three upward histograms and one histogram with a different value. The same condition applies to the third group (2-2) and the fourth group (1-3).
The index value is expected to serve as a new position that makes the sequence of characters more irregular, so that the diffusion factor appears more strongly in the ciphertext. The movement of objects based on index values in DES and AES indicates a problem because it forms a certain pattern. The problem in DES and AES is that when the sequence or pattern of data {1, 2, 3, · · · , n} is known, the probability of finding the {n+1, n+2, · · · } data is high. This weakens the algorithm, and the patterned condition certainly makes it easier for cryptanalysts to find a plaintext-ciphertext relationship because part of the process is predictable.
Studies related to the transposition process were also carried out in [22]-[25], which modified the shift-row transposition operation in AES cryptography so that XOR, as an additional operation, can be performed repeatedly up to three times. The research in [26], [27] adds various processes to correct the shortcomings of transposition in the algorithm. Although improving the transposition process by adding algorithms in parallel certainly obtains a good result, the added algorithms take more time and space; in terms of efficiency, such an algorithm is less elegant for securing information.
Figure 1. Index values of DES transposition
Figure 2. Index values of AES transposition
This research designs a transposition algorithm called square transposition. A square of n × n size is used as the medium to hold (m = n × n) bits. Each input bit is entered into the square using certain rules, and bits are also retrieved using certain rules. Whether the designed algorithm is good or not is determined by statistical testing: randomness testing determines the randomness of each index value, while correlation testing measures the algorithm's ability to disguise the relationship between input and output. Finally, DES and AES are compared to determine the power of square transposition as the transposition process of an algorithm.
2. PROPOSED RESEARCH
2.1. Square transposition
Square transposition consists of two processes namely bit-input into a square and bit-retrieval with a
certain predetermined size. Suppose T = text input, ti = i-th text character and ai = i-th binary character, then:
T = {t1, t2, · · · , tn}; 8 | n, n ∈ Z+ (1)

where t1 = {a01, a02, a03, · · · , a08}, t2 = {a09, a10, a11, · · · , a16}, t3 = {a17, a18, a19, · · · , a24}, · · · , tn = {a8n−7, a8n−6, a8n−5, · · · , a8n}. If 8 ∤ n, then padding of k characters is done, resulting in (2), with 8 | (n + k); k = 1, 2, · · · , 7.

T = {t1, t2, · · · , tn, tn+1, tn+2, · · · , tn+k} (2)
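The padding rule above can be sketched as follows (a minimal sketch; the choice of pad character is an assumption, since the text does not specify one):

```python
def pad_to_multiple_of_8(text: str, pad_char: str = "\x00") -> str:
    """Append k pad characters (k = 0..7) so that 8 divides n + k,
    where n is the original character count."""
    k = (-len(text)) % 8   # smallest k making len(text) + k a multiple of 8
    return text + pad_char * k
```

An already 8-character input such as "fti uksw" needs no padding (k = 0), while a 5-character input receives 3 pad characters.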
The square used as the transposition medium can be adjusted to the bit size of the text input. This research chooses 64-bit text input, so a square of size 8 × 8 is used, as shown in Figure 3.
Figure 3. Square transposition 8 × 8
Bulletin of Electr Eng & Inf, Vol. 10, No. 6, December 2021 : 3385 – 3392
The entry scheme is a way to place each bit ai, i ∈ Z+64, into an entry of the square with certain rules. For example, after every bit has entered the square, the order of the bits is given in (3).

Tsq = {a*1, a*2, a*3, · · · , a*64} (3)

A retrieval scheme is a way to take every bit a*i, i ∈ Z+64, from the square with a certain rule. The notation for a bit taken from the square is a*i(j); i, j ∈ Z+64, where i is the entry index and j is the retrieval index. In (4) is a schema collection dataset L = {l1, l2, l3, · · · , l8}, where x ∈ Z+64.

l1 = {a*x(01), a*x(02), a*x(03), · · · , a*x(08)},
l2 = {a*x(09), a*x(10), a*x(11), · · · , a*x(16)},
⋮
l8 = {a*x(57), a*x(58), a*x(59), · · · , a*x(64)}. (4)
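In this formalism, an entry scheme and a retrieval scheme are each a permutation of the 64 square cells. A minimal sketch of the whole square transposition under that representation (the list-of-cells representation, not the schemes themselves, is our assumption):

```python
def square_transposition(bits, entry, retrieval):
    """Write bits[0..63] into an 8x8 square (stored row-major) following
    the entry permutation, then read them out following the retrieval
    permutation. Both schemes map step number -> cell index 0..63."""
    assert len(bits) == len(entry) == len(retrieval) == 64
    square = [None] * 64
    for step, cell in enumerate(entry):
        square[cell] = bits[step]          # bit-input phase
    return [square[cell] for cell in retrieval]  # bit-retrieval phase
```

Composing the entry permutation with the retrieval permutation gives the overall index mapping whose randomness the following sections test.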
2.2. Square transposition schematic testing
Every combination of input and retrieval schemes in the square transposition results in a transposition method, and each combination must produce a random order of index values. Any user can design their own input and retrieval schemes; therefore, randomness testing needs to be done to ensure that every designed scheme produces a good transposition method.
Figure 4 shows the testing scheme: if a pair of schemes has not yet reached randomness, a scheme can be replaced by another. This research uses three randomness tests (Frequency Monobit Test, Frequency Test within a Block, and Runs Test); if two or three test results are random, the combination of schemes can be used as a transposition method.
Figure 4. Testing of input and output schemes
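Of the three randomness tests, the Frequency Monobit Test is the simplest. The following is a minimal sketch of its p-value computation per NIST SP 800-22 (applying it to transposition indices presumes the index sequence is first serialized to bits, which is our assumption about the procedure):

```python
import math

def monobit_p_value(bits):
    """NIST SP 800-22 frequency (monobit) test: map bits to +/-1,
    normalize the absolute sum, and convert to a p-value via erfc."""
    n = len(bits)
    s = sum(1 if b else -1 for b in bits)          # partial sum of +/-1
    s_obs = abs(s) / math.sqrt(n)                  # normalized statistic
    return math.erfc(s_obs / math.sqrt(2))         # two-sided p-value
```

A sequence is considered random at α = 0.01 when the p-value exceeds 0.01; a perfectly balanced bit sequence yields the maximal p-value of 1.0.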
3. RESULT AND DISCUSSION
3.1. Square transposition entry scheme
Based on (1), 64 bits are used as input with a square of size 8 × 8. Two input schemes with randomly chosen index values are selected; they are shown in Figures 5 and 6, respectively.
Figure 5. Input scheme 1
Figure 6. Input scheme 2
3.2. Retrieval scheme design
The retrieval scheme is a rule for taking every bit from a square that was previously filled by the bit-entry process. Several retrieval schemes used as pairs for the input schemes are given below.
3.2.1. Horizontal retrieval scheme
This design uses the input-1 scheme to insert bits into the square, as shown in Figure 7. The horizontal retrieval process is carried out from the top-left corner to the right side of the square. The bit a8i+1, for i = 0, 1, · · · , 7, is always the leftmost entry of row (i + 1) of the square.
Square transposition: an approach to the transposition process in block ... (Magdalena A. Ineke Pekereng)
The horizontal retrieval starts from a37 at index j = 1 and ends at j = 64 with a55. Thus, the byte-based square transposition output, as given previously in (4), is the schema collection dataset L = {l1, l2, l3, · · · , l8}, where l1 = {a37, a29, · · · , a48}, l2 = {a30, a39, a41, · · · , a43}, · · · , l8 = {a61, a13, a02, · · · , a55}. The transposition results of the input-1 scheme and the horizontal retrieval scheme can be visualized in Cartesian coordinates, with the retrieval index as abscissa and the entry index as ordinate. The complete bit retrieval results are shown in Figure 8.
Figure 7. Horizontal retrieval scheme
Figure 8. Graphic of the index values
3.2.2. Vertical retrieval scheme
Square transposition also uses an input-1 scheme to input each entry from the square. Retrieval is
done vertically from top to bottom, starting at the top right corner entry to the bottom right of the square. In
general, every bit of ai (j) and the retrieval index j = (8z + 1); z ∈ {0, 1, · · · , 7}. If z is even, the retrieval
is done vertically top-down, and if z is odd, the retrieval will be done from the bottom up. The retrieval results
are based on bits shown in Figure 9.
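The rule above can be sketched as a permutation generator (reading the z-th visited column as the z-th column from the right follows the "top right corner" start and is our interpretation):

```python
def vertical_retrieval_order(n=8):
    """Retrieval order over an n x n square stored row-major: columns are
    visited from right to left; even-numbered visits (z) go top-down,
    odd-numbered visits go bottom-up (boustrophedon)."""
    order = []
    for z in range(n):
        col = n - 1 - z                       # z-th column from the right
        rows = range(n) if z % 2 == 0 else range(n - 1, -1, -1)
        order.extend(r * n + col for r in rows)
    return order
```

The first retrieved cell is the top-right corner (cell 7 in row-major numbering) and the eighth is the bottom-right corner (cell 63), after which the next column is read upward.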
The vertical retrieval starts from a48 at index j = 1 and ends at j = 64 with a37. The byte-based square transposition output is L = {l1, l2, l3, · · · , l8}, where l1 = {a48, a43, a06, · · · , a55}, l2 = {a52, a33, a15, · · · , a27}, · · · , l8 = {a61, a17, a54, · · · , a37}. The visualization of the transposition indices of the input-1 scheme and the vertical retrieval scheme is shown in Figure 10.
Figure 9. Vertical retrieval scheme
Figure 10. Graphic of the index values
3.2.3. Zigzag retrieval scheme
The input-2 scheme of Figure 6 is used, and the retrieval is done in zigzag form from the lower left to the upper right of the square. The retrieval plots are based on index values j = 1 to j = 64; the complete plots are shown in Figure 11. Retrieval starts from a46 and ends at a25, so the byte-based square transposition output is L = {l1, l2, l3, · · · , l8}, where l1 = {a46, a42, a16, · · · , a63}, l2 = {a40, a17, a64, · · · , a29}, · · · , l8 = {a05, a11, a09, · · · , a25}. The geometric interpretation of the index values of the input-2 scheme and the zigzag retrieval scheme is shown in Figure 12.
Figure 11. Zigzag retrieval scheme
Figure 12. Graphic of the index values
3.2.4. Rice plow retrieval scheme
The transposition technique adopting the rice plow process treats the square as a rice field plot. Each bit plot follows the rice plow path, starting from an outside point and moving toward the midpoint; the complete plots are shown in Figure 13. The input-2 scheme is used to fill each entry of the square so that retrieval using the rice plow plot can be carried out.
The retrieval process starts from the lower-right corner (a08) with a plot that rotates around the square toward the center (a04). The transposition output indices of the input-2 scheme and the rice plow retrieval scheme are L = {l1, l2, · · · , l8}, where l1 = {a08, a21, a56, · · · , a46}, l2 = {a42, a28, a47, · · · , a24}, · · · , l8 = {a43, a43, a27, · · · , a04}. The visualization of the transposition index values is shown in Figure 14.
Figure 13. Rice plow retrieval scheme
Figure 14. Graphic of the index values
3.3. Testing of randomness on index values
The methods used in randomness testing are the Frequency Monobit Test, the Frequency Test within a Block, and the Runs Test, with α = 0.01. Each transposition index sequence is declared random if two or three test results have p-value > α. The complete test results are shown in Table 1. Every combination of an input scheme and a retrieval scheme is tested to see how well the designed or selected pair of schemes lets square transposition produce random index values.
The input-1 & vertical pair obtains the highest p-values, with an average of 0.273, and the lowest belongs to the input-1 & zigzag pair, with an average p-value of 0.101. Overall, every pair of input and retrieval schemes maintains p-values that yield random index values, and each pair produces a better index sequence than the transposition methods of the AES and DES algorithms.
Table 1. Randomness test result for each scheme
No | Input & Retrieval Scheme     | p-value (Mono Bit) | p-value (Block Bit) | p-value (Run-Test) | Result
1  | Input-1 & Horizontal Scheme  | 0.1341             | 0.1230              | 0.1853             | random
2  | Input-1 & Vertical Scheme    | 0.2727             | 0.2194              | 0.1432             | random
3  | Input-2 & Zigzag Scheme      | 0.1204             | 0.2282              | 0.2031             | random
4  | Input-2 & Rice Plow Scheme   | 0.1950             | 0.1917              | 0.1981             | random
5  | Input-1 & Zigzag Scheme      | 0.1012             | 0.1118              | 0.2116             | random
6  | Input-1 & Rice Plow Scheme   | 0.1018             | 0.2014              | 0.1102             | random
7  | Input-2 & Horizontal Scheme  | 0.1210             | 0.1052              | 0.2056             | random
8  | Input-2 & Vertical Scheme    | 0.2044             | 0.2015              | 0.1186             | random
9  | DES                          | 2.569 × 10⁻⁸       | 0.0539              | 9.172 × 10⁻⁶       | non-random
10 | AES                          | 4.251 × 10⁻⁸       | 0.0042              | 1.456 × 10⁻⁴       | non-random
The design of the input-1 and input-2 schemes plays an important role in yielding random index values. Selecting a pair in which the input and retrieval schemes, whether horizontal, vertical, zigzag, plow, or otherwise, share the same or a similar line direction will generate poor transposition index values, because the entry and retrieval patterns then coincide.
3.4. Correlation testing
Correlation value (r) can be used to see the magnitude of the relationship between input (x) and
output (y) of statistically related algorithms. The correlation interval is −1 ≤ r ≤ 1, and if r approaches 0,
then the algorithm is able to make the input and output not statistically related. In this condition, if r < 0, the
absolute value |r| can be used to find out the distance r from 0.
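A minimal sketch of the Pearson correlation used here (applying it to the byte values of input and output is an assumption about the testing procedure):

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length sequences.
    Undefined (raises ZeroDivisionError) when either sequence is constant,
    which is why a constant plaintext cannot be used as a test input."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)
```

A value near 0 indicates that the transposition output is statistically unrelated to its input.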
Correlation testing uses three plaintext inputs that are expected to represent the text variations users might enter. The input "fti uksw" represents conventional text that users typically type. The second, more extreme test uses the near-constant input "xyyyyyyy" (not "yyyyyyyy", because the correlation formula is undefined for a constant input). The third test, "$aL4t1G4", represents a mix of symbols, numbers, and letters.
The results obtained in Table 2 show that the output of each scheme of the square transposition has
an average correlation value close to 0. Thus, it indicates that the relationship between input and output is not
related statistically. Consequently, the square transposition succeeds in disguising the information, so that the
distribution of redundancies occurs well and will certainly increase the diffusion effect on the cryptography
algorithm.
Table 2. Testing result of input-output correlation (correlation value |r|)
No | Transposition Method           | fti uksw | xyyyyyyy | $aL4t1G4 | Average
1  | Input-1 & Horizontal Retrieval | 0.249    | 0.331    | 0.217    | 0.266
2  | Input-1 & Vertical Retrieval   | 0.162    | 0.127    | 0.142    | 0.162
3  | Input-2 & Zigzag Retrieval     | 0.254    | 0.267    | 0.324    | 0.254
4  | Input-2 & Rice Plow Retrieval  | 0.313    | 0.375    | 0.252    | 0.313
5  | Input-1 & Zigzag Retrieval     | 0.112    | 0.009    | 0.018    | 0.112
6  | Input-1 & Rice Plow Retrieval  | 0.016    | 0.090    | 0.040    | 0.016
7  | Input-2 & Horizontal Retrieval | 0.138    | 0.268    | 0.265    | 0.138
8  | Input-2 & Vertical Retrieval   | 0.076    | 0.098    | 0.184    | 0.076
9  | DES                            | 0.342    | 0.126    | 0.374    | 0.342
10 | AES                            | 0.376    | 0.429    | 0.277    | 0.376
The transpositions of the DES and AES algorithms yield higher average correlation values than the scheme combinations of square transposition, so each pair of schemes can be said to generate a better transposition algorithm. Of course, using square transposition in cryptography will increase the strength of the overall cryptographic algorithm. Optimizing the transposition process with square transposition is a step cryptographers should take to improve or modify the weak parts of an algorithm.
4. CONCLUSION
The pair of input and output schemes in square transposition should be chosen from schemes with different line directions to obtain a good transposition process. The scheme combinations carried out here produced less patterned geometric visualizations that oscillate irregularly, so the transposition method could generate random index values. This result is also seen in the randomness testing, in which every obtained p-value is greater than α = 1%, so square transposition produces a better transposition method than AES and DES, whose indices are not random. Square transposition also produces average correlation values closer to 0 for the tested text inputs than the AES and DES transpositions. Thus, square transposition manages to disguise the input information so that it is not visible in the output. In addition, square transposition spreads redundancies well, which will increase the diffusion effect of a cryptographic algorithm. The results show that square transposition optimizes a transposition process that previously had non-random index values. This design optimizes the algorithm by concentrating on the diffusion effect without burdening the complexity of time and space. Algorithm modification is a process that every cryptographer needs to perform to produce a more efficient algorithm for securing information.
ACKNOWLEDGEMENT
The researcher would like to thank the Bureau of Research, Publication and Community Service (BP3M) of Universitas Kristen Satya Wacana Salatiga for providing funding assistance through the Fundamental Internal Research Scheme in 2018/2019.
REFERENCES
[1] W. M. Daley, "Federal Information Processing Standards Publication," Data Encryption Standard (DES), U.S. Department of Commerce: National Institute of Standards and Technology (NIST), 1979, pp. 1-22.
[2] NIST, "Federal Information Processing Standards Publication," Advanced Encryption Standard (AES), U.S. Department of Commerce: National Institute of Standards and Technology (NIST), November 2001, pp. 1-47.
[3] A. Biryukov and C. De Cannière, "Data Encryption Standard (DES)," IBM Journal of Research and Development, Springer, 2011, pp. 243-250.
[4] J. Daemen and V. Rijmen, The Design of Rijndael: AES - The Advanced Encryption Standard, Springer-Verlag, 2001.
[5] M. Yang, B. Xiao and Q. Meng, "New AES Dual Ciphers Based on Rotation of Columns," Wuhan University Journal of Natural Sciences, Springer, vol. 24, pp. 93-97, March 2019, doi: 10.1007/s11859-019-1373-y.
[6] A. Arab, M. J. Rostami and B. Ghavami, "An image encryption method based on chaos system and AES algorithm," The Journal of Supercomputing, Springer, vol. 75, pp. 6663-6682, May 2019, doi: 10.1007/s11227-019-02878-7.
[7] A. A. Thinn and M. M. S. Thwin, "Modification of AES Algorithm by Using Second Key and Modified SubBytes Operation for Text Encryption," Computational Science and Technology, part of Lecture Notes in Electrical Engineering, Springer, vol. 481, pp. 435-444, August 2018, doi: 10.1007/978-981-13-2622-6-42.
[8] C. R. Dongarsane, D. Maheshkumar and S. V. Sankpal, "Performance Analysis of AES Implementation on a Wireless Sensor Network," Tech. Soc., Springer, pp. 87-93, November 2019, doi: 10.1007/978-3-030-164843-3-9.
[9] C. Ashokkumar, R. M. Bholanath, S. V. Bhargav and B. L. Menezes, "S-Box Implementation of AES Is Not Side Channel Resistant," Journal of Hardware and Systems Security, Springer, vol. 4, no. 2, pp. 86-97, December 2019, doi: 10.1007/s41635-019-00082-w.
[10] T. Manojkumar, P. Karthigaikumar and V. Ramachandran, "An Optimized S-Box Circuit for High Speed AES Design with Enhanced PPRM Architecture to Secure Mammographic Images," Journal of Medical Systems, Springer, vol. 43, no. 31, p. 31, January 2019, doi: 10.1007/s10916-018-1145-9.
[11] S. D. Putra, M. Yudhiprawira, S. Sutikno, Y. Kurniawan and A. D. Ahmad, "Power Analysis Attack Against Encryption Devices: A Comprehensive Analysis of AES, DES, and BC3," TELKOMNIKA Telecommunication, Computing, Electronics and Control, vol. 17, no. 3, pp. 2182-1289, June 2019, doi: 10.12928/TELKOMNIKA.v17i3.9384.
[12] C. S. Sari, G. Ardiansyah, D. R. I. M. Setiadi and E. H. Rachmawanto, "An improved security and message capacity using AES and Huffman Coding on Image Steganography," TELKOMNIKA Telecommunication, Computing, Electronics and Control, vol. 17, no. 5, pp. 2400-2409, October 2019, doi: 10.12928/TELKOMNIKA.v17i5.9570.
[13] G. C. Prasetyadi, R. Refianti and A. B. Mutiara, "File Encryption and Hiding Application Based on AES and Append Insertion Steganography," TELKOMNIKA Telecommunication, Computing, Electronics and Control, vol. 16, no. 1, pp. 361-367, February 2018, doi: 10.12928/TELKOMNIKA.v16i1.6409.
[14] B. F. Cruz, K. N. Dominggo, E. Froilan, J. B. Cotiangco and C. B. Hilario, "Expanded 128-bit Data Encryption Standard," International Journal of Computer Science and Mobile Computing, vol. 6, no. 8, pp. 133-142, August 2017.
[15] C. A. Sari, E. H. Rachmawanto and C. A. Haryanto, "Cryptography Triple Data Encryption Standard (3DES) for Digital Image Security," Scientific Journal of Informatics, vol. 5, no. 2, pp. 105-117, November 2018.
[16] S. Pavithra, P. Muthukannan and V. Prabhakaran, "An Enhanced Cryptographic Algorithm Using Bi-Modal Biometrics," International Journal of Innovative Technology and Exploring Engineering (IJITEE), vol. 8, no. 11, pp. 2575-2582, September 2019, doi: 10.35940/ijitee.K1870.0981119.
[17] E. R. Arboleda, J. L. Balaba and J. L. Espineli, "Chaotic Rivest-Shamir-Adlerman Algorithm with Data Encryption Standard Scheduling," Bulletin of Electrical Engineering and Informatics, vol. 6, no. 3, pp. 219-227, September 2017, doi: 10.11591/eei.v6i3.627.
[18] P. B. Mane and A. O. Mulani, "High Speed Area Efficient FPGA Implementation of AES Algorithm," International Journal of Reconfigurable and Embedded Systems (IJRES), vol. 7, no. 3, pp. 157-165, November 2018, doi: 10.11591/ijres.v7.i3.pp157-165.
[19] S. D. Putra, A. S. Ahmad, S. Sutikno, Y. Kurniawan and A. D. W. Sumari, "Revealing AES Encryption Device Key on 328P Microcontrollers with Differential Power Analysis," International Journal of Electrical and Computer Engineering (IJECE), vol. 8, no. 6, pp. 5144-5152, December 2018, doi: 10.11591/ijece.v8i6.pp5144-5152.
[20] R. Srividya and B. Ramesh, "Implementation of AES using Biometric," International Journal of Electrical and Computer Engineering (IJECE), vol. 9, no. 5, pp. 4266-4276, October 2019, doi: 10.11591/ijece.v9i5.pp4266-4276.
[21] J. M. B. Espalmado and E. R. Arboleda, "DARE Algorithm: A New Security Protocol by Integration of Different Cryptographic Techniques," International Journal of Electrical and Computer Engineering (IJECE), vol. 7, no. 2, pp. 1032-1041, April 2017, doi: 10.11591/ijece.v7i2.pp1032-1041.
[22] R. Rizky, Rojali and A. Kurniawan, "Improvement of Advanced Encryption Standard Algorithm with Shift Row and S.box Modification Mapping in Mix Column," The 2nd International Conference on Computer Science and Computational Intelligence, vol. 116, pp. 401-407, February 2017, doi: 10.1016/j.procs.2017.10.079.
[23] H. V. Gamido, A. M. Sison and R. P. Medina, "Implementation of Modified AES as Image Encryption Scheme," Indonesian Journal of Electrical Engineering and Informatics (IJEEI), vol. 6, no. 3, pp. 301-308, September 2018, doi: 10.52549/ijeei.v6i3.490.
[24] M. Aledhari, A. Marhoon, A. Hamad and F. Saeed, "A New Cryptography Algorithm to Protect Cloud-Based Healthcare Services," IEEE/ACM International Conference on Connected Health: Applications, Systems and Engineering Technologies (CHASE), 2017, pp. 37-43, doi: 10.1109/CHASE.2017.57.
[25] P. Jindal, A. Kaushik and K. Kumar, "Design and Implementation of Advanced Encryption Standard Algorithm on 7th Series Field Programmable Gate Array," 2020 7th International Conference on Smart Structures and Systems (ICSSS), 2020, pp. 1-3, doi: 10.1109/ICSSS49621.2020.9202114.
[26] E. M. D. Reyes, A. M. Sison and R. Medina, "Modified AES Cipher Round and Key Schedule," Indonesian Journal of Electrical Engineering and Informatics (IJEEI), vol. 7, no. 1, pp. 29-36, 2018, doi: 10.52549/ijeei.v7i1.652.
[27] P. Sharma, "A New Image Encryption using Modified AES Algorithm and its Comparison with AES," International Journal of Engineering Research & Technology (IJERT), vol. 9, no. 8, pp. 194-197, August 2020.