Demand for data storage is growing exponentially, but the capacity of existing storage media is not keeping up, creating a need for a storage medium with high capacity, high storage density, and the ability to withstand extreme environmental conditions. According to 2018 research, every minute Google handled 3.88 million searches, while users posted 49,000 photos on Instagram, sent 159,362,760 e-mails, tweeted 473,000 times, and watched 4.33 million videos on YouTube. In 2020 it was estimated that 1.7 megabytes of data were created per second per person globally, which translates to about 418 zettabytes in a single year. The magnetic and optical data-storage systems that currently hold this volume of 0s and 1s typically cannot last for more than a century, and running data centres takes vast amounts of energy. In short, we are close to having a substantial data-storage problem that will only become more severe over time. Deoxyribonucleic acid (DNA) can potentially be used for this purpose because, conceptually, it is not so different from the binary method used in a computer. DNA's information density is remarkable: 215 petabytes (215 million gigabytes) of data can be stored in just one gram of DNA. We can first encode all data at the molecular level and then store it in a medium that will last a long time and not become obsolete the way floppy disks did. Thanks to improved techniques for reading and writing DNA, the amount of data that can be stored in DNA is increasing rapidly.
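The molecular encoding described above can be sketched in a few lines. This is an illustrative toy scheme, not the paper's method: each 2-bit pair maps to one nucleotide, giving four bases per byte; real systems add error correction and avoid long runs of the same base.

```python
# Toy binary-to-DNA codec: 2 bits per nucleotide (00->A, 01->C, 10->G, 11->T).
# Illustrative only; published schemes add redundancy and error correction.
BITS_TO_BASE = {"00": "A", "01": "C", "10": "G", "11": "T"}
BASE_TO_BITS = {base: bits for bits, base in BITS_TO_BASE.items()}

def encode(data: bytes) -> str:
    """Turn bytes into a DNA string, four nucleotides per byte."""
    bits = "".join(f"{byte:08b}" for byte in data)
    return "".join(BITS_TO_BASE[bits[i:i + 2]] for i in range(0, len(bits), 2))

def decode(dna: str) -> bytes:
    """Invert encode(): turn a DNA string back into the original bytes."""
    bits = "".join(BASE_TO_BITS[base] for base in dna)
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

print(encode(b"Hi"))       # CAGACGGC
print(decode("CAGACGGC"))  # b'Hi'
```

Two bits per base is the theoretical ceiling of this mapping; practical schemes store somewhat less per base because of the redundancy needed to survive synthesis and sequencing errors.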
The document discusses plans to establish a high-bandwidth optical network connection between the California Institute for Telecommunications and Information Technology (Calit2) in the United States and the Center for Scientific Research and Higher Education of Ensenada (CICESE) in Mexico. It describes several visits and collaborations between the institutions over recent years to develop the connection. The goal is to integrate CICESE into Calit2's global OptIPuter network to enable bandwidth-intensive international research collaborations over dedicated optical lambdas.
06.12.13
Panelist
Panel on Issues, Challenges, and Future Directions of Multimedia Research
IEEE International Symposium on Multimedia (ISM 2006)
Title: Towards GigaPixel Displays
La Jolla, CA
The Energy Efficient Cyberinfrastructure in Slowing Climate Change
10.04.28
Invited Speaker
Community Alliance for Distributed Energy Resources
Scripps Forum, UCSD
Title: The Energy Efficient Cyberinfrastructure in Slowing Climate Change
La Jolla, CA
Mateo Valero - Big Data: From Scientific Research to Business Management
On 3 July 2014, we organized at the Fundación Ramón Areces a one-day event under the theme 'Big Data: From Scientific Research to Business Management'. There we examined the challenges and opportunities of Big Data in the social sciences, in economics, and in business management. Speakers included experts from the London School of Economics, BBVA, Deloitte, the Universities of Valencia and Oviedo, and the Centro Nacional de Supercomputación...
The document discusses the limits of information and communication technologies (ICT), such as computing power, data storage, and network bandwidth. It proposes that future networks will need to scale in both size and functionality through approaches like the federation of multiple networks. Cloud computing is presented as a potential approach to tackling these limits by providing on-demand access to shared computing resources over a network in a scalable and elastic manner. However, cloud computing is still surrounded by considerable marketing hype, and open questions remain regarding its impact and how it can be integrated with existing technologies.
Project StarGate An End-to-End 10Gbps HPC to User Cyberinfrastructure ANL * C... - Larry Smarr
09.11.03
Report to the
Dept. of Energy Advanced Scientific Computing Advisory Committee
Title: Project StarGate An End-to-End 10Gbps HPC to User Cyberinfrastructure ANL * Calit2 * LBNL * NICS * ORNL * SDSC
Oak Ridge, TN
High Performance Cyberinfrastructure is Needed to Enable Data-Intensive Scien... - Larry Smarr
11.03.28
Remote Luncheon Presentation from Calit2@UCSD
National Science Board
Expert Panel Discussion on Data Policies
National Science Foundation
Title: High Performance Cyberinfrastructure is Needed to Enable Data-Intensive Science and Engineering
Arlington, Virginia
Bringing Mexico Into the Global LambdaGrid - Larry Smarr
This document discusses DNA digital data storage. It explains that DNA can store vastly more data than current technologies in a smaller space and last much longer. However, writing and reading DNA data is currently much slower and more expensive than modern storage methods. The document outlines how binary data can be converted to DNA nucleotide sequences and provides examples. It also reviews recent developments that aim to improve DNA data storage methods and decrease costs.
The document discusses the evolution of the Internet of Things (IoT), which represents the next stage in the development of the Internet. As devices become embedded with sensors and connectivity, it is estimated that there will be over 50 billion connected devices by 2020, far surpassing the world's human population. This growth will be driven by the integration of sensors into everyday objects and the standardization of IP protocols. The IoT will generate unprecedented amounts of data traffic and transform how people and machines interact online.
eScience As A Lens On The World - Lazowska
The document summarizes a presentation about eScience and its implications. It discusses how eScience is driven by massive amounts of sensor data and requires analysis of large datasets. It also describes how technologies like cloud computing, databases, data mining and machine learning enable eScience. Finally, it argues that eScience capabilities will be essential for any organization to remain competitive in the future.
The document summarizes a presentation about eScience and its implications. It discusses how eScience is driven by massive amounts of sensor data and requires analysis of large datasets using technologies like databases, data mining, machine learning and data visualization on cluster computing systems at enormous scales. It states that eScience capabilities will be required for organizations to remain competitive in the future. It also discusses how technologies like Amazon EC2 enable scalable computing resources for any organization and how broadband access and networks like Internet2 played an important role in enabling eScience.
1. The document discusses the gap between increasing broadband access and the need for true broadband connections of 1-10 gigabits per second to support new applications.
2. Calit2 is working on various projects to explore using persistent high-speed optical connections for applications in science, medicine, entertainment and emergency response.
3. Examples are given of using very high resolution displays and streaming for digital cinema, global scientific collaborations, and interactive exploration of massive genomic and brain imaging datasets.
A Pocket Dictionary of Tomorrow's Electronics_Franz_IPC-TLP2021.pdf - Roger L. Franz
Here is a concise interactive dictionary of terms that are about to become the new buzzwords in electronics and related fields. Each page includes a summary of the term, a graphic illustration, and a literature reference.
Building a Global Collaboration System for Data-Intensive Discovery - Larry Smarr
11.01.06
Distinguished Lecture
Hawaii International Conference on System Sciences (HICSS-44)
Title: Building a Global Collaboration System for Data-Intensive Discovery
Kauai, HI
Roberto Saracco discusses technology evolution and its impact on telecommunications. Key trends include infrastructure becoming more user-driven rather than terminal-driven, and telecommunications becoming more creative and innovative. Storage capacity is projected to increase dramatically, reaching tens of terabytes by 2020. Processing power is also projected to greatly increase, reaching petaflop scales and 100 teraflops for $100 by 2020. Display resolution will reach 32 megapixels by 2020 with ultra-high definition becoming mainstream. Home internet bandwidth is projected to reach gigabit speeds by 2020 with very high-speed digital subscriber line technology. Wireless infrastructures will also continue to densify and new paradigms like load sharing, data sharing, broadcasting, peer
DNA could help address growing data storage needs as digital data production increases exponentially. Current storage devices have problems with capacity, durability, performance in extreme environments, and security from viruses and hackers. DNA is well-suited for long-term, high-density data storage as it can encode and preserve digital information in its nucleotide sequence and last for thousands of years without degradation. However, DNA data storage faces challenges with high costs and slow retrieval speeds currently, though technological advances may help address these issues over time. DNA could store vast amounts of data in a very small physical space and help preserve digital information for future generations.
Computational science and engineering (CSE) utilizes high performance computing, large-scale simulations, and scientific applications to enable data-driven discovery. The speaker discusses CSE initiatives at UC Berkeley and Lawrence Berkeley National Lab focused on areas like health, freshwater, food security, ecosystems, and urban metabolism using exascale computing and big data analytics.
This document discusses using DNA as a digital storage medium. It provides an introduction to DNA digital storage and outlines the structure and coding methods used to encode digital data in DNA. The document explains how source data is converted to a ternary code and then mapped to DNA nucleobases to encode the information. It describes some potential applications of DNA storage, such as archiving, and discusses how companies are developing DNA storage technologies that could store massive amounts of data in very small physical volumes.
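The ternary-to-nucleobase mapping mentioned above can be sketched as a rotating code. This is an illustrative simplification, not the exact published scheme: each base-3 digit selects one of the three nucleotides that differ from the previously written one, so the output never repeats a base, avoiding the homopolymer runs that make sequencing error-prone.

```python
# Rotating ternary code (illustrative): each trit picks one of the three
# bases that differ from the previous base, so no base ever repeats.
ORDER = "ACGT"

def trits_to_dna(trits, prev="A"):
    """Map base-3 digits (0-2) to nucleotides with no adjacent repeats."""
    dna = []
    for t in trits:
        # the three candidates are the bases other than `prev`, in fixed order
        candidates = [b for b in ORDER if b != prev]
        prev = candidates[t]
        dna.append(prev)
    return "".join(dna)

def dna_to_trits(dna, prev="A"):
    """Invert trits_to_dna(): recover the base-3 digits."""
    trits = []
    for base in dna:
        candidates = [b for b in ORDER if b != prev]
        trits.append(candidates.index(base))
        prev = base
    return trits

print(trits_to_dna([0, 2, 1]))  # CTC
```

Because each position has only three valid symbols, this code carries log2(3) ≈ 1.58 bits per base instead of 2, trading density for physical readability.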
Holographic Optical Data Storage - Charu Tyagi
Holographic Optical Data Storage (HODS) is a revolutionary data storage technology that uses holograms rather than bits to store large volumes of data. It works by using lasers and optical materials to record images as interference patterns in a photosensitive medium. This allows for massive storage capacities - a 1cm3 cube could store the equivalent of thousands of DVDs or hard drives. While researched since the 1960s, HODS is now gaining momentum as a solution to handle growing storage needs. It promises faster access and greater densities than existing magnetic and optical storage, positioning it to potentially replace those methods altogether in the future.
Volumetric Density Paper -- Journal of Applied Physics -- Fontana_Decad_Hetzl... - Gary Decad
The document summarizes volumetric density trends for various storage components from 2013 to projected 2024 densities. It finds that while areal density (bits per square inch) has historically doubled every two years, physical limitations are restricting this trend. As a result, components are adopting volumetric strategies to increase capacity. Hard disk drives are adding more platters. Tape cartridges aim to increase tape length. NAND flash is moving to 3D structures with multiple memory layers. Blu-ray disks plan to utilize both disk surfaces and higher track and bit densities through technical advances. Overall, volumetric enhancements are needed to sustain capacity growth as areal density improvements alone may no longer be sufficient due to physical limits of shrinking components.
The document discusses trends and technologies for addressing climate change in the construction and transportation sectors over the next 25 years. Some key points discussed include increasing carbon sequestration efforts, a shift to more electric transportation options like electric vehicles, the potential for solar power satellites, and integrating fields like urban ecology and cybernetics to develop more sustainable built environments through collective intelligence approaches. The document also briefly touches on trends in energy issues, potential solar and space-based renewable options, and retrofitting existing structures with solar technologies.
08.02.02
Keynote Presentation
15th Mardi Gras Conference
Center for Computation and Technology
Louisiana State University
Title: 2008—The Year of Global Telepresence
Baton Rouge, LA
Similar to An Alternative to Hard Drives in the Coming Future: DNA-BASED DATA STORAGE
Understanding the Impact and Challenges of Corona Crisis on Education Sector - vivatechijri
In the second week of March 2020, state governments across the country abruptly declared the temporary closure of all colleges and schools as an immediate measure to stop the spread of the novel coronavirus pandemic. Nearly a month later, there was still no certainty about when they would reopen. Disruption on this scale has set alarm bells ringing in the field of education, where a significant impact can be seen on the teaching and learning process and, in turn, on the entire education sector. It has also given today's educators time to genuinely rethink the sector. In this research article, the author highlights the possible impact of the coronavirus on the education sector, the future challenges it faces, and possible suggestions.
LEADERSHIP ONLY CAN LEAD THE ORGANIZATION TOWARDS IMPROVEMENT AND DEVELOPMENT - vivatechijri
This document discusses the importance of leadership in leading an organization towards improvement and development. It states that leadership is responsible for providing a clear vision and strategy to successfully achieve that vision. Effective leadership can impact the success of an organization by controlling its direction and motivating employees. Leadership is different from traditional management in that it guides employees towards organizational goals through open communication and motivation, rather than simply directing work. The paper concludes that only leadership can lead an organization to change according to its evolving environment, while management may simply follow old rules. Leadership is key to adapting to new market needs and trends.
The assignment problem is a classical problem in mathematics that is widely encountered in the real, physical world. In this paper we propose a new method for solving assignment problems, together with its algorithm and solution steps. Using a numerical example, we compute solutions with the new method and with two existing methods, and compare the optimal solutions obtained. The proposed method is a standardized technique, simple to apply, for solving assignment problems.
Structural and Morphological Studies of Nano Composite Polymer Gel Electroly... - vivatechijri
The document summarizes research on a nano composite polymer gel electrolyte containing SiO2 nanoparticles. Key points:
1. Polyvinylidene fluoride-co-hexafluoropropylene polymer was used as the base polymer mixed with propylene carbonate, magnesium perchlorate, and SiO2 nanoparticles to synthesize the nano composite polymer gel electrolyte.
2. The electrolyte was characterized using XRD, SEM, and FTIR which confirmed the homogeneous dispersion of SiO2 nanoparticles and increased amorphous nature of the electrolyte, enhancing its ion conductivity.
3. XRD showed decreased crystallinity and disappearance of polymer peaks upon addition of SiO2. SEM revealed
Theoretical study of two dimensional Nano sheet for gas sensing application - vivatechijri
This study focuses on various two-dimensional materials for sensing various gases, from a theoretical viewpoint, with an eye toward new research in gas-sensing applications. In this paper we review various two-dimensional sheets, such as graphene, boron nitride nanosheets, and MXenes, and their applications in sensing various gases present in the atmosphere.
METHODS FOR DETECTION OF COMMON ADULTERANTS IN FOOD - vivatechijri
Food is essential for living. Food adulteration deceives consumers and can endanger their health. The purpose of this document is to list methods for detecting the food adulterants commonly found in India. An adulterant is a substance found in other substances, such as food, cosmetics, pharmaceuticals, fuels, or other chemicals, that compromises the safety or effectiveness of that substance. The addition of adulterants is called adulteration. The most common reason for adulteration is the use by manufacturers of undeclared materials that are cheaper than the correct, declared ones. Adulterants can be harmful, can reduce the effectiveness of the product, or can be harmless.
A novel idea is the key for any entrepreneur to get into the hustle, but developing an idea from its core requires a systematic plan, time management, time investment, and, most importantly, attention to the client. The time required for development varies with the idea and with the strength of the team. Leadership, to build a team and manage it through the peak of development, is the main quality required. Innovation and techniques to clear the hurdles are another aspect of business development and client retention.
Technology for supporting well-being has long been a focus of numerous disciplines, including computer science, psychology, and human-computer interaction. However, the meaning of well-being is not always clear, and this has implications for how we design and evaluate technologies that aim to foster it. Here, we discuss current definitions of well-being and how it relates to, and sometimes results from, self-transcendence. We then focus on how technologies can support well-being through experiences of self-transcendence, closing with possible future directions.
The usage of chatbots has increased tremendously over the past few years. A conversational interface is an interface the user can interact with by means of a conversation; the conversation can occur by speech but also by text input. When a conversational interface uses text, it is also described as a chatbot or a conversational medium. In this study, the user-experience factors of these chatbots were investigated. The prime objective is to identify the state of the art in chatbot usability and applied human-computer interaction methodologies, and to research how to assess chatbot usability. Two chatbots were formulated, one with and one without personalisation factors. The design of this research is a two-by-two factorial design: the independent variables are the two chatbots (unpersonalised versus personalised) and the specific task or goal the user performs with the chatbot in the financial field (a simple versus a complex task). The results show no noteworthy interaction effect between personalisation and task on the user experience of chatbots. A significant difference was found between the two tasks with regard to the user experience of chatbots; however, this variation was not due to personalisation.
Wearable computing aims to integrate computing devices into today's world, and smart glasses technology (SGT) is a prominent example. Smart glasses are wearable computer glasses that add information alongside, or on top of, what the wearer sees; some are also able to change their optical properties at runtime. SGT is one of the modern computing devices that bring humans and machines together with the help of information and communication technology. Smart glasses mainly consist of an optical head-mounted display, or embedded wireless glasses with a transparent heads-up display or augmented reality (AR) overlay. In recent years they have been used in medical and gaming applications, and also in the education sector. This report focuses on smart glasses, a category of wearable computing that is very popular in the media at present and is expected to be a big market in the coming years. It evaluates the differences between smart glasses and other smart devices, introduces many possible applications from different companies for different audiences, and gives an overview of the smart glasses available now and those expected over the next few years.
Future Applications of Smart IoT Devices - vivatechijri
With the Internet of Things (IoT) gradually emerging as the next era in the evolution of the Internet, it becomes important to recognize the diverse potential domains for the application of IoT, and the research challenges associated with these applications, ranging from smart cities to healthcare services, smart agriculture, logistics, and retail. IoT is expected to permeate practically all aspects of our daily life. Even though current IoT-enabling technologies have greatly improved in recent years, there are still many issues that require attention. Since the IoT concept arises from heterogeneous technologies, many research challenges will emerge; accordingly, IoT is opening up new dimensions of research. This paper presents the ongoing advancement of IoT technologies and examines future applications.
Cross Platform Development Using Flutter - vivatechijri
Today, cross-platform mobile application development is in a state of compromise: developers must either build the same app many times for many operating systems, or accept a lowest-common-denominator solution that trades native speed and accuracy for portability. Flutter is an open-source SDK for creating high-performance, high-fidelity mobile apps for iOS and Android. Significant features of Flutter include just-in-time (JIT) compilation and ahead-of-time (AOT) compilation into native (system-dependent) machine code, so that the resulting binary can execute natively. Flutter's hot-reload functionality helps developers quickly and easily experiment, build UIs, add features, and fix bugs; hot reload works by injecting updated source code files into the running Dart Virtual Machine (VM). With Flutter, we believe we have a solution that gives us the best of both worlds: hardware-accelerated graphics and UI, powered by native ARM code, targeting both popular mobile operating systems.
The Internet, today, has become an important part of our lives. The World Wide Web, once a small and inaccessible data-storage service, is now large and valuable, and activities partially or completely integrated with the physical world can be brought to a higher standard; activities of our daily life are mapped and linked to other business in the digital world. The world has seen great strides both in the Internet and in 3D stereoscopic displays, and the time has come to unite the two to bring a new level of experience to users. The 3D Internet is a concept yet to be realized; it requires browsers equipped with in-depth visualization and artificial intelligence. Once these elements are in place, the concept may become the reality discussed in this paper. We discuss the features, possible deployment methods, applications, and the advantages and disadvantages of the 3D Internet. With this paper we aim to provide a clear view of the 3D Internet and its potential benefits, weighed against the investment needed to realize it.
Recommender systems (RS) have emerged as a significant research interest, aiming to help users find items online by providing suggestions that closely match their interests. A recommender system is an information-filtering technology that presents items on websites according to users' interests, and it is implemented in applications such as movies, music, venues, books, research articles, tourism, and social media generally. Recommender-systems research is usually based on comparisons of predictive accuracy: the higher the evaluation scores, the better the recommender. One of the leading approaches has been the use of recommender systems to proactively recommend scholarly papers to individual researchers; in today's world time is precious, and researchers do not have much time to spend searching for the right articles in their research domain. Recommender systems are designed to suggest to users the items that best fit their needs and preferences, typically producing a list of recommendations in one of two ways: through collaborative or content-based filtering. Additionally, both publicly and privately used descriptive metadata are employed, so the scope of a recommendation is limited to the documents that are either publicly available or covered by copyright permissions. Recommendation systems support users and developers of various computer and software systems in overcoming information overload and in performing information discovery and approximate computation tasks, among others.
This study of LiFi (Light Fidelity) demonstrates how the technology can be used as a communication medium similar to WiFi. LiFi is a recent technology, proposed by Harald Haas in 2011. The paper explains the process of transmitting data through the illumination of an LED bulb, and the speed and intensity at which data can be transmitted. The author discusses the technology and explains how WiFi could be replaced by LiFi: WiFi is generally used for wireless coverage within buildings, while LiFi is suited to high-intensity wireless data coverage in limited areas with no obstacles. The paper presents an introduction to LiFi technology, its performance, modulation, and challenges, and can be used as a reference for developing LiFi technology.
Social media platforms and Our right to privacy - vivatechijri
The advancement of information technology has hastened the ability to disseminate information across the globe. In particular, recent trends in social networking have led to a surge of personally sensitive information being published on the World Wide Web. While such socially active websites are creative tools for expressing one's personality, they also entail serious privacy concerns; social networking websites could thus be termed a double-edged sword. It is important for the law to keep abreast of these developments in technology. The purpose of this paper is to demonstrate the limits of extending existing laws to battle privacy intrusions on the Internet, especially in the context of social networking. It is suggested that privacy-specific legislation is the most appropriate means of protecting online privacy. In doing so, it is important to maintain a balance with the competing right of expression, the failure of which may hinder the reaping of the benefits offered by Internet technology.
THE USABILITY METRICS FOR USER EXPERIENCE - vivatechijri
The Google File System (GFS) was innovatively created by Google engineers and was ready for production in record time. The success of Google is attributed to its efficient search algorithm and to the underlying commodity hardware. As Google ran a growing number of applications, its goal became to build a vast storage network out of inexpensive commodity hardware, so Google created its own file system, the Google File System (GFS). GFS is one of the largest file systems in operation: a scalable distributed file system for large, distributed, data-intensive applications. Its design assumes that component failures are common, that files are huge, and that files are mutated mostly by appending data. The entire file system is organized hierarchically in directories, with files identified by pathnames. The architecture comprises multiple chunkservers, multiple clients, and a single master. Files are divided into chunks, and chunk size is a key design parameter. GFS also uses leases and mutation order in its design to achieve atomicity and consistency. For fault tolerance, GFS is highly available, with replicas of the chunkservers and the master.
A Study of Tokenization of Real Estate Using Blockchain Technologyvivatechijri
Real estate is by far one of the most trusted investments that people have preferred, being a lucrative investment it provides a steady source of income in the form of lease and rents. Although there are numerous advantages, one of the key downsides of real estate investments is lack of liquidity. Thus, even though global real estate investments amount to about twice the size of investments in stock markets, the number of investors in the real estate market is significantly lower. Block chain technology has real potential in addressing the issues of liquidity and transparency, opening the market to even retail investors. Owing to the functionality and flexibility of creating Security Tokens, which are backed by real-world assets, real estate can be made liquid with the help of Special Purpose Vehicles. Tokens of ERC 777 standard, which represent fractional ownership of the real estate can be purchased by an investor and these tokens can also be listed on secondary exchanges. The robustness of Smart Contracts can enable the efficient transfer of tokens and seamless distribution of earnings amongst the investors. This work describes Ethereum blockchainbased solutions to make the existing Real Estate investment system much more efficient.
A Study of Data Storage Security Issues in Cloud Computingvivatechijri
Cloudcomputingprovidesondemandservicestoitsclients.Datastorageisamongoneoftheprimaryservices providedbycloudcomputing.Cloudserviceproviderhoststhedataofdataownerontheirserverandusercan accesstheirdatafromtheseservers.Asdata,ownersandserversaredifferentidentities,theparadigmofdata storagebringsupmanysecuritychallenges.Anindependentmechanismisrequiredtomakesurethatdatais correctlyhostedintothecloudstorageserver.Inthispaper,wewilldiscussthedifferenttechniquesthatare usedforsecuredatastorageoncloud. Cloud computing is a functional paradigm that is evolving and making IT utilization easier by the day for consumers. Cloud computing offers standardized applications to users online and in a manner that can be accessed regularly. Such applications can be accessed by as many persons as permitted within an organization without bothering about the maintenance of such application. The Cloud also provides a channel to design and deploy user applications including its storage space and database without bothering about the underlying operating system. The application can run without consideration for on premise infrastructure. Also, the Cloud makes massive storage available both for data and databases. Storage of data on the Cloud is one of the core activities in Cloud computing. Storage utilizes infrastructure spread across several geographical locations.
Unblocking The Main Thread - Solving ANRs and Frozen FramesSinan KOZAK
In the realm of Android development, the main thread is our stage, but too often, it becomes a battleground where performance issues arise, leading to ANRS, frozen frames, and sluggish Uls. As we strive for excellence in user experience, understanding and optimizing the main thread becomes essential to prevent these common perforrmance bottlenecks. We have strategies and best practices for keeping the main thread uncluttered. We'll examine the root causes of performance issues and techniques for monitoring and improving main thread health as wel as app performance. In this talk, participants will walk away with practical knowledge on enhancing app performance by mastering the main thread. We'll share proven approaches to eliminate real-life ANRS and frozen frames to build apps that deliver butter smooth experience.
Social media management system project report.pdfKamal Acharya
The project "Social Media Platform in Object-Oriented Modeling" aims to design
and model a robust and scalable social media platform using object-oriented
modeling principles. In the age of digital communication, social media platforms
have become indispensable for connecting people, sharing content, and fostering
online communities. However, their complex nature requires meticulous planning
and organization.This project addresses the challenge of creating a feature-rich and
user-friendly social media platform by applying key object-oriented modeling
concepts. It entails the identification and definition of essential objects such as
"User," "Post," "Comment," and "Notification," each encapsulating specific
attributes and behaviors. Relationships between these objects, such as friendships,
content interactions, and notifications, are meticulously established.The project
emphasizes encapsulation to maintain data integrity, inheritance for shared behaviors
among objects, and polymorphism for flexible content handling. Use case diagrams
depict user interactions, while sequence diagrams showcase the flow of interactions
during critical scenarios. Class diagrams provide an overarching view of the system's
architecture, including classes, attributes, and methods .By undertaking this project,
we aim to create a modular, maintainable, and user-centric social media platform that
adheres to best practices in object-oriented modeling. Such a platform will offer users
a seamless and secure online social experience while facilitating future enhancements
and adaptability to changing user needs.
Conservation of Taksar through Economic RegenerationPriyankaKarn3
This was our 9th Sem Design Studio Project, introduced as Conservation of Taksar Bazar, Bhojpur, an ancient city famous for Taksar- Making Coins. Taksar Bazaar has a civilization of Newars shifted from Patan, with huge socio-economic and cultural significance having a settlement of about 300 years. But in the present scenario, Taksar Bazar has lost its charm and importance, due to various reasons like, migration, unemployment, shift of economic activities to Bhojpur and many more. The scenario was so pityful that when we went to make inventories, take survey and study the site, the people and the context, we barely found any youth of our age! Many houses were vacant, the earthquake devasted and ruined heritages.
Conservation of those heritages, ancient marvels,a nd history was in dire need, so we proposed the Conservation of Taksar through economic regeneration because the lack of economy was the main reason for the people to leave the settlement and the reason for the overall declination.
Online music portal management system project report.pdfKamal Acharya
The iMMS is a unique application that is synchronizing both user
experience and copyrights while providing services like online music
management, legal downloads, artists’ management. There are several
other applications available in the market that either provides some
specific services or large scale integrated solutions. Our product differs
from the rest in a way that we give more power to the users remaining
within the copyrights circle.
An Internet Protocol address (IP address) is a logical numeric address that is assigned to every single computer, printer, switch, router, tablets, smartphones or any other device that is part of a TCP/IP-based network.
Types of IP address-
Dynamic means "constantly changing “ .dynamic IP addresses aren't more powerful, but they can change.
Static means staying the same. Static. Stand. Stable. Yes, static IP addresses don't change.
Most IP addresses assigned today by Internet Service Providers are dynamic IP addresses. It's more cost effective for the ISP and you.
Understanding Cybersecurity Breaches: Causes, Consequences, and PreventionBert Blevins
Cybersecurity breaches are a growing threat in today’s interconnected digital landscape, affecting individuals, businesses, and governments alike. These breaches compromise sensitive information and erode trust in online services and systems. Understanding the causes, consequences, and prevention strategies of cybersecurity breaches is crucial to protect against these pervasive risks.
Cybersecurity breaches refer to unauthorized access, manipulation, or destruction of digital information or systems. They can occur through various means such as malware, phishing attacks, insider threats, and vulnerabilities in software or hardware. Once a breach happens, cybercriminals can exploit the compromised data for financial gain, espionage, or sabotage. Causes of breaches include software and hardware vulnerabilities, phishing attacks, insider threats, weak passwords, and a lack of security awareness.
The consequences of cybersecurity breaches are severe. Financial loss is a significant impact, as organizations face theft of funds, legal fees, and repair costs. Breaches also damage reputations, leading to a loss of trust among customers, partners, and stakeholders. Regulatory penalties are another consequence, with hefty fines imposed for non-compliance with data protection regulations. Intellectual property theft undermines innovation and competitiveness, while disruptions of critical services like healthcare and utilities impact public safety and well-being.
Best Practices of Clothing Businesses in Talavera, Nueva Ecija, A Foundation ...IJAEMSJORNAL
This study primarily aimed to determine the best practices of clothing businesses to use it as a foundation of strategic business advancements. Moreover, the frequency with which the business's best practices are tracked, which best practices are the most targeted of the apparel firms to be retained, and how does best practices can be used as strategic business advancement. The respondents of the study is the owners of clothing businesses in Talavera, Nueva Ecija. Data were collected and analyzed using a quantitative approach and utilizing a descriptive research design. Unveiling best practices of clothing businesses as a foundation for strategic business advancement through statistical analysis: frequency and percentage, and weighted means analyzing the data in terms of identifying the most to the least important performance indicators of the businesses among all of the variables. Based on the survey conducted on clothing businesses in Talavera, Nueva Ecija, several best practices emerge across different areas of business operations. These practices are categorized into three main sections, section one being the Business Profile and Legal Requirements, followed by the tracking of indicators in terms of Product, Place, Promotion, and Price, and Key Performance Indicators (KPIs) covering finance, marketing, production, technical, and distribution aspects. The research study delved into identifying the core best practices of clothing businesses, serving as a strategic guide for their advancement. Through meticulous analysis, several key findings emerged. Firstly, prioritizing product factors, such as maintaining optimal stock levels and maximizing customer satisfaction, was deemed essential for driving sales and fostering loyalty. Additionally, selecting the right store location was crucial for visibility and accessibility, directly impacting footfall and sales. 
Vigilance towards competitors and demographic shifts was highlighted as essential for maintaining relevance. Understanding the relationship between marketing spend and customer acquisition proved pivotal for optimizing budgets and achieving a higher ROI. Strategic analysis of profit margins across clothing items emerged as crucial for maximizing profitability and revenue. Creating a positive customer experience, investing in employee training, and implementing effective inventory management practices were also identified as critical success factors. In essence, these findings underscored the holistic approach needed for sustainable growth in the clothing business, emphasizing the importance of product management, marketing strategies, customer experience, and operational efficiency.
How to Manage Internal Notes in Odoo 17 POSCeline George
In this slide, we'll explore how to leverage internal notes within Odoo 17 POS to enhance communication and streamline operations. Internal notes provide a platform for staff to exchange crucial information regarding orders, customers, or specific tasks, all while remaining invisible to the customer. This fosters improved collaboration and ensures everyone on the team is on the same page.
Natural Is The Best: Model-Agnostic Code Simplification for Pre-trained Large...YanKing2
Pre-trained Large Language Models (LLM) have achieved remarkable successes in several domains. However, code-oriented LLMs are often heavy in computational complexity, and quadratically with the length of the input code sequence. Toward simplifying the input program of an LLM, the state-of-the-art approach has the strategies to filter the input code tokens based on the attention scores given by the LLM. The decision to simplify the input program should not rely on the attention patterns of an LLM, as these patterns are influenced by both the model architecture and the pre-training dataset. Since the model and dataset are part of the solution domain, not the problem domain where the input program belongs, the outcome may differ when the model is trained on a different dataset. We propose SlimCode, a model-agnostic code simplification solution for LLMs that depends on the nature of input code tokens. As an empirical study on the LLMs including CodeBERT, CodeT5, and GPT-4 for two main tasks: code search and summarization. We reported that 1) the reduction ratio of code has a linear-like relation with the saving ratio on training time, 2) the impact of categorized tokens on code simplification can vary significantly, 3) the impact of categorized tokens on code simplification is task-specific but model-agnostic, and 4) the above findings hold for the paradigm–prompt engineering and interactive in-context learning and this study can save reduce the cost of invoking GPT-4 by 24%per API query. Importantly, SlimCode simplifies the input code with its greedy strategy and can obtain at most 133 times faster than the state-of-the-art technique with a significant improvement. This paper calls for a new direction on code-based, model-agnostic code simplification solutions to further empower LLMs.
An Alternative to Hard Drives in the Coming Future: DNA-BASED DATA STORAGE

VIVA-Tech International Journal for Research and Innovation, Volume 1, Issue 4 (2021)
ISSN (Online): 2581-7280
VIVA Institute of Technology
9th National Conference on Role of Engineers in Nation Building – 2021 (NCRENB-2021)

Rajeshri Deshmukh 1, Chandani Patel 2
1 Department of Computer Applications, VIVA School of MCA, India
2 Department of Computer Applications, VIVA School of MCA, India
Abstract-- Demand for data storage is growing exponentially, but the capacity of existing storage media is not keeping up; there emerges a requirement for a storage medium with high capacity, high storage density, and the ability to withstand extreme environmental conditions. According to 2018 figures, every minute Google conducted 3.88 million searches, people posted 49,000 photos on Instagram, sent 159,362,760 e-mails, tweeted 473,000 times and watched 4.33 million videos on YouTube. By 2020 an estimated 1.7 megabytes of data were being created per second per person globally, which translates to about 418 zettabytes in a single year. The magnetic and optical data-storage systems that currently hold this volume of 0s and 1s typically cannot last for more than a century, and running data centres takes vast amounts of energy. In short, we are close to having a substantial data-storage problem that will only become more severe over time. Deoxyribonucleic acid (DNA) can potentially be used for this purpose because, conceptually, it is not so different from the traditional method used in a computer. DNA's information density is remarkable: 215 petabytes, or 215 million gigabytes, of data can be stored in just one gram of DNA. We can first encode all data at a molecular level and then store it in a medium that will last a long time and not become outdated the way floppy disks did. Owing to improved techniques for reading and writing DNA, the amount of data that can be stored in DNA is increasing rapidly.
Keywords: data storage, DNA, floppy disk, information density, optical data storage systems
I. INTRODUCTION
The journey of data storage began with bones, rocks, and paper. It then moved to punched cards, magnetic tapes, gramophone records, and floppy disks. Later, with advances in technology, optical discs, including CDs, DVDs and Blu-ray discs, and flash drives came into use. All of these are subject to decay; being non-biodegradable materials, they pollute the environment and also release large amounts of heat while consuming energy during operation. Every minute in 2018, Google conducted 3.88 million searches, and people watched 4.33 million videos on YouTube, sent 159,362,760 e-mails, tweeted 473,000 times and posted 49,000 photographs on Instagram, according to software company Domo. By 2020 an estimated 1.7 megabytes of data were being created every second per person worldwide, which means around 418 zettabytes during one year (418 billion one-terabyte hard drives of data), assuming a world population of 7.8 billion [3].
The magnetic and optical data-storage systems that currently hold this volume of 0s and 1s normally cannot keep going for longer than a century, if that. Further, running data centres takes huge amounts of energy. In short, we are on the point of having a weighty data-storage problem that will only become more severe over time. Demand for data storage is growing dramatically, but the capacity of existing storage media is not keeping up. A large portion of the world's data today is stored on magnetic and optical media.
Despite improvements in optical discs, storing a zettabyte of data would still take a large number of units and occupy huge physical space. If we are to safeguard the world's data, we need to look for significant advances in storage density and durability [3]. Using DNA to archive data is an alluring possibility because it is extremely dense (up to around 1 exabyte per cubic millimetre) and durable (a half-life of more than 500 years).
II. LITERATURE REVIEW
DNA, which consists of long chains of the nucleotides A, T, C and G, is the information-storage material of life. In the sequence of these letters, data can be encoded, turning DNA into a new form of information technology. It is already routinely sequenced (read), synthesized (written to) and quickly and accurately replicated. As has been shown by the complete genome sequencing of a fossil horse that lived more than 500,000 years ago, DNA is also remarkably stable. And it does not take much energy to store it. But it is the potential for storage that shines: at a density far exceeding that of electronic devices, DNA can accurately store vast quantities of data. For example, according to calculations published in 2016 in Nature Materials by George Church of Harvard University and his colleagues, the simple bacterium Escherichia coli has a storage density of about 10^19 bits per millilitre. At that density, a cube of DNA measuring around one metre on a side might well fulfil all the world's present storage needs for a year.
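A rough back-of-the-envelope check of this claim, using the 10^19 bits per millilitre density above and the ~418 ZB yearly figure quoted in the introduction (an illustrative sketch, not from the original paper):

```python
# Rough check of the storage-density claim in the text.
# E. coli stores ~1e19 bits per millilitre (Church et al., as cited).
bits_per_ml = 1e19

# A cube one metre on a side holds 1e6 millilitres.
ml_per_cubic_metre = 1_000_000
bits_per_cubic_metre = bits_per_ml * ml_per_cubic_metre  # 1e25 bits

# World data creation cited in the text: ~418 zettabytes per year.
zettabyte_bits = 8 * 10**21             # 1 ZB = 10^21 bytes = 8e21 bits
world_year_bits = 418 * zettabyte_bits  # ~3.3e24 bits

# The cubic metre of DNA comfortably exceeds a year of global data.
print(bits_per_cubic_metre >= world_year_bits)  # True
```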
2.1 REVIEW OF PREVIOUS STUDIES
Computers and other digital electronic devices store data and operate with the binary numeric system, which uses just two digits, 0 and 1. Text is converted to its binary version in a computer system. Thus, computers operate and calculate in binary, and eventually convert the information back into readable text. One byte contains eight bits, each either 0 or 1, giving 2^8 (256) possible values (from 0 to 255), and stores one single letter (Figure 1 and Table 1), as demonstrated by the ASCII conversion (Table 1). The 26 letters, in upper and lower case, are converted among letter, binary and hexadecimal forms. Storing a large file or document needs much more memory: an ordinary song may require many megabytes, a film a couple of gigabytes, and the books stored in a large library several terabytes [2].
As demonstrated in Table 2, the units of measurement and memory for the binary system range from the smallest unit, the byte, to very large units, including the byte (B), kilobyte (KB), megabyte (MB), gigabyte (GB), terabyte (TB), petabyte (PB), exabyte (EB), zettabyte (ZB), yottabyte (YB), brontobyte (BB) and geopbyte (GPB). Units like the brontobyte and geopbyte are unimaginably colossal quantities that may never be used in the real world (Table 2).
TABLE 1
The conversion ascii table of the twenty-six letters with the upper and lower cases among letter, binary and
hexadecimal.
Letter Binary Hexadecimal Letter Binary Hexadecimal
A 1000001 41 a 1100001 61
B 1000010 42 b 1100010 62
C 1000011 43 c 1100011 63
D 1000100 44 d 1100100 64
E 1000101 45 e 1100101 65
F 1000110 46 f 1100110 66
G 1000111 47 g 1100111 67
H 1001000 48 h 1101000 68
I 1001001 49 i 1101001 69
J 1001010 4A j 1101010 6A
K 1001011 4B k 1101011 6B
L 1001100 4C l 1101100 6C
M 1001101 4D m 1101101 6D
N 1001110 4E n 1101110 6E
O 1001111 4F o 1101111 6F
P 1010000 50 p 1110000 70
Q 1010001 51 q 1110001 71
R 1010010 52 r 1110010 72
S 1010011 53 s 1110011 73
T 1010100 54 t 1110100 74
U 1010101 55 u 1110101 75
V 1010110 56 v 1110110 76
W 1010111 57 w 1110111 77
X 1011000 58 x 1111000 78
Y 1011001 59 y 1111001 79
Z 1011010 5A z 1111010 7A
TABLE 2
The sizes of measurement and memory

Size     Magnitude  Unit                           Storage*
1 B      10^0       Byte                           A character "A", "1", "$"
10 B     10^1
100 B    10^2
1 KB     10^3       Kilobyte                       The graphics of small websites range between 5 and 100 KB
10 KB    10^4
100 KB   10^5
1 MB     10^6       Megabyte (1 MB: 1 million)     A high-resolution JPEG image is about 1-5 MB
10 MB    10^7                                      A 3-minute song is about 30 MB
100 MB   10^8
1 GB     10^9       Gigabyte (1 GB: 1 billion)     A standard DVD is about 5 GB
10 GB    10^10
100 GB   10^11
1 TB     10^12      Terabyte (1 TB: 1 trillion)    A typical internal HDD is about 2 TB
10 TB    10^13
100 TB   10^14
2.2 PURPOSE
With the remarkable growth in the amount of data produced and the emerging need for data to be stored for prolonged periods, there arises a requirement for a storage medium with high capacity, high storage density, and the ability to withstand extreme environmental conditions.
DNA emerges as the prospective vehicle for data storage with its striking features. DNA has an extraordinary storage capacity: Castillo states that all the information on the whole Internet could be fitted into a device smaller than one cubic inch. DNA is seen as the ideal medium in this regard chiefly because, instead of the 1s and 0s a computer uses to store data, DNA, consisting of adenine, guanine, cytosine and thymine (A, G, C and T) efficiently paired into the nucleotide base pairs A-T and G-C, can be used for storing information in a form of binary code.
As the urgent need for a high-capacity data-storage medium rises, DNA is considered ideal in that a single nucleotide can represent 2 bits of data. Accordingly, 455 EB of data can be encoded in 1 gram of single-stranded DNA (ssDNA). The entire amount of information produced by the world over a year could be stored in only 4 grams of DNA.
High memory space is offered by DNA because it is three-dimensional (3D) in structure. DNA offers readable and reliable information for millennia, which may be extended almost indefinitely by drying it and protecting it from oxygen and water [3]. DNA can withstand a broader range of temperatures than conventional media. It uses power millions of times more efficiently than a contemporary computer. Additionally, it offers more storage options because it stores data in a nonlinear structure, unlike most media, which store data in a linear structure. DNA promises more options to improve latency and extraction of data, because it allows reading data in both directions. The fact that DNA is invisible to the human eye ensures that it is secure and very difficult for living organisms to harm [3].
III. WORKING OF DNA STORAGE
Today, digital data is a vital part of our lives. Our own data, such as personal information, digital keys, digital wallet data, passwords and bank details, are some of the critical data that must be stored securely.
The digital data is encoded into a DNA sequence, the corresponding sequence information is synthesized into an artificial DNA strand, and the information is decoded by sequencing that artificial strand. This, in outline, is the way digital data is stored in and retrieved from DNA.
3.1 ENCODING DATA INTO THE DNA SEQUENCE
A computer operates on a binary system of 1s and 0s. In the very first step, digital data is encoded into the DNA. The DNA has four nitrogenous bases: Adenine (A), Cytosine (C), Guanine (G) and Thymine (T). For storing information in DNA, binary data is mapped onto these bases: 00 for A, 01 for G, 10 for C and 11 for T. The data in binary form is thereby transformed into a sequence of A, T, G and C, giving a long digital DNA sequence.
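The bit-pair mapping just described (00→A, 01→G, 10→C, 11→T) can be sketched in a few lines (the helper names are illustrative, not from the paper):

```python
# Map each pair of bits to a DNA base, per the scheme in the text:
# 00 -> A, 01 -> G, 10 -> C, 11 -> T.
BITS_TO_BASE = {'00': 'A', '01': 'G', '10': 'C', '11': 'T'}

def encode_bits_to_dna(bits):
    """Encode an even-length bit string into a DNA base sequence."""
    assert len(bits) % 2 == 0, "pad the bit string to an even length"
    return ''.join(BITS_TO_BASE[bits[i:i+2]] for i in range(0, len(bits), 2))

# 'Hi' -> ASCII bytes -> bit string -> bases
bits = ''.join(format(b, '08b') for b in b'Hi')
print(encode_bits_to_dna(bits))  # GACAGCCG
```

Each byte (8 bits) becomes exactly four bases, which is the 2-bits-per-nucleotide density the text refers to.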
3.1.1 CODES FOR ENCRYPTING DATA IN DNA
Historically, essentially three codes have been used to store information in DNA. In general, these codes assume that an alphabetic language is encoded in the DNA [4]. Since most of the studies considered English as the alphabetic language, a phonetic writing scheme may have been used as shorthand.
For a code to be optimal, it should meet the following dual requirements:
1) It should use DNA (nucleotides) economically, primarily because synthesizing extended oligonucleotides is an expensive operation, although replication appears to be reasonably economical.
2) After data encoding, it should be possible to reconstruct the message.
Although it is not considered essential, a coding scheme that offers some error-detection and protection mechanism would be of tremendous advantage. This feature is not vitally important, however, because there are other mechanisms for addressing the issue, such as keeping multiple copies of the DNA. Because written communication inherently contains self-correcting redundancy, error detection and correction are not essential features of the code itself [4].
Huffman Coding - This code varies the number of symbols used to represent each character. The most frequently appearing character in the text is assigned the lowest number of symbols, while the least frequently appearing character is assigned the most symbols. Employing this principle results in a very economical code: the average code length is around 2.2 symbols per character in the Huffman coding scheme [8], the lowest average codon length achieved. The code is unambiguous in that there is only one way the encrypted message can be read once the starting point is given.
Disadvantages of Huffman coding include its lack of provision for numbers and symbols. This is mostly because the frequency of these symbols depends heavily on the particular text, so they cannot be included when forming the Huffman code. Moreover, it is not suitable for long-term storage, because when codons of different lengths are assembled together they may not reveal a pattern; future generations might not be able to work out the meaning of the scheme [8].
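The variable-length principle can be illustrated with a minimal binary Huffman construction (a generic sketch of the algorithm, not the specific DNA code from [8]):

```python
import heapq
from collections import Counter

def huffman_codes(text):
    """Build variable-length prefix codes: frequent characters get shorter codes."""
    heap = [(freq, i, {ch: ''}) for i, (ch, freq) in enumerate(Counter(text).items())]
    heapq.heapify(heap)
    next_id = len(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)   # two least-frequent subtrees
        f2, _, c2 = heapq.heappop(heap)
        merged = {ch: '0' + code for ch, code in c1.items()}
        merged.update({ch: '1' + code for ch, code in c2.items()})
        heapq.heappush(heap, (f1 + f2, next_id, merged))
        next_id += 1
    return heap[0][2]

codes = huffman_codes("abracadabra")
# 'a' occurs most often (5 of 11 characters), so its code is the shortest.
print(codes['a'], min(len(c) for c in codes.values()))
```

Encoding "abracadabra" with these codes takes 23 bits instead of the 33 a fixed 3-bit code would need, which is the economy the scheme exploits.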
The Comma Code - In this approach a single base (G) is treated as the comma. Codons of five-base length are separated from one another using the base G. Five-base codons consist of the other three bases, namely A, C and T, further restricted to a single A:T base pair and two G:C base pairs. The C of the second G-C pair is always situated in the upper strand.
A benefit of composing the message DNA with this scheme is its isothermal melting temperature. The dominant characteristic of the comma code is the reading frame of six-base units delimited by G, the comma, which is not achieved by the other codes. This makes it possible to find a clean reading frame without needing to specify a starting point. Protection against insertion and deletion mutations is likewise assured by this technique, which makes the other codes look much more complex.
The drawback of this code is that it is not economical, because it repeats the comma base G to create an automatic reading frame.
The Alternating Code - This scheme consists of six-base codons, 64 in number, alternating pyrimidines and purines. Its primary feature is the construction of the message DNA in a completely synthetic form. As it creates fully artificial DNA, it is suitable for future storage, which overcomes the disadvantage of Huffman code. Additionally, it offers benefits such as being isothermal and error-detecting, but it is not superior to the comma code.
The alternating code also contains repetitive features, which make it non-economical; this is its main drawback. Therefore the attention of researchers has turned toward developing an economical code without repetitive features.
Comma-Free Code - This is also referred to as a prefix-free code. It comprises fixed-length base frames without
commas to separate them, so it relies on an automatic frame-detection mechanism. A comma-free code contains
no run of four identical base pairs, which is what keeps it distinguishable from natural DNA sequences. These
codons can be read in only one way and also support error-detection mechanisms. Although the comma-free
code is robust, and its error correction can recover from small-scale loss such as DNA point mutations, it cannot
recover data when a large DNA segment is deleted from the encoded region.
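A minimal sketch may make the fixed-frame idea concrete. The mapping below (2 bits per nucleotide: A=00, C=01, G=10, T=11) is a deliberately simplified illustration, not the comma, alternating or comma-free code itself; those schemes add the constraints discussed above on top of such a base mapping.

```python
# Simplified fixed-frame scheme: 2 bits per nucleotide, so one byte
# becomes exactly four bases. Illustrative only; the published codes
# above impose extra constraints (commas, alternation, prefix-freeness).
BASES = "ACGT"

def bytes_to_dna(data: bytes) -> str:
    """Encode each byte as four nucleotides, high bits first."""
    out = []
    for byte in data:
        for shift in (6, 4, 2, 0):
            out.append(BASES[(byte >> shift) & 0b11])
    return "".join(out)

def dna_to_bytes(seq: str) -> bytes:
    """Invert the mapping: four nucleotides back to one byte."""
    out = bytearray()
    for i in range(0, len(seq), 4):
        byte = 0
        for base in seq[i:i + 4]:
            byte = (byte << 2) | BASES.index(base)
        out.append(byte)
    return bytes(out)

message = b"DNA"
strand = bytes_to_dna(message)
assert dna_to_bytes(strand) == message    # round trip is lossless
```

Because every frame has a fixed length, a read can only be decoded correctly if the starting point is known, which is exactly the problem the comma and comma-free designs address.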
3.2 ARTIFICIAL DNA SYNTHESIS
A single-stranded arbitrary DNA sequence can be synthesized chemically: based on the digital
sequence information, each nucleotide is added to the adjoining nucleotide. However, although the efficiency of
artificial DNA synthesis is about 99%, the remaining 1% error rate can cause a significant problem for digital
data storage. To overcome this, large numbers of parallel starting sites are provided to produce multiple copies
of the given sequence, so that despite an error in any single copy, many other accurate copies are available.
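The value of producing many parallel copies can be shown with a small simulation. The 1% substitution-error model and the position-wise majority vote below are illustrative assumptions, not the actual chemistry of synthesis:

```python
# Toy model: each copy suffers independent substitution errors at an
# assumed 1% per-base rate; a position-wise majority vote across the
# copies recovers the intended sequence with high probability.
import random
from collections import Counter

random.seed(0)                       # reproducible simulation
BASES = "ACGT"

def synthesize(reference: str, error_rate: float = 0.01) -> str:
    """Simulate one synthesized copy with random substitution errors."""
    return "".join(
        random.choice(BASES.replace(base, "")) if random.random() < error_rate
        else base
        for base in reference
    )

def majority_vote(copies: list[str]) -> str:
    """Recover each position as the most common base across all copies."""
    return "".join(Counter(col).most_common(1)[0][0] for col in zip(*copies))

reference = "".join(random.choice(BASES) for _ in range(200))
copies = [synthesize(reference) for _ in range(9)]
recovered = majority_vote(copies)

errors_in_one_copy = sum(a != b for a, b in zip(copies[0], reference))
errors_after_vote = sum(a != b for a, b in zip(recovered, reference))
print(errors_in_one_copy, errors_after_vote)
```

With nine copies, a position is mis-recovered only if at least five copies err there at once, so the voted sequence is almost always error-free even though individual copies are not.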
3.3 STORING OF SAMPLE
At this point we have our data backup as a liquid drop containing a few nanograms of DNA. The
DNA can be stored in a deep freezer, where it can last for a very long time, or it can be sent to external storage
facilities (provided by certain organizations) that can preserve the DNA for over a thousand years.
DNA remains stable even in harsh conditions for long periods of time; nevertheless, a few sequences may be
lost over time.
3.4 SEQUENCING OF DNA
To extract the digital data back to its original form, we must sequence the complete DNA. DNA
sequencing is a process in which a DNA sequence is read back into a digital sequence. The labelled nucleotides
VIVA-Tech International Journal for Research and Innovation, Volume 1, Issue 4 (2021); ISSN (Online): 2581-7280.
VIVA Institute of Technology, 9th National Conference on Role of Engineers in Nation Building – 2021 (NCRENB-2021).
are added complementary to our DNA strand, each nucleotide labelled with a different fluorescent dye. The
intensity of the colour emitted by each dye is recorded by the detector.
3.5 DECODING INFORMATION
Finally, the sequence is passed to the decoder, which converts the DNA sequence back into binary.
After decoding, we retrieve our original data.
TABLE 3
Comparison of data storage units with respect to access time and durability.
Durability              Data storage unit    Access time
3 years                 Flash drive          Millisecond
5 years                 HDD (hard disk)      10 seconds
Up to 30 years          Magnetic tape        1 minute
More than 100 years     DNA storage          More than 12 hours
IV. ADVANTAGES OF DNA DATA STORAGE MEDIUM
Global data are expanding at a dramatic rate, and conventional media cannot adequately meet the
need for big-data storage. DNA could serve as a possible medium for digital data storage, with potential
advantages such as high density, high replication efficiency and long-term stability [9].
At its theoretical maximum, DNA can encode around 2 bits per nucleotide. An entire data centre
operated by IBM in 2011 had around one hundred petabytes (PB) of storage capacity. Because of its high
density, DNA acting as a data storage medium can store a large amount of data in a tiny volume [11].
A single gram of DNA at its theoretical maximum can store around two hundred PB of data, nearly
double the capacity of the entire IBM data centre. In other words, all the information ever recorded worldwide
could be stored in a few kilograms of DNA, the size of a single shoebox, compared with the many large data-
storage centres required for conventional media. DNA-encoded media are also suited to long-term storage
owing to their high stability.
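The comparison can be checked with quick arithmetic on the figures quoted in this paper (about 215 PB per gram from the introduction, 100 PB for the 2011 IBM data centre); these are quoted figures, not measurements:

```python
# Back-of-the-envelope check of the density claim, using the figures
# quoted elsewhere in this paper (assumptions, not measurements).
PB = 10**15                     # bytes in a petabyte
dna_bytes_per_gram = 215 * PB   # density figure quoted in the introduction
ibm_2011 = 100 * PB             # IBM data-centre capacity cited above

ratio = dna_bytes_per_gram / ibm_2011
print(f"One gram of DNA holds {ratio:.2f}x the 2011 IBM data centre")
# prints: One gram of DNA holds 2.15x the 2011 IBM data centre
```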
DNA can last for millennia in cool, dry and dark places. Even under more unfavourable conditions,
DNA's half-life is as long as hundreds of years, and DNA reportedly remains stable across a wide temperature
range, from about -80 °C to 80 °C [11]. DNA media can also preserve data longer than conventional digital
media. Although new data are growing at a dramatic rate, the majority are saved in archives for long-term
storage.
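As a toy illustration of what a half-life means for archival storage, the decay model below assumes a hypothetical 500-year half-life; the number is a placeholder for illustration, not a measured property of DNA or any other medium:

```python
# Exponential-decay sketch: fraction of intact strands after t years,
# given an ASSUMED half-life (500 years is a placeholder, not a datum).
half_life_years = 500
years = 100
surviving_fraction = 0.5 ** (years / half_life_years)
print(f"{surviving_fraction:.1%} of strands intact after {years} years")
# prints: 87.1% of strands intact after 100 years
```

Even modest half-lives leave most strands readable after a century, which, combined with the redundant copies produced during synthesis, is why sequence loss over time is tolerable.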
Such cold data are not retrieved promptly or used often; consequently, storing them in DNA media
is simple, useful and economical. Another advantage is that DNA is faithfully preserved: natural DNA
duplicates itself precisely, at high efficiency and consistently, following the base-pairing principle (A with T,
C with G). DNA media can therefore preserve data with high fidelity for a long time.
V.CHALLENGES FOR DNA DATA STORAGE MEDIUM
In light of its special qualities, and compared with conventional media, DNA may be a potential
and promising medium for digital information storage. However, there is still a long way to go before DNA
can be applied economically [13]. The difficulties to be managed exist in several respects, including high cost,
low throughput, restricted access to stored information, the short length of manufactured oligo DNA segments,
and the error rates in synthesis and sequencing [15].
The use of DNA for information storage is considerably more expensive than other standard
media such as tape, disc and HDD (hard disk drive). At present, writing and reading information costs nearly
$15,000 per megabyte (MB). Moreover, current DNA-synthesis technology is limited, with only short oligo
DNA sequences able to be synthesized; the maximum length of each oligo DNA segment is restricted to a few
hundred nucleotides.
Thus, storing a single archival record, especially a large document, may require an enormous
number of oligo DNAs. Moreover, writing information into and recovering it from oligo DNAs is time-
consuming, involving various steps: converting the information to binary, encoding the binary into oligo DNA,
synthesizing and storing the DNA sequences, recovering specific sequences from the DNA storage library,
sequencing and decoding, and finally converting the result back into readable information. Conventional media
such as disc and tape have logical addressing of their data; oligo DNAs do not.
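The addressing gap described above is usually closed by prefixing each oligo with an index. The layout below is a hypothetical sketch (the field sizes, 40-byte payload and 2-bit base mapping are assumptions for illustration, not any published format):

```python
# Hypothetical oligo layout: a 2-byte big-endian index is prepended to
# each payload chunk, so chunks can be reassembled even when the pool
# of oligos is read back in arbitrary order.
BASES = "ACGT"

def to_dna(data: bytes) -> str:
    return "".join(BASES[(b >> s) & 3] for b in data for s in (6, 4, 2, 0))

def from_dna(seq: str) -> bytes:
    out = bytearray()
    for i in range(0, len(seq), 4):
        b = 0
        for base in seq[i:i + 4]:
            b = (b << 2) | BASES.index(base)
        out.append(b)
    return bytes(out)

def make_oligos(data: bytes, payload_bytes: int = 40) -> list[str]:
    """Split data into indexed chunks; (2 + 40) bytes -> a 168 nt oligo."""
    oligos = []
    for index, start in enumerate(range(0, len(data), payload_bytes)):
        chunk = index.to_bytes(2, "big") + data[start:start + payload_bytes]
        oligos.append(to_dna(chunk))
    return oligos

def reassemble(oligos: list[str]) -> bytes:
    """Sort by the index prefix, then strip it and concatenate payloads."""
    chunks = sorted(from_dna(o) for o in oligos)   # index prefix sorts first
    return b"".join(c[2:] for c in chunks)

data = bytes(range(100))
shuffled = make_oligos(data)[::-1]                 # simulate unordered reads
assert reassemble(shuffled) == data
```

A 2-byte index limits a file to 65,536 chunks; real systems add error-correcting codes and file identifiers on top of this kind of layout.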
Consequently, it is hard to address the specific encoded DNA sequence we wish to obtain.
Meanwhile, random access to DNA-based information storage is essential, but oligo DNAs have no inherent
random-access capability. With current methodologies, only bulk access is available for DNA information
storage: the entire DNA-based store must be prepared, sequenced and decoded even when we only need to
read a single byte. Accordingly, the correct primer is required to selectively recover the desired DNA
sequence [14].
Such primers can provide random access during DNA sequencing and data recovery: sequencing
with a unique primer reads only the required oligo DNA rather than the complete DNA library. In addition,
DNA synthesis and sequencing are not yet fully reliable. During synthesis and sequencing, insertion, deletion,
substitution and other errors occur, with an error rate of roughly 1% per nucleotide. Neither the technology
nor the cost of DNA synthesis and sequencing is yet suitable for mainstream information storage.
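Primer-based selection can be caricatured in a few lines: each strand carries an address prefix, and amplification keeps only strands whose 5' end matches the chosen primer. The sequences and file names below are made up for illustration, and real PCR involves far more than prefix matching:

```python
# Caricature of primer-based random access: the "library" maps strand
# sequences (address prefix + payload) to what they encode. Selecting
# with a primer keeps only strands that begin with that prefix.
# All sequences and labels here are invented for illustration.
library = {
    "ACGTACGT" + "TTGACCA": "file_a, chunk 0",
    "ACGTACGT" + "GGCATTA": "file_a, chunk 1",
    "TGCATGCA" + "CCGGTAA": "file_b, chunk 0",
}

def pcr_select(library: dict[str, str], primer: str) -> list[str]:
    """Return only the strands whose 5' end matches the primer."""
    return [seq for seq in library if seq.startswith(primer)]

selected = pcr_select(library, "ACGTACGT")
assert len(selected) == 2          # only file_a's strands "amplify"
```

Only the two selected strands would then be sequenced and decoded, avoiding a read of the whole library.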
VI. CONCLUSION
Although DNA digital data-storage technology is currently expensive and time-consuming, it may
become extremely valuable soon. Indeed, DNA digital data storage could be the only hope for storing data in
the near future, and it will undoubtedly transform digital technology. The rise of DNA data storage, previously
the stuff of science fiction, is being made possible by advances in biotechnology, particularly improvements in
high-throughput DNA sequencing and synthesis.
Likewise, because these bio-developers control which materials enter their experiments, and their
designs do not need to be engineered to work inside a living creature, there are lower overhead costs compared
with typical life-science experiments [12]. The journey has not been without detours, however: despite
dramatic improvement, working with DNA can still be slow and costly, and further streamlining is required.
This review critically analyses the prevailing methods of storing data in DNA. Data are encoded into
DNA using diverse codes, and this paper analyses and discusses the codes used for the encoding. Multiple
approaches for designing DNA codons and diverse data-storage styles are analysed in detail, identifying the
pros and cons of each approach. Encryption techniques that use DNA molecules for secure data storage are
also discussed. DNA can be used as an organic memory to store massive amounts of data. This paper also
analyses the mechanism by which a living organism might be used as a storage device, while identifying
limitations and appropriate applicability [9]. Challenges faced when trying to use organic-memory concepts
are also discussed, as are big-data storage and analytics and the way they have led to DNA computing for
solving hard computational problems.
ACKNOWLEDGEMENTS
I am thankful to my college for giving me the opportunity to make this project a success. I give my
special thanks and sincere gratitude to Prof. Chandani Patel for encouraging me to complete this research
paper, guiding me and helping me through all the obstacles in the research.
Without her assistance, this research paper would have been impossible. I also express my gratitude to all my
past teachers, who have bestowed deep understanding and knowledge on me over the years. I am obliged to
my parents and family members, who have always supported and encouraged me at every step.
REFERENCES
[1] S. Shrivastava and R. Badlani, “Data storage in DNA,” International Journal of Electrical Energy, vol. 2, no. 2, pp. 119–124, 2014.
[2] DNA and the Digital Data Storage, https://www.hsj.gr/medicine/dna-and-the-digital-data-storage.php?aid=24516
[3] How DNA data storage works, https://geneticeducation.co.in/dna-digital-data-storage/#:~:text=The%20digital%20data%20is%20encoded,retrieving%20digital%20data%20from%20DNA.
[4] DNA Data Storage Is Closer, https://www.scientificamerican.com/article/dna-data-storage-is-closer-than-you-think/
[5] G. C. Smith, C. C. Fiddes, J. P. Hawkins, and J. P. L. Cox, “Some possible codes for encrypting data in DNA,” Biotechnology Letters,
vol. 25, no. 14, pp. 1125–1130, 2003.
[6] M. K. Rogers and K. C. Seigfried-Spellar, “Digital forensics and cyber crime,” in Proceedings of the 4th International ICST
Conference on Digital Forensics & Cyber Crime (ICDF2C '12), Lafayette, Ind, USA, October 2012.
[7] N. Yachie, Y. Ohashi, and M. Tomita, “Stabilizing synthetic data in the DNA of living organisms,” Systems and Synthetic Biology,
vol. 2, no. 1-2, pp. 19–25, 2008.
[8] M. Ailenberg and O. D. Rotstein, “An improved Huffman coding method for archiving text, images, and music characters in
DNA,” BioTechniques, vol. 47, no. 3, pp. 747–754, 2009.
[9] Appuswamy R, Lebrigand K, Barbry P, Antonini M, Madderson O, Freemont P (2019) OligoArchive: Using DNA in the DBMS storage
hierarchy. CIDR 2019, Biennial Conference on Innovative Data Systems Research, California, USA.
[10] De Silva PY, Ganegoda GU (2016) New trends of digital data storage in DNA. Biomed Res Int 2016: Article ID 8072463.
[11] O'Driscoll A, Sleator RD (2013) Synthetic DNA: the next generation of big data storage. Bioengineered 4: 123-125.
[12] Bornholt J, Lopez R, Carmean DM, Ceze L, Seelig G, et al. (2016) A DNA-based archival storage system. ASPLOS 2016 (21st ACM
International Conference on Architectural Support for Programming Languages and Operating Systems, Atlanta, GA).
[13] Organick L, Ang SD, Chen YJ, Lopez R, Yekhanin S, et al. (2018) Random access in large-scale DNA data storage. Nat Biotechnol
36: 242-248.
[14] Yazdi SM, Yuan Y, Ma J, Zhao H, Milenkovic O (2015) A rewritable, random-access DNA-based storage system. Sci Rep 5: 14138-
14140.
[15] Ahn T, Ban H, Park H (2018) Storing digital information in the long read DNA. Genomics Inform 16: e30-35.
[16] A. J. Doig, “Improving the efficiency of the genetic code by varying the codon length—the perfect genetic code,” Journal of
Theoretical Biology, vol. 188, no. 3, pp. 355–360, 1997.