The document discusses machine reading comprehension (MRC) techniques for question answering (QA) systems, comparing search-based and natural language processing (NLP)-based approaches. It covers key milestones in the development of extractive QA models using NLP, from early sentence-level models to current state-of-the-art techniques like cross-attention, self-attention, and transfer learning. It notes the speed and scalability benefits of combining search and reading methods for QA.
Lecture materials for a course on developing chatbot services with Python. The Day 1 lecture covers an overview of chatbots, the basics of building a chatbot service in Python, and methods for Korean natural language processing.
Speaker: Hee-soo Heo (Ph.D. candidate, University of Seoul). Date: July 2018. With the recent spread of AI speakers, smart appliances, and other devices equipped with speech recognition, interest in the need for and technology of speaker recognition has been growing. This talk first briefly explains how speaker recognition works. It then walks through several studies showing how deep neural networks are applied to speaker-recognition systems, and closes by introducing recent research and future research directions in speaker recognition.
'Youth Policy Bot', developed by the Seoul city chatbot team, is a deep-learning-based chatbot service rather than a scenario-based one. It uses KorBERT, developed by ETRI, in place of a custom language model, and analyzes the intent of a question sentence through a morphological-analysis API. Applying khaiii, the morphological analyzer released by Kakao, yielded a measurable improvement in parsing accuracy. A Wiki QA API was also added for general question answering. Most commercially deployed chatbot services today follow pre-built scenarios (flowcharts), and natural language processing goes unused because of its low reliability. In contrast, 'Youth Policy Bot' raises reliability by plugging in the cdQA pipeline, feeding the most similar documents into the language model. Compared with services built on existing chatbot builders, it has two advantages. First, room to grow with the underlying deep-learning model: as ETRI KorBERT keeps improving, the bot's machine reading comprehension improves with it. Second, service sustainability: because the cdQA pipeline allows data to be added through periodic web crawling, the resources needed for software maintenance are minimized. Through the youth-policy chatbot, the cdQA pipeline and the ETRI BERT model were used to overcome the input-data limitations of existing tools and to present a solution for machine reading comprehension.
Hello, this is the Deep Learning Paper Reading Group. Today's paper review video covers 'Big Bird: Transformers for Longer Sequences', presented at NeurIPS 2020. Big Bird recaps the limitations of the full-attention structure of Transformer-family models and aims to process long sequences far more efficiently. The Transformer's remarkable performance is well known, but computation becomes a bottleneck as sequence length grows; many papers have tried to cut this inefficient computation, and Big Bird is one of them. Ji-yoon Baek of the NLP team kindly prepared the detailed review.
The document summarizes the "Attention Is All You Need" paper, which introduced the Transformer model for natural language processing. The Transformer uses attention mechanisms rather than recurrent or convolutional layers, allowing for more parallelization. It achieved state-of-the-art results in machine translation tasks using techniques like multi-head attention, positional encoding, and beam search decoding. The paper demonstrated the Transformer's ability to draw global dependencies between input and output with constant computational complexity.
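The scaled dot-product attention at the core of the Transformer can be sketched in plain Python (single head, single query; the tiny vectors in the test below are illustrative, and real implementations batch this with matrix operations):

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention(query, keys, values):
    """Scaled dot-product attention for one query vector.

    score_i = (query . key_i) / sqrt(d_k); the softmaxed scores weight
    a sum over the value vectors, producing the attended output.
    """
    d_k = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d_k)
              for key in keys]
    weights = softmax(scores)
    dim_v = len(values[0])
    out = [sum(w * v[j] for w, v in zip(weights, values)) for j in range(dim_v)]
    return out, weights
```

Multi-head attention in the paper runs several such attentions in parallel over learned projections of the inputs and concatenates the results.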
Computer Security Symposium 2019: Adversarial Perspectives and User Behavior (session 2E1-2)
Speaker: Hyun-joong Kim (Ph.D. candidate, Seoul National University). Date: September 2017. Overview: In natural language processing, the inability to properly handle words absent from the training data is called the out-of-vocabulary (OOV) problem. The remedy depends on the application. In document clustering/classification and machine translation, the problem is sidestepped by representing words with subwords. For keyword/related-word analysis or topic modeling, however, words must be recognized in their full form, so subwords cannot be used, and a tokenizer/part-of-speech tagger that can handle unseen words is needed. Korean morphological analyzers, trained on corpora or dictionaries, cannot properly recognize unseen words, so they provide user-dictionary features as a workaround. But building suitable training data or user-defined dictionaries every time the text domain changes is a painful chore. My recent work is on "unsupervised NLP methods" that minimize this manual effort in Korean NLP. More specifically: (1) extract words from text using statistics; (2) use them to build the tokenizer best suited to the target text domain; (3) for nouns, where neologisms appear most often, infer the part of speech during tokenization; and (4) additionally, correct spacing errors in a data-driven way to improve the performance of (1)-(3). This tech talk shares (1) the unsupervised Korean NLP research described above and (2) a case study applying it to keyword/related-word analysis.
Day 2 lecture materials for the course on developing chatbot services with Python. The Day 2 lecture covers an overview of machine learning, building a crawler, and a search-based chatbot using Elasticsearch.
This document provides an overview of deep learning basics for natural language processing (NLP). It discusses the differences between classical machine learning and deep learning, and describes several deep learning models commonly used in NLP, including neural networks, recurrent neural networks (RNNs), encoder-decoder models, and attention models. It also provides examples of how these models can be applied to tasks like machine translation, where two RNNs are jointly trained on parallel text corpora in different languages to learn a translation model.
Slides used for the tutorial talk "AI use cases from a venture company" at "SIAI2020: The 1st Industrial AI Symposium" (hosted by the Japanese Society for Artificial Intelligence; Nagoya), held December 15-16, 2020. We introduced Kurusugawa Computer's (来栖川電算) joint work with Toyota Mapmaster on high-precision map creation. The talk is not about deep-learning methods for automatically generating high-precision maps, but about using them to streamline the map-making (annotation) workflow. It explains various approaches and pitfalls in an accessible way. Because it is a down-to-earth effort producing a large return on investment, it is also recommended for those stuck at the PoC stage. When thinking about "how to solve a problem" with AI, the main concern usually becomes "how to collect and annotate data"; it is no exaggeration to say that data collection and annotation are the essence of AI R&D. To meet this trend, Kurusugawa Computer researches and develops "technology for efficiently producing consistent, correct data" and delivers it to customers through products and services such as annofab. If you are interested, feel free to contact us.
Slides for the 4th session of "NN論文を肴に酒を飲む会" (a meetup for discussing neural-network papers over drinks).
Today's paper introduces a model built on Google's BERT and RoBERTa from Facebook (now Meta). You can think of it as an improved RoBERTa built around two core techniques: disentangled attention and an enhanced mask decoder. It additionally introduces scale-invariant fine-tuning, and on a large number of NLU tasks it outperforms both RoBERTa and BERT. Myung-hoon Jin of the NLP team prepared the detailed review, from background knowledge to a close reading of the paper.
DeNA's Analytics Department exists to maximize the quality of decision-making across the business. To deliver more advanced analytics at lower cost, it relies on a data-analysis platform centered on BigQuery. This session covers, from a technical angle and with real examples, how the Analytics Department supports decision-making in DeNA's game business, and how we overcame the challenges we ran into while operating the analytics platform.
This document presents the Duet model for document ranking. The Duet model uses a combination of local and distributed representations of text to perform both exact and inexact matching of queries to documents. The local model operates on a term interaction matrix to model exact matches, while the distributed model projects text into an embedding space for inexact matching. Results show the Duet model, which combines these approaches, outperforms models using only local or distributed representations. The Duet model benefits from training on large datasets and can effectively handle queries containing rare terms or needing semantic matching.
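The exact-match signal that Duet's local model consumes can be sketched as a binary term-interaction matrix (the function name and whitespace tokenization here are illustrative; the actual model feeds such a matrix into a convolutional network):

```python
def interaction_matrix(query_terms, doc_terms):
    """Binary term-interaction matrix: entry (i, j) is 1 when query
    term i exactly matches document term j, else 0. Patterns in this
    matrix capture where and how often query terms appear verbatim."""
    return [[1 if q == d else 0 for d in doc_terms] for q in query_terms]

# Example: which document positions exactly match each query term?
m = interaction_matrix(["cheap", "flights"],
                       ["cheap", "tickets", "flights"])
```

The distributed model, by contrast, compares learned embeddings of the query and document, so it can reward documents that never contain the query terms verbatim.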
A translation of "Recent Deep-Learning-Based Natural Language Processing Techniques", recently compiled by Professor Shindo of NAIST in Japan. The material offers only brief introductions to the new techniques; for technical details, please refer to the cited papers.
The document discusses various machine learning clustering algorithms like K-means clustering, DBSCAN, and EM clustering. It also discusses neural network architectures like LSTM, bi-LSTM, and convolutional neural networks. Finally, it presents results from evaluating different chatbot models on various metrics like validation score.
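K-means, one of the clustering algorithms the document covers, can be sketched in a few lines (random initialization and a fixed iteration count are simplifications; real implementations check for convergence):

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Plain k-means: repeatedly assign each point to its nearest
    centroid, then move each centroid to the mean of its members."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # index of the centroid with the smallest squared distance
            i = min(range(k),
                    key=lambda c: sum((a - b) ** 2
                                      for a, b in zip(p, centroids[c])))
            clusters[i].append(p)
        for i, members in enumerate(clusters):
            if members:  # keep the old centroid if a cluster empties out
                centroids[i] = tuple(sum(xs) / len(members)
                                     for xs in zip(*members))
    return centroids, clusters
```

DBSCAN and EM clustering differ mainly in this assignment step: DBSCAN grows clusters from density-connected neighborhoods, while EM replaces hard assignments with per-cluster membership probabilities.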
This document describes Dense-Sparse Phrase Index (DenSPI), a new approach for open-domain question answering that can retrieve and read the entire Wikipedia in 0.5 seconds using CPUs. DenSPI indexes phrases from Wikipedia using both dense representations based on BERT to capture semantic and syntactic information, as well as sparse representations based on bag-of-words to capture lexical information. It uses a combination of dense and sparse retrieval and nearest neighbor search techniques to efficiently match questions to relevant phrases. Evaluation shows DenSPI can perform open-domain QA 44x faster than prior work while maintaining or achieving higher accuracy.
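A minimal sketch of how a combined dense + sparse phrase score can be computed, assuming illustrative vectors and a mixing weight `lam` (DenSPI's real index uses approximate nearest-neighbor search over billions of phrases rather than a linear scan like this):

```python
def phrase_score(q_dense, p_dense, q_sparse, p_sparse, lam=1.0):
    """Score = dense inner product + lam * sparse bag-of-words overlap.

    q_sparse/p_sparse map terms to weights; only shared terms contribute,
    so the sparse part never materializes a vocabulary-sized vector.
    """
    dense = sum(a * b for a, b in zip(q_dense, p_dense))
    sparse = sum(w * p_sparse[t] for t, w in q_sparse.items() if t in p_sparse)
    return dense + lam * sparse

def top_phrase(q_dense, q_sparse, phrases):
    """phrases: list of (text, dense_vec, sparse_weights) tuples."""
    return max(phrases,
               key=lambda p: phrase_score(q_dense, p[1], q_sparse, p[2]))[0]
```

The dense part rewards semantic/syntactic similarity even without word overlap, while the sparse part rewards exact lexical matches, which is why the combination handles both paraphrased questions and rare terms.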
This document discusses how artificial intelligence could impact qualitative research. It begins by noting that speech-to-text capabilities are developing rapidly and could allow for faster transcription of interviews. This would increase the amount of text produced but could also be difficult to analyze. The document then discusses how text analytics using machine learning could help researchers find insights in large amounts of qualitative text data, but that these tools would need to be developed specifically for qualitative data, which uses informal language compared to structured data sources. Finally, it cautions that while word clouds may be a useful output for qualitative data, statistics have no role to play in qualitative research.
We reassess a recent study (Hassan et al., 2018) that claimed that machine translation (MT) has reached human parity for the translation of news from Chinese into English, using pairwise ranking and considering three variables that were not taken into account in that previous study: the language in which the source side of the test set was originally written, the translation proficiency of the evaluators, and the provision of inter-sentential context. If we consider only original source text (i.e. not translated from another language, or translationese), then we find evidence showing that human parity has not been achieved. We compare the judgments of professional translators against those of non-experts and discover that those of the experts result in higher inter-annotator agreement and better discrimination between human and machine translations. In addition, we analyse the human translations of the test set and identify important translation issues. Finally, based on these findings, we provide a set of recommendations for future human evaluations of MT.
Natural language processing (NLP) involves making computers understand human language to interpret unstructured text. NLP has applications in machine translation, speech recognition, question answering, and text summarization. Understanding language requires analyzing words, sentences, context and meaning. Common NLP tasks include tokenization, tagging parts of speech, and named entity recognition. Popular Python NLP libraries that can help with these tasks are NLTK, spaCy, Gensim, Pattern, and TextBlob.
This is a presentation I gave to fresh graduate students, intended to help them utilize various online resources to do better research.
The document provides an introduction to a course on natural language processing, outlining the course overview and the topics to be covered, including introductions to NLP and Watson, machine learning for NLP, and why NLP is difficult. It also gives information on the instructor, teaching assistant, homepages, office hours, course goals, organization, recommended textbooks, assignments, grading, and class policies.
This was a talk given at the annual GALA conference in Amsterdam on March 27th 2017. The topic is Neural Machine Translation. Where are we now? Neural Machine Translation is at the peak of a hype cycle. There is no doubt it is an emerging technology with massive potential, but it is not yet a sweeping solution to all ills. Several factors prevent NMT from being commercially ready. Expectations, therefore, need to be managed. That is the goal of this presentation.
This presentation was provided by Kyle Lo of The Allen Institute for AI (AI2) during the NISO hot topic event "Preprints." The virtual conference was held on April 21, 2021.
Here are some issues with entity precision in this example:
- The tissue samples are not precisely described (e.g. no identifiers, donor details)
- The antibodies are described by name but lack identifiers like catalog numbers
- References to other works (e.g. Porzig et al 2007) are not in a standard citation format
- Chemicals, proteins etc. are described by name only, without identifiers
Precisely describing entities with identifiers/standards would make the claims more reproducible and verifiable.
This document provides guidance on transcribing qualitative interviews. It discusses why transcription is important, potential layers of transcription detail, challenges like mishearing words, anonymization practices, and balancing participant privacy with contextual details. Resources for transcription like audio players, word processors, and anonymization logs are presented. Considerations for a transcription project like intended detail, anonymization approach, format, and contextual metadata are reviewed.
Programmers love science! At least, so they say. Because when it comes to the 'science' of developing code, the most used tool is brutal debate. Vim versus emacs, static versus dynamic typing, Java versus C#, this can go on for hours on end. In this session, software engineering professor Felienne Hermans will present the latest research in software engineering that tries to understand and explain what programming methods, languages and tools are best suited for different types of development.
Keynote presented at the International Association of University Libraries Conference (IATUL), 20 June 2017 in Bolzano, Italy. Library metadata was created to describe objects and enable a reader to understand when they had the same or a different object in hand. Now linked data concepts and techniques are allowing us to recreate, merge, and link our metadata assets in new ways that better support discovery - both in our local systems and on the wider web. Tennant described this migration and the potential it has for solving key discovery problems.
Presentation for the Connecticut State Library / Continuing Education, September 11, 2008. This innovative half-day workshop will provide background on usability and define the user experience (UX). We will offer a "live usability lab" with audience assessment of one library web site and provide time and resources to create usability scenarios for YOUR web resources. Attendees will participate in interactive usability testing to evaluate web-based library resources from the user's perspective. You will also develop questions and methodology to assess usability and the UX @ your library!
The merits of stand-off markup (LAF) versus inline markup (TEI) for processing text as data. Ideas applied to work with the Hebrew Bible, resulting in tools for researchers and end-users.
Natural language processing (NLP) is a subfield of artificial intelligence that studies how to process and understand human language, with the ultimate goal of enabling natural communication between humans and computers; it is an interdisciplinary field that draws from computer science, linguistics, psychology and other areas to allow computers to understand, generate and translate between different human languages. NLP techniques include morphology, lexicography, syntax, semantics and discourse analysis to analyze words, sentences and full conversations at different levels of meaning.
End-to-end goal-oriented question answering systems, version 2.0: an updated version, with references, of the old version (https://www.slideshare.net/QiHe2/kdd-2018-tutorial-end-toend-goaloriented-question-answering-systems). 08/22/2018: the old version has been deleted to reduce confusion.
This document provides an overview of question answering applications and challenges. It defines question answering as receiving natural language questions and providing concise answers. Recent developments in question answering systems are discussed, including IBM Watson. Challenges for question answering over semantic data are explored, such as lexical gaps, ambiguity, granularity, and alternative resources. Large-scale linguistic resources and machine learning approaches for question answering are also covered. Applications of question answering technologies are examined.
Presented on 24-06-2016, at VU Amsterdam Slides created collaboratively with Marten Postma & Piek Vossen (VU Amsterdam)
This presents a new resource for helping to find names of entities in social media. It takes an inclusive approach, meaning we get high variety in named entities - something other corpora have struggled with, leaving them poorly placed to help machine learning approaches generalise beyond the lexical level.
Wes McKinney gave the keynote presentation at PyCon APAC 2016 in Seoul. He discussed his work on Python data analysis tools like pandas, Apache Arrow, and Feather. He also talked about open source sustainability and governance. McKinney is working on the second edition of his book Python for Data Analysis, which is scheduled for release in 2017.
[233] High-availability network load balancing in large container clusters: the Maglev hashing scheduler in IPVS, Linux kernel
[236] A story of stream-store optimization: lessons learned from Apache Druid
The document discusses challenges with using reinforcement learning for robotics. While simulations allow fast training of agents, there is often a "reality gap" when transferring learning to real robots. Other approaches like imitation learning and self-supervised learning can be safer alternatives that don't require trial-and-error. To better apply reinforcement learning, robots may need model-based approaches that learn forward models of the world, as well as techniques like active localization that allow robots to gather targeted information through interactive perception. Closing the reality gap will require finding ways to better match simulations to reality or allow robots to learn from real-world experiences.
This document describes research on using deep learning to predict student performance in massive open online courses (MOOCs). It introduces GritNet, a model that takes raw student activity data as input and predicts outcomes like course graduation without feature engineering. GritNet outperforms baselines by more than 5% in predicting graduation. The document also describes how GritNet can be adapted in an unsupervised way to new courses using pseudo-labels, improving predictions in the first few weeks. Overall, GritNet is presented as the state-of-the-art for student prediction and can be transferred across courses without labels.
This document provides a summary of new datasets and papers related to computer vision tasks including object detection, image matting, person pose estimation, pedestrian detection, and person instance segmentation. A total of 8 papers and their associated datasets are listed with brief descriptions of the core contributions or techniques developed in each.
For a version of this material in which the figures render correctly, please see the following link: https://www.slideshare.net/deview/233-network-load-balancing-maglev-hashing-scheduler-in-ipvs-linux-kernel
This document presents a formula for calculating the loss function J(θ) in machine learning models. The formula averages the negative log likelihood of the predicted probabilities being correct over all samples S, and includes a regularization term λ that penalizes predicted embeddings being dissimilar from actual embeddings. It also defines the cosine similarity term used in the regularization.
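A plain-Python sketch of a loss with this shape, assuming per-sample predicted probabilities and predicted/actual embedding pairs (the document's exact formula may weight or normalize the terms differently):

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def loss(probs, pred_embs, true_embs, lam):
    """J(theta) = mean negative log likelihood over the samples
    + lam * mean (1 - cosine) penalty for predicted embeddings
    that are dissimilar from the actual embeddings."""
    n = len(probs)
    nll = -sum(math.log(p) for p in probs) / n
    reg = sum(1 - cosine(pe, te)
              for pe, te in zip(pred_embs, true_embs)) / n
    return nll + lam * reg
```

With perfect predictions (probability 1, identical embeddings) both terms vanish and the loss is zero, which is a quick sanity check on the formula.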
[225] NSML: operating a machine-learning platform as a service & automating model tuning
[216] Search Reliability Engineering (subtitle: a Naver search system that doesn't shake even in an earthquake)
The document discusses running a TensorFlow Serving (TFS) container using Docker. It shows commands to:
1. Pull the TFS Docker image from a repository
2. Define a script to configure and run the TFS container, specifying the model path, name, and port mapping
3. Run the script to start the TFS container, exposing port 13377
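A hedged sketch of such a run script (the image tag, model name, and model path are assumptions; 8501 is TensorFlow Serving's conventional REST port, mapped here to the document's host port 13377):

```shell
# Step 1: pull the TensorFlow Serving image (run this manually first):
#   docker pull tensorflow/serving

MODEL_NAME=my_model          # illustrative model name
MODEL_PATH=/models/my_model  # illustrative SavedModel directory on the host
HOST_PORT=13377              # host port from the document

# Steps 2-3: the command the run script would execute: bind-mount the
# model directory into the container and map the host port to the
# container's REST port 8501.
CMD="docker run -d -p ${HOST_PORT}:8501 \
--mount type=bind,source=${MODEL_PATH},target=/models/${MODEL_NAME} \
-e MODEL_NAME=${MODEL_NAME} tensorflow/serving"

echo "$CMD"
```

Once running, the model would be reachable at `http://localhost:13377/v1/models/my_model` under this mapping.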
The document discusses linear algebra concepts including:
- Representing a system of linear equations as a matrix equation Ax = b, where A is a coefficient matrix, x is a vector of unknowns, and b is a vector of constants
- Solving for the vector x that satisfies the matrix equation using linear algebra techniques such as row reduction
- Examples of matrix equations and their component vectors
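The row-reduction approach above can be sketched in plain Python as Gaussian elimination on the augmented matrix [A | b] (partial pivoting is added for numerical stability; a library routine would be used in practice):

```python
def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting.
    A is a list of rows; b is the constants vector. Returns x."""
    n = len(A)
    # build the augmented matrix [A | b]
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        # pivot: swap in the row with the largest entry in this column
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        # eliminate the entries below the pivot
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    # back-substitution up the now upper-triangular system
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c]
                              for c in range(r + 1, n))) / M[r][r]
    return x
```

For example, the system 2x + y = 5, x + 3y = 10 is the matrix equation with A = [[2, 1], [1, 3]] and b = [5, 10], and row reduction yields x = 1, y = 3.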
This document describes the steps to convert a TensorFlow model to a TensorRT engine for inference. It includes steps to parse the model, optimize it, generate a runtime engine, serialize and deserialize the engine, as well as perform inference using the engine. It also provides code snippets for a PReLU plugin implementation in C++.
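The PReLU activation such a plugin computes is simple even though the plugin machinery is not; a scalar Python sketch (the 0.25 default slope is illustrative, and real plugins typically learn a slope per channel):

```python
def prelu(x, alpha=0.25):
    """PReLU: identity for non-negative inputs, slope alpha for
    negative ones. Custom plugins exist because parsers may not map
    this activation to a builtin TensorRT layer."""
    return x if x >= 0 else alpha * x
```

A plugin wraps exactly this elementwise function behind the engine's layer interface so the optimizer can schedule it alongside builtin layers.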
[242] Automatically updating indoor maps with computer vision: POI change detection via deep learning
This presentation, delivered at the Postgres Bangalore (PGBLR) Meetup-2 on June 29th, 2024, dives deep into connection pooling for PostgreSQL databases. Aakash M, a PostgreSQL Tech Lead at Mydbops, explores the challenges of managing numerous connections and explains how connection pooling optimizes performance and resource utilization.

Key takeaways:
* Understand why connection pooling is essential for high-traffic applications
* Explore various connection poolers available for PostgreSQL, including pgbouncer
* Learn the configuration options and functionalities of pgbouncer
* Discover best practices for monitoring and troubleshooting connection pooling setups
* Gain insights into real-world use cases and considerations for production environments

This presentation is ideal for:
* Database administrators (DBAs)
* Developers working with PostgreSQL
* DevOps engineers
* Anyone interested in optimizing PostgreSQL performance

Contact info@mydbops.com for PostgreSQL Managed, Consulting and Remote DBA Services.
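A minimal pgbouncer.ini sketch showing the kind of options the talk covers (the host, database name, file paths, and pool sizes here are illustrative, not recommendations):

```ini
; minimal pgbouncer configuration sketch
[databases]
appdb = host=127.0.0.1 port=5432 dbname=appdb

[pgbouncer]
listen_addr = 127.0.0.1
listen_port = 6432
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
pool_mode = transaction       ; connection released back to pool per transaction
max_client_conn = 1000        ; clients pgbouncer itself will accept
default_pool_size = 20        ; server connections per database/user pair
```

The gap between `max_client_conn` and `default_pool_size` is the point of pooling: many client connections share a small, bounded set of real server connections.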
An invited talk given by Mark Billinghurst on Research Directions for Cross Reality Interfaces. This was given on July 2nd 2024 as part of the 2024 Summer School on Cross Reality in Hagenberg, Austria (July 1st - 7th)
To help you choose the best DiskWarrior alternative, we've compiled a comparison table summarizing the features, pros, cons, and pricing of six alternatives.
Everything that I found interesting about machines behaving intelligently during June 2024
Password Rotation in 2024 is still Relevant
As a popular open-source library for analytics engineering, dbt is often used in combination with Airflow. Orchestrating and executing dbt models as DAGs ensures an additional layer of control over tasks, observability, and provides a reliable, scalable environment to run dbt models. This webinar will cover a step-by-step guide to Cosmos, an open source package from Astronomer that helps you easily run your dbt Core projects as Airflow DAGs and Task Groups, all with just a few lines of code.

We'll walk through:
- Standard ways of running dbt (and when to utilize other methods)
- How Cosmos can be used to run and visualize your dbt projects in Airflow
- Common challenges and how to address them, including performance, dependency conflicts, and more
- How running dbt projects in Airflow helps with cost optimization

Webinar given on 9 July 2024.
These fighter aircraft have uses outside of traditional combat situations. They are essential in defending India's territorial integrity, averting dangers, and delivering aid to those in need during natural calamities. Additionally, the IAF improves its interoperability and fortifies international military alliances by working together and conducting joint exercises with other air forces.
This presentation explores the practical application of image description techniques. Familiar guidelines will be demonstrated in practice, and descriptions will be developed “live”! If you have learned a lot about the theory of image description techniques but want to feel more confident putting them into practice, this is the presentation for you. There will be useful, actionable information for everyone, whether you are working with authors, colleagues, alone, or leveraging AI as a collaborator. Link to presentation recording and slides: https://bnctechforum.ca/sessions/details-of-description-part-ii-describing-images-in-practice/ Presented by BookNet Canada on June 25, 2024, with support from the Department of Canadian Heritage.
Solar storms (geomagnetic storms) are driven by charged particles accelerated to high velocities in the solar environment by coronal mass ejections (CMEs).
In the modern digital era, social media platforms have become integral to our daily lives. These platforms, including Facebook, Instagram, WhatsApp, and Snapchat, offer countless ways to connect, share, and communicate.
Sustainability requires ingenuity and stewardship. Did you know Pigging Solutions pigging systems help you achieve your sustainable manufacturing goals AND provide rapid return on investment? How? Our systems recover over 99% of product in transfer piping. Recovering trapped product from transfer lines that would otherwise become flush waste means you can increase batch yields and eliminate flush waste. From raw materials to finished product, if you can pump it, we can pig it.
Kief Morris rethinks the infrastructure code delivery lifecycle, advocating for a shift towards composable infrastructure systems. We should shift to designing around deployable components rather than code modules, use more useful levels of abstraction, and drive design and deployment from applications rather than bottom-up, monolithic architecture and delivery.
Invited Remote Lecture to SC21 The International Conference for High Performance Computing, Networking, Storage, and Analysis St. Louis, Missouri November 18, 2021
Are you interested in dipping your toes in the cloud native observability waters, but as an engineer you are not sure where to get started with tracing problems through your microservices and application landscapes on Kubernetes? Then this is the session for you, where we take you on your first steps in an active open-source project that offers a buffet of languages, challenges, and opportunities for getting started with telemetry data. The project is called OpenTelemetry, but before diving into the specifics, we'll start by de-mystifying key concepts and terms such as observability, telemetry, instrumentation, cardinality, and percentile to lay a foundation. After understanding the nuts and bolts of observability and distributed traces, we'll explore the OpenTelemetry community: its Special Interest Groups (SIGs), repositories, and how to become not only an end user but possibly a contributor. We will wrap up with an overview of the components in this project, such as the Collector, the OpenTelemetry protocol (OTLP), its APIs, and its SDKs. Attendees will leave with an understanding of key observability concepts, become grounded in distributed tracing terminology, be aware of the components of OpenTelemetry, and know how to take their first steps to an open-source contribution!

Key takeaways: Open source, vendor-neutral instrumentation is an exciting new reality as the industry standardizes on OpenTelemetry for observability. OpenTelemetry is on a mission to enable effective observability by making high-quality, portable telemetry ubiquitous. The world of observability and monitoring today has a steep learning curve, and in order to achieve ubiquity, the project would benefit from growing our contributor community.
Your comprehensive guide to RPA in healthcare for 2024. Explore the benefits, use cases, and emerging trends of robotic process automation. Understand the challenges and prepare for the future of healthcare automation.
Six months into 2024, and it is clear the privacy ecosystem takes no days off!! Regulators continue to implement and enforce new regulations, businesses strive to meet requirements, and technology advances like AI have privacy professionals scratching their heads about managing risk.

What can we learn about the first six months of data privacy trends and events in 2024? How should this inform your privacy program management for the rest of the year? Join TrustArc, Goodwin, and Snyk privacy experts as they discuss the changes we've seen in the first half of 2024 and gain insight into the concrete, actionable steps you can take to up-level your privacy program in the second half of the year.

This webinar will review:
- Key changes to privacy regulations in 2024
- Key themes in privacy and data governance in 2024
- How to maximize your privacy program in the second half of 2024
Is your patent a vanity piece of paper for your office wall? Or is it a reliable, defendable, assertable property right? The difference is often quality. Is your patent simply a transactional cost and a large pile of legal bills for your startup? Or is it a leverageable asset worthy of attracting precious investment dollars, worth its cost in multiples of valuation? The difference is often quality. Is your patent application only good enough to get through the examination process? Or has it been crafted to stand the tests of time and varied audiences if you later need to assert that document against an infringer, find yourself litigating with it in an Article 3 Court at the hands of a judge and jury, God forbid, end up having to defend its validity at the PTAB, or even needing to use it to block pirated imports at the International Trade Commission? The difference is often quality.

Quality will be our focus for a good chunk of the remainder of this season. What goes into a quality patent, and where possible, how do you get it without breaking the bank?

** Episode Overview **
In this first episode of our quality series, Kristen Hansen and the panel discuss:
⦿ What do we mean when we say patent quality?
⦿ Why is patent quality important?
⦿ How to balance quality and budget
⦿ The importance of searching, continuations, and draftsperson domain expertise
⦿ Very practical tips, tricks, examples, and Kristen's Musts for drafting quality applications

https://www.aurorapatents.com/patently-strategic-podcast.html
This is a slide deck that showcases the updates in Microsoft Copilot for May 2024
Cybersecurity is a major concern in today's connected digital world. Threats to organizations are constantly evolving and have the potential to compromise sensitive information, disrupt operations, and lead to significant financial losses. Traditional cybersecurity techniques often fall short against modern attackers. Therefore, advanced techniques for cyber security analysis and anomaly detection are essential for protecting digital assets. This blog explores these cutting-edge methods, providing a comprehensive overview of their application and importance.