Sharing my thoughts and cases on collaboration between testing and development, in two broad approaches.
The first is the engineering approach:
1. Early testing education
2. Test design
3. Test code guide
4. Pair testing and pair programming
5. Test automation
The second is strategic activities:
1. Test strategy/plan
2. Test analysis/report
I also wanted to mention testers' various career paths.
Thank you.
[Online Education Series] How to Build Infrastructure for Global Services (남용현, Cloud Solution Architect)
When building a service for a global audience, this session introduces in detail the services available on NAVER Cloud Platform and the considerations at the infrastructure level.
The document discusses various machine learning clustering algorithms like K-means clustering, DBSCAN, and EM clustering. It also discusses neural network architectures like LSTM, bi-LSTM, and convolutional neural networks. Finally, it presents results from evaluating different chatbot models on various metrics like validation score.
Introduction to Agile and Agile Testing (Basic Testing Education, Chapter 3, Section 2)
A case-based introduction to Scrum and XP, and to the tester's role within a Scrum team: user story reviews, test design, pair testing, test automation, and more.
When Development Met Test (Shift-Left Testing) - SangIn Choung
The document discusses challenges with using reinforcement learning for robotics. While simulations allow fast training of agents, there is often a "reality gap" when transferring learning to real robots. Other approaches like imitation learning and self-supervised learning can be safer alternatives that don't require trial-and-error. To better apply reinforcement learning, robots may need model-based approaches that learn forward models of the world, as well as techniques like active localization that allow robots to gather targeted information through interactive perception. Closing the reality gap will require finding ways to better match simulations to reality or allow robots to learn from real-world experiences.
[243] Deep Learning to help student's Deep Learning (NAVER D2)
This document describes research on using deep learning to predict student performance in massive open online courses (MOOCs). It introduces GritNet, a model that takes raw student activity data as input and predicts outcomes like course graduation without feature engineering. GritNet outperforms baselines by more than 5% in predicting graduation. The document also describes how GritNet can be adapted in an unsupervised way to new courses using pseudo-labels, improving predictions in the first few weeks. Overall, GritNet is presented as the state-of-the-art for student prediction and can be transferred across courses without labels.
[234] Fast & Accurate Data Annotation Pipeline for AI applications (NAVER D2)
This document provides a summary of new datasets and papers related to computer vision tasks including object detection, image matting, person pose estimation, pedestrian detection, and person instance segmentation. A total of 8 papers and their associated datasets are listed with brief descriptions of the core contributions or techniques developed in each.
[226] NAVER ads deep click prediction: from modeling to serving (NAVER D2)
This document presents a formula for calculating the loss function J(θ) in machine learning models. The formula averages the negative log likelihood of the predicted probabilities being correct over all samples S, and includes a regularization term λ that penalizes predicted embeddings being dissimilar from actual embeddings. It also defines the cosine similarity term used in the regularization.
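Under that description, the loss can be written out as follows. This is a reconstruction, not the slide's own notation: p_θ(y_s) stands for the predicted probability of the correct label for sample s, and ê_s, e_s for the predicted and actual embeddings.

J(θ) = −(1/|S|) Σ_{s∈S} log p_θ(y_s) + λ Σ_{s∈S} (1 − cos(ê_s, e_s))

where the cosine similarity is cos(a, b) = (a · b) / (‖a‖ ‖b‖), so the regularization term grows as predicted embeddings become dissimilar from actual ones.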
[214] AI Serving Platform: the struggle to handle hundreds of millions of inferences a day (NAVER D2)
The document discusses running a TensorFlow Serving (TFS) container using Docker. It shows commands to:
1. Pull the TFS Docker image from a repository
2. Define a script to configure and run the TFS container, specifying the model path, name, and port mapping
3. Run the script to start the TFS container exposing port 13377
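The steps above can be sketched in Python as a composed command list. This is a hedged sketch: the model name and path are placeholders, and mapping host port 13377 to container port 8500 assumes TFS's standard gRPC port, which the document does not spell out.

```python
# Compose the `docker run` invocation for a TensorFlow Serving container.
MODEL_NAME = "my_model"          # hypothetical model name
MODEL_PATH = "/models/my_model"  # hypothetical model directory on the host

def tfs_run_command(model_name, model_path, host_port):
    """Build the docker argument list; step 1 (docker pull tensorflow/serving)
    is assumed to have been done already."""
    return [
        "docker", "run", "-d", "--name", "tfs",
        "-p", f"{host_port}:8500",                   # expose TFS gRPC port
        "-v", f"{model_path}:/models/{model_name}",  # mount the model files
        "-e", f"MODEL_NAME={model_name}",
        "tensorflow/serving",
    ]

cmd = tfs_run_command(MODEL_NAME, MODEL_PATH, 13377)
# subprocess.run(cmd, check=True)  # uncomment to actually start the container
```

Keeping the command as a list (rather than a shell string) avoids quoting issues when the model path contains spaces.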
The document discusses linear algebra concepts including:
- Representing a system of linear equations as a matrix equation Ax = b where A is a coefficient matrix, x is a vector of unknowns, and b is a vector of constants.
- Solving for the vector x that satisfies the matrix equation using linear algebra techniques such as row reduction.
- Examples of matrix equations and their component vectors are shown.
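As a minimal concrete instance of Ax = b, here is a small system solved numerically; the particular equations and values are my own illustration, not taken from the document.

```python
import numpy as np

# System of equations:   x + 2y = 5
#                       3x + 4y = 11
A = np.array([[1.0, 2.0],
              [3.0, 4.0]])   # coefficient matrix A
b = np.array([5.0, 11.0])    # constant vector b

x = np.linalg.solve(A, b)    # direct solve, equivalent in effect to row reduction
assert np.allclose(A @ x, b) # substituting x back satisfies the equation
```

Here x comes out as approximately [1, 2], i.e. x = 1, y = 2.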
This document describes the steps to convert a TensorFlow model to a TensorRT engine for inference. It includes steps to parse the model, optimize it, generate a runtime engine, serialize and deserialize the engine, as well as perform inference using the engine. It also provides code snippets for a PReLU plugin implementation in C++.
The document discusses machine reading comprehension (MRC) techniques for question answering (QA) systems, comparing search-based and natural language processing (NLP)-based approaches. It covers key milestones in the development of extractive QA models using NLP, from early sentence-level models to current state-of-the-art techniques like cross-attention, self-attention, and transfer learning. It notes the speed and scalability benefits of combining search and reading methods for QA.
3. Motivation & Purpose
• We thought that if app developers understood flash memory I/O behavior, they could build faster apps.
• The talk proceeds by measuring performance and explaining the causes through analysis of I/O behavior.
• We will introduce flash memory that has recently been released or will arrive soon.
• The diagrams are simplified to aid understanding, so some parts may be somewhat exaggerated.
23. Major interface speeds
[Bar chart, MB/s (scale 0 to 600), comparing: LTE Category 10 DL, IEEE 802.11ac, SD card class 10, eMMC 5.0, USB 3.0, UFS 2.0 HS-GEAR3]
24. Compressed-app performance test on UFS
[Bar chart comparing elapsed times for apps A, B, and C; eMMC: 00:16, 00:26, 01:20 vs. UFS: 00:09, 00:17, 00:36]
25. eMMC is half, UFS is full
[Timeline diagram: on UFS, host-side ZIP (compression) overlaps with storage READ/WRITE; on eMMC, READ, WRITE, and ZIP are serialized in turn between host and storage]
27. Difference in eMMC and UFS transfer methods
[Timeline diagram contrasting host-to-storage transfers of data units D0 to D3 on eMMC vs. UFS]
28. UFS benchmark results by thread count
[Two bar charts, 4KB Random Read and 4KB Random Write: throughput vs. number of threads (1, 4, 8), comparing DDP and QDP]
35. Barrier Command
[Diagram: with flush commands (F), each write group must wait for RAM-to-NAND completion; with barrier commands (B), write groups G1, G2, G3 keep their order without waiting for each flush]
36. SQLite write request frequency
[Stacked bar chart, 0% to 100%, of write requests by type (Journal, Meta, SQLite, Normal Data) across workloads: web surfing, camera, facebook, contacts, file copy, hangouts, movie player, image viewer, music player, YouTube, video recording]
37. SQLite chunk size ratios
[Stacked bar chart, 0% to 100%, of write chunk sizes (4KB, 8KB, 12~16KB, 20~32KB, 36~64KB, 68~512KB) across the same workloads]
41. Closing
• Use multiple threads when possible; flush only when necessary.
• As the ecosystem advances, faster I/O environments will be built.
• Use RAM efficiently.
• 4K UHD video and VR recording demand faster and larger storage.
• It would be nice to see more apps that take advantage of fast flash memory.
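The first bullet above (flush only when it is actually needed) can be sketched as follows; this is a hypothetical Python illustration, not code from the talk.

```python
import os
import tempfile

def write_records(path, records):
    """Write many small records, then flush/fsync once at the end,
    instead of forcing a storage flush after every single write."""
    with open(path, "w") as f:
        for r in records:
            f.write(r + "\n")   # buffered writes, no per-record flush
        f.flush()               # push Python's buffer down to the OS
        os.fsync(f.fileno())    # one durable flush where it actually matters

path = os.path.join(tempfile.mkdtemp(), "log.txt")
write_records(path, ["a", "b", "c"])
```

Batching the durability point this way avoids the repeated RAM-to-NAND waits that the barrier-command slide illustrates for per-write flushes.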