This document discusses becoming a data-driven organization. It recommends investing in robust data extraction and loading processes. Quick wins should be obvious improvements that were previously overlooked. Analyses should minimize friction, and experimentation is important. Measurements should focus on distributions rather than single numbers, and should capture relationships with external factors. The checklist: invest in data, find quick wins, be methodical, focus on one key performance indicator at a time while changing it often, and avoid stale metrics.
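The point about distributions versus single numbers can be made concrete with a small sketch (the metric and values below are hypothetical, not from the document): a single mean can be dominated by one outlier, while quartiles show the shape of the data.

```python
import statistics

def summarize(values):
    """Summarize a metric as a distribution (quartiles) rather than one number."""
    q1, median, q3 = statistics.quantiles(values, n=4)  # default exclusive method
    return {"min": min(values), "q1": q1, "median": median,
            "q3": q3, "max": max(values)}

# Hypothetical response-time samples; the outlier (95) skews the mean.
latencies = [12, 14, 15, 15, 16, 18, 95]
print(statistics.mean(latencies))   # misleading single number (~26.4)
print(summarize(latencies))         # the quartiles tell the real story
```

Here the mean suggests typical values around 26, while the distribution shows the bulk of observations sit between 14 and 18.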
Planning the user experience is essential even before a website goes live, since this is how we can ensure that web marketing has a better effect. It is important to test every change made to the website.
This document discusses doing data science with Clojure. It notes that Clojure excels at structure manipulation and encoding through functions over collections without rigid data structures. This allows for composable and fast data analysis in a way that focuses on the intent through consistent APIs and currying. Live programming is also discussed as a way to catch errors early and enable faster iteration through more context and easier debugging. The ecosystem of Clojure tools is presented as facilitating tasks like machine learning, plotting, and using notebooks as dashboards.
Whenever a programming language comes out with a new feature, we smug lisp weenies shrug and point out how lisp had that in the early seventies; and if you look at the list of influences of a given language, there is bound to be a lisp in there. In this talk I will try to unpack what makes lisp special, why it is called the programmable programming language, how it changes one’s thinking, and how that thinking can be applied elsewhere.
Dynamic Shuttle Platform provides an affordable, convenient airport shuttle service for passengers and a marketing, sales, and operations platform for shuttle companies. It operates in cities with populations between 100,000 and 800,000 that lack a major nearby airport. The platform allows passengers to book and pay for shuttle rides online or via mobile app, and provides route planning, pricing, and customer support to shuttle companies. It has transported over 700,000 passengers with a high customer satisfaction score and becomes profitable in new cities within 6-8 months with an investment of €150,000 per city.
Having programmers do data science is terrible, if only everyone else were not even worse. The problem is of course tools. We seem to have settled on either: a bunch of disparate libraries thrown into a more or less agnostic IDE, or some point-and-click wonder which, no matter how glossy, never seems to truly fit our domain once we get down to it. The dual lisp tradition of grow-your-own-language and grow-your-own-editor gives me hope there is a third way. This presentation is a meditation on how I approach data problems with Clojure, what I believe the process of doing data science should look like, and the tools needed to get there. Some already exist (or can at least be bodged together); others can be made with relative ease (and we are already working on some of these); but a few will take a lot more hammock time. Talk delivered at :clojureD 2016 http://www.clojured.de/
The Net Promoter Score is calculated by subtracting the percentage of customers who are Detractors from the percentage of customers who are Promoters. With Google Analytics we can analyze individual segments from different perspectives: acquisition, behaviour, and conversions, and we can also do remarketing on the basis of all the gathered data.
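The NPS calculation described above can be sketched in a few lines of Python. This assumes the standard 0-10 likelihood-to-recommend scale, where Promoters score 9-10 and Detractors score 0-6 (the sample scores are made up for illustration):

```python
def nps(scores):
    """Net Promoter Score: % Promoters (9-10) minus % Detractors (0-6)."""
    if not scores:
        raise ValueError("no survey responses")
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

# 5 promoters, 3 detractors out of 10 responses -> (50% - 30%) = 20
print(nps([10, 9, 9, 8, 7, 6, 3, 10, 2, 9]))  # 20.0
```

The result ranges from -100 (all Detractors) to +100 (all Promoters); respondents scoring 7-8 (Passives) count toward the total but toward neither group.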
The most common mistakes in optimizing online stores and how to fix them
Successfully forecasting future demand is key to allowing GoOpti to offer its low prices while isolating transport partners from risk. In this talk Simon Belak, Chief Data Scientist at GoOpti, will take you through how he approaches forecasting and the lessons he learned along the way. The focus is going to be on models that do not require excessive amounts of data, are legible, and work well as part of a continuous process (rather than being a one-off problem).
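As a hypothetical illustration of a model that is legible, needs little data, and can run as a continuous process (this is not GoOpti's actual method, just one of the simplest models fitting that description), simple exponential smoothing keeps a single running level that updates with each new observation:

```python
def ses_forecast(series, alpha=0.3):
    """Simple exponential smoothing: the one-step-ahead forecast is a
    running level, nudged toward each new observation by factor alpha."""
    level = series[0]
    for x in series[1:]:
        level = alpha * x + (1 - alpha) * level
    return level

# Hypothetical weekly demand figures.
demand = [120, 132, 125, 140, 138]
print(round(ses_forecast(demand), 1))  # 131.6
```

Because the state is one number and the update is one line, the model stays easy to inspect and can be re-run as each new data point arrives.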
Having programmers do data science is terrible, if only everyone else were not even worse. The problem is of course tools. We seem to have settled on either: a bunch of disparate libraries thrown into a more or less agnostic IDE, or some point-and-click wonder which, no matter how glossy, never seems to truly fit our domain once we get down to it. The dual lisp tradition of grow-your-own-language and grow-your-own-editor gives me hope there is a third way. This talk is a meditation on the ideal environment for doing data science and how to (almost) get there. I will cover how I approach data problems with Clojure (and why Clojure in the first place), what I believe the process of doing data science should look like, and the tools needed to get there. Some already exist (or can at least be bodged together); others can be made with relative ease (and we are already working on some of these); but a few will take a lot more hammock time.
In the end, results are the only thing that matters. You need to ask yourselves: what does my digital revenue stream look like? How can I establish advanced digital metrics and define KPIs to understand my customers better? How can I track them through the whole consumer decision journey? Which areas should I focus on to improve my business results? Looking through the eyes of a CEO and adviser to numerous companies in the digital field, participants will receive valuable advice on how to growth hack their digital marketing.
Targeting customers across multiple channels. With only a single channel, the investment in marketing would not pay off. Not all customers are equally valuable. A long-term relationship with customers needs to be developed.
Clojure has always been good at manipulating data. With the release of spec and Onyx (“a masterless, cloud scale, fault tolerant, high performance distributed computation system”) good became best. In this talk you will learn about a data layer architecture built around Kafka and Onyx that is self-describing, declarative, scalable and convenient to work with for the end user. The focus will be on the power and elegance of describing data and computation with data; and the inferences and automations that can be built on top of that.
In this talk, you will discover how the 15k LOC codebase was implemented with spec so you don't have to (but probably should). Validation; testing; destructuring; composable “data macros” via conformers; we’ve tried spec in all its multifaceted glory. You will discover a distillation of lessons learned interspersed with musing on how spec alters development flow and one’s thinking.
1) The document discusses how consumer behavior has fundamentally changed, with 80% of conversions including more than one digital channel and over 10,000 keywords used by users. 2) It emphasizes the importance of understanding the customer's entire decision journey, from initial interactions to transactions, and establishing metrics to track behavior and outcomes across different stages. 3) The author advocates adapting marketing efforts to better target users throughout their journey, with content tailored based on their needs and goals at different points in time.
What insights can we get from reports in Google Analytics that are based on Enhanced Ecommerce data?
Simon Belak (@sbelak), Using Onyx in anger. Clojure has always been good at manipulating data. With the release of spec and Onyx ("a masterless, cloud scale, fault tolerant, high performance distributed computation system") good became best. In this talk I will walk you through a data layer architecture built around Kafka and Onyx that is self-describing, declarative, scalable and convenient to work with for the end user. The focus will be on the power and elegance of describing data and computation with data; and the inferences and automations that can be built on top of that.
This webinar is designed to help companies strengthen their competitive advantage by leveraging publicly available Web sources.