.NET Core + ASP.NET Core Training Course
Session 3
.NET Core
What did we learn?
Session 1,2 Overview
• An Introduction to .NET Core 1.0
• .NET Core Components (CoreFX, CLR)
• .NET Core Deployment Models (Portable Apps, Self-contained Apps)
• .NET Core project structure
• An overview of the .NET Standard Library
• .NET Portability Analyzer
• .NET Core Tools (.NET CLI)
.NET Core
What will we learn today?
Session 3 Agenda
• Introduction to Compilers
• What is LLVM?
• LLILC
• RyuJIT
• AOT Compilation
• Preprocessors and Conditional Compilation
• An Overview of Dependency Injection
• Demos
.NET Core
Adaptability
Introduction to Compilers
We learned that .NET Core is built on two main parts:
• CoreCLR
Includes the garbage collector, the JIT compiler,
the base .NET data types, and many low-level
classes.
• CoreFX
Platform-neutral code that is shared across all
platforms. Platform-neutral code can be
implemented as a single portable assembly that
can be used on all platforms.
Windows has a larger implementation since CoreFX implements some
Windows-only features, such as Microsoft.Win32.Registry but does not yet
implement any Unix-only concepts.
.NET Core
Adaptability
Introduction to Compilers
There is a mix of platform-specific and platform-neutral libraries in .NET Core.
• CoreCLR is platform-specific. It's built in C/C++, so is platform-specific by construction.
• System.IO and System.Security.Cryptography.Algorithms are platform-specific, given that the storage
and cryptography APIs differ significantly on each OS.
• System.Collections and System.Linq are platform-neutral, given that they create and operate over
data structures.
.NET Core
LLVM
What is LLVM?
The LLVM compiler infrastructure project (formerly Low Level Virtual Machine) is
a "collection of modular and reusable compiler and toolchain technologies"
used to develop compiler front ends and back ends.
It is written in C and C++ and is designed for compile-time, link-time, run-time, and "idle-time" optimization of
programs written in arbitrary programming languages.
The project started in 2000 at the University of Illinois at Urbana–Champaign, originally developed as a research infrastructure
to investigate dynamic compilation techniques for static and dynamic programming languages.
.NET Core
LLVM
What is LLVM?
LLVM can provide the middle layers of a complete compiler system, taking intermediate representation (IR) code from
a compiler and emitting an optimized IR. This new IR can then be converted and linked into machine-dependent
assembly language code for a target platform.
LLVM can also generate relocatable machine code at compile-time or link-time, or even binary machine code at run-time.
• Edit time
• Compile time
• Distribution time
• Installation time
• Link time
• Load time
• Run time
Program lifecycle phase
• Front ends: programming language support
C#, Java bytecode, Swift, Python, R, Ruby, Objective-C, Sony PlayStation 4 SDK, Common Lisp, D, Delphi, Fortran, OpenGL Shading Language, Scala, etc.
• Back ends: instruction set and microarchitecture support
ARM, Qualcomm Hexagon, MIPS, Nvidia Parallel Thread Execution (PTX), PowerPC, AMD TeraScale, AMD Graphics Core Next (GCN), SPARC, XCore,
x86/x86-64, and z/Architecture.
LLVM Components
.NET Core
LLVM
What is LLVM?
In fact, the name LLVM might refer to any of the following:
The LLVM project/infrastructure: This is an umbrella for several projects that, together, form a complete compiler: frontends, backends, optimizers, assemblers,
linkers, libc++, compiler-rt, and a JIT engine. ("LLVM is comprised of several projects.")
An LLVM-based compiler: This is a compiler built partially or completely with the LLVM infrastructure. For example, a compiler might use LLVM for the frontend and backend but
use GCC and GNU system libraries to perform the final link. ("I used LLVM to compile C programs to a MIPS platform.")
LLVM libraries: This is the reusable code portion of the LLVM infrastructure. ("My project uses LLVM to generate code through its Just-in-Time compilation framework.")
LLVM core: The optimizations that happen at the intermediate language level and the backend algorithms form the LLVM core, where the project started. ("LLVM and Clang are
two different projects.")
The LLVM IR: This is the LLVM compiler intermediate representation. ("I built a frontend that translates my own language to LLVM.")
Getting Started with LLVM Core Libraries
Bruno Cardoso Lopes, Rafael Auler
.NET Core
LLILC
LLILC - Overview
LLILC is an LLVM-based compiler for .NET Core. It includes a set of cross-platform .NET code generation tools
that enable compilation of MSIL byte code to LLVM-supported platforms.
Today LLILC is being developed against dotnet/CoreCLR for use as a JIT, as well as a cross-platform object
emitter and disassembler that is used by CoreRT and other dotnet utilities.
• code generator based on LLVM for MSIL (C#)
• allow compilation of MSIL using industrial strength components from a C++ compiler
The LLILC architecture is split broadly into three logical components:
1. High-level MSIL transforms, which expand high-level semantics into more MSIL
2. High-level type optimizations, which remove unneeded types from the program
3. Translation to LLVM BitCode and code generation
Pronunciation is: 'lilac'
Today we're building a JIT to allow us to validate the MSIL translation to BitCode as well as build muscle on LLVM. This will be followed
by work on the required high level transforms, like method delegates, and generics, to get the basics working for AOT, and lastly the
type based optimizations to improve code size and code quality.
.NET Core
LLILC - Architectural Components
LLILC - Architectural Components
CoreCLR
The CoreCLR is the open source dynamic execution environment for MSIL (C#).
It provides:
• a dynamic type system
• a code manager that organizes compilation
• an execution engine (EE)
Additionally the runtime provides the helpers, type tests, and memory barriers required by the code generator for
compilation.
Garbage Collector
The CLR relies on a precise, relocating garbage collector. This garbage collector is used within CoreCLR for the JIT
compilation model, and within the native runtime for the AOT model.
.NET Core
LLILC - Architectural Components
LLILC - Architectural Components
LLVM
LLVM is a great code generator that supports lots of platforms and CPU targets. It also has facilities to be used as
both a JIT and an AOT compiler.
IL Transforms (this area is not yet fully defined; further design work is needed within the AOT tool)
IL transforms precondition the incoming MSIL to account for items like delegates, generics, and interop thunks. The
intent of the transform phases is to flatten and simplify the C# language semantics to allow a more straightforward
mapping to BitCode.
Type Based Optimizations (this area is not yet fully defined; further design work is needed within the AOT tool)
A number of optimizations can be done on the incoming program's type graph. The two key ones are tree shaking
and generics sharing. In tree shaking, unused types and fields are removed from the program to reduce code size and
improve locality. For generics sharing, generic method instances are shared where possible to reduce code size.
.NET Core
LLILC - Architectural Components
LLILC - Architectural Components
Exception Handling Model
The CLR EH model includes features beyond the C++ exception handling model. C# allows try{} and catch(){} clauses
as in C++, but also includes finally{} blocks. Additionally, there are compiler-synthesized exceptions that are
thrown for accessing through a null reference, accessing outside the bounds of a data type, arithmetic overflow,
and division by zero.
Ahead-of-Time (AOT) Compilation Driver
The AOT driver is responsible for marshalling resources for compilation. It loads the assemblies being compiled via the
Simplified Type System (STS) and then, for each method, invokes the MSIL reader to translate it to BitCode, with the results
emitted into object files. The resulting set of objects is then compiled together using the LLVM LTO facilities.
.NET Core
LLILC - Architectural Components
LLILC - Architectural Components
Simplified Type System
The Simplified Type System is a C++ implementation of an MSIL type loader. This component presents the driver and
code generator with an object and type model of the MSIL assembly.
Dependency Reducer (DR) and Generics
The DR and Generics support is still being fleshed out. They don't quite have a stake in the ground here yet.
.NET Core
LLILC - Architectural Components
LLILC - Architectural Components
.NET Core
Terminology
Terminology and Concepts
An interpreter for language X is a program (or a machine, or just some kind of mechanism in general) that executes any
program p written in language X such that it performs the effects and evaluates the results as prescribed by the
specification of X. CPUs are usually interpreters for their respective instruction sets, although modern high-performance
workstation CPUs are actually more complex than that; they may actually have an underlying proprietary
private instruction set and either translate (compile) or interpret the externally visible public instruction set.
A compiler from X to Y is a program (or a machine, or just some kind of mechanism in general) that translates any
program p from some language X into a semantically equivalent program p′ in some language Y in such a way that the
semantics of the program are preserved, i.e. that interpreting p′ with an interpreter for Y will yield the same results and
have the same effects as interpreting p with an interpreter for X. (Note that X and Y may be the same language.)
.NET Core
Ahead-of-Time (AOT) compiler
Ahead-of-Time (AOT)
The terms Ahead-of-Time (AOT) and Just-in-Time (JIT) refer to when compilation takes place: the "time" referred to in
those terms is "runtime", i.e. a JIT compiler compiles the program as it is running, an AOT compiler compiles the
program before it is running. Note that this requires that a JIT compiler from language X to language Y must somehow
work together with an interpreter for language Y, otherwise there wouldn't be any way to run the program. (So, for
example, a JIT compiler which compiles JavaScript to x86 machine code doesn't make sense without an x86 CPU; it
compiles the program while it is running, but without the x86 CPU the program wouldn't be running.)
Note that this distinction doesn't make sense for interpreters: an interpreter runs the program. The idea of an AOT
interpreter that runs a program before it is running, or a JIT interpreter that runs a program while it is running, is
nonsensical.
• AOT compiler: compiles before running
• JIT compiler: compiles while running
• Interpreter: runs the program
.NET Core
Roslyn
Roslyn
Through Roslyn, compilers become platforms: APIs that you can use for code-related tasks in your tools and applications.
It provides meta-programming, code generation and transformation, interactive use of the C# and VB languages, and
embedding of C# and VB in domain-specific languages.
Each phase of this pipeline is now a separate component:
1. The parse phase, where source is tokenized and parsed into syntax that follows the language grammar.
2. The declaration phase, where declarations from source and imported metadata are analyzed to form named symbols.
3. The bind phase, where identifiers in the code are matched to symbols.
4. The emit phase, where all the information built up by the compiler is emitted as an assembly.
.NET Core
Roslyn
Roslyn
Roslyn API Layers
Roslyn consists of two main layers of APIs – the
Compiler APIs and Workspaces APIs.
.NET Core
Roslyn Sample
Roslyn
using Microsoft.CodeAnalysis;
using Microsoft.CodeAnalysis.CSharp;
using Microsoft.CodeAnalysis.CSharp.Syntax;
Main Method:
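The actual sample code appears only as a screenshot in the original slide; the following is a minimal sketch of what such a Main method might look like, assuming the Microsoft.CodeAnalysis.CSharp NuGet package is referenced. It parses a small snippet into a syntax tree and lists the declared methods.

using System;
using System.Linq;
using Microsoft.CodeAnalysis;
using Microsoft.CodeAnalysis.CSharp;
using Microsoft.CodeAnalysis.CSharp.Syntax;

class Program
{
    static void Main()
    {
        // Parse phase: turn source text into a syntax tree.
        SyntaxTree tree = CSharpSyntaxTree.ParseText(@"
            class Greeter
            {
                void SayHello() { System.Console.WriteLine(""Hello""); }
                void SayGoodbye() { }
            }");

        // Walk the tree and print every declared method name.
        var root = (CompilationUnitSyntax)tree.GetRoot();
        foreach (var method in root.DescendantNodes().OfType<MethodDeclarationSyntax>())
        {
            Console.WriteLine(method.Identifier.Text);
        }
    }
}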
.NET Core
RyuJIT
RyuJIT
RyuJIT is the next-generation Just-In-Time (JIT) compiler for .NET. It uses a high-performance JIT architecture, focused on
high-throughput JIT compilation. It is much faster than JIT64, the previous 64-bit JIT that had been used for the preceding 10
years (introduced with the .NET 2.0 release in 2005). There was always a big gap in throughput between the 32- and 64-bit JITs.
That gap has been closed, making it easier to exclusively target 64-bit architectures or migrate workloads from 32- to 64-bit.
RyuJIT is enabled for 64-bit processes running on top of the .NET Framework 4.6. Your app will run in a 64-bit process if
it is compiled as 64-bit or AnyCPU (as long as "Prefer 32-bit" is not set) and runs on a 64-bit operating system. RyuJIT is
similarly integrated into .NET Core, as the 64-bit JIT.
The project was initially targeted to improve high-scale 64-bit cloud workloads, although it has much broader
applicability. We do expect to add 32-bit support in a future release.
.NET Core
Understanding C# Preprocessor Directives
Conditional Compilation
These directives are never actually translated into any commands in your executable code, but they affect aspects of the
compilation process.
Example: using preprocessor directives to prevent the compiler from compiling certain portions of code.
#define and #undef
#define tells the compiler that a symbol with the given name (for example DEBUG) exists. It is a little bit like
declaring a variable, except that this variable doesn't really have a value; it just exists. #undef removes the symbol.
#define DEBUG | #undef DEBUG
#define and #undef directives should be placed at the beginning of the C# source file, before any code that declares any
objects to be compiled.
.NET Core
C# Preprocessor Directives
Conditional Compilation
#if, #elif, #else, and #endif
int DoSomeWork(double x)
{
    // do something with x
#if DEBUG
    WriteLine($"x is {x}");   // assumes: using static System.Console;
#endif
    return 0;                 // return value added so the example compiles
}
#define ENTERPRISE
#define W10
// further on in the file
#if ENTERPRISE
// do something
#if W10
// some code that is only relevant to enterprise
// edition running on W10
#endif
#elif PROFESSIONAL
// do something else
#else
// code for the leaner version
#endif
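The symbols tested by #if do not have to be defined in source files; they can also be supplied at build time. As an illustrative note (not from the slides): the C# compiler accepts them via its /define option, and MSBuild-based projects set them through the DefineConstants property, for example:

csc /define:ENTERPRISE;W10 Program.cs

(Project.json-based tooling of the .NET Core 1.0 era configures compilation defines in its build options instead.)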
.NET Core
C# Preprocessor Directives
Conditional Compilation
#if, #elif, #else, and #endif
Further reading:
• Target Frameworks
• C# Preprocessor Directives
• #if (C# Reference) on MSDN
.NET Core
dotnet watch
dotnet watch
dotnet watch is a development-time tool that runs a dotnet command when source files change. It can be used to
compile, run tests, or publish when code changes.
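Typical invocations (assuming the dotnet-watch tool is installed for the project) look like the following:

dotnet watch run     (restarts the application whenever a source file changes)
dotnet watch test    (re-runs the test suite on every change)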
.NET Core
What is Dependency Injection?
Dependency Injection
Dependency injection (DI) is a technique for achieving loose coupling between objects and their collaborators, or
dependencies. Rather than directly instantiating collaborators, or using static references, the objects a class needs in
order to perform its actions are provided to the class in some fashion. Most often, classes will declare their
dependencies via their constructor, allowing them to follow the Explicit Dependencies Principle. This approach is known
as “constructor injection”.
This follows the Dependency Inversion Principle: "high-level modules should not depend on low-level modules; both should depend on abstractions."
When a system is designed to use DI, with many classes requesting their dependencies via their constructor (or
properties), it’s helpful to have a class dedicated to creating these classes with their associated dependencies. These
classes are referred to as containers, or more specifically, Inversion of Control (IoC) containers or Dependency Injection
(DI) containers.
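A minimal constructor-injection sketch (the interface and class names are illustrative, not from the slides):

public interface IMessageSender
{
    void Send(string to, string message);
}

public class EmailSender : IMessageSender
{
    public void Send(string to, string message)
    {
        // send an e-mail (details omitted)
    }
}

public class OrderService
{
    private readonly IMessageSender _sender;

    // The dependency is requested through the constructor, so OrderService
    // never instantiates a concrete sender itself.
    public OrderService(IMessageSender sender)
    {
        _sender = sender;
    }

    public void Confirm(string customerEmail)
    {
        _sender.Send(customerEmail, "Your order is confirmed.");
    }
}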
.NET Core
Dependency Injection vs. Inversion of Control
Dependency Injection
• Inversion of Control: a generic
term, implemented in several
ways (events, delegates, etc.).
• Dependency Injection: DI is a
subtype of IoC and is implemented by
constructor injection, setter injection,
or method injection.
.NET Core
Breaking changes in RC2:
Breaking changes in RC2
• Removed support for async/Task<> Main.
• Removed support for instantiating the entry point type (Program).
• The Main method should be public static void Main or public static int Main.
• Removed support for injecting dependencies into the Program class's constructor and Main method.
• Use PlatformServices and CompilationServices instead.
To get to IApplicationEnvironment, IRuntimeEnvironment, IAssemblyLoaderContainer,
IAssemblyLoadContextAccessor, ILibraryManager use
Microsoft.Extensions.PlatformAbstractions.PlatformServices.Default static object.
To get to ILibraryExporter, ICompilerOptionsProvider use the
Microsoft.Extensions.CompilationAbstractions.CompilationServices.Default static object.
• Removed support for CallContextServiceLocator. Use PlatformServices and CompilationServices instead.
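Under RC2 the entry point therefore goes back to the plain static form, for example:

public class Program
{
    public static void Main(string[] args)
    {
        // no constructor injection here any more; obtain services via
        // PlatformServices.Default / CompilationServices.Default instead
    }
}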
.NET Core
Strategy:
Strategy Pattern
The classes and objects participating in this pattern are:
• Strategy (SortStrategy): declares an interface common to all supported algorithms. Context uses this interface to call the
algorithm defined by a ConcreteStrategy
• ConcreteStrategy (QuickSort, ShellSort, MergeSort): implements the algorithm using the Strategy interface
• Context (SortedList)
• is configured with a ConcreteStrategy object
• maintains a reference to a Strategy object
• may define an interface that lets Strategy access its data.
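A minimal C# sketch of the participants listed above (the sort bodies are placeholders, not real sorting algorithms):

using System.Collections.Generic;

// Strategy
public interface ISortStrategy
{
    void Sort(List<string> list);
}

// ConcreteStrategy implementations
public class QuickSort : ISortStrategy
{
    public void Sort(List<string> list) => list.Sort();   // stand-in for a real quicksort
}

public class ShellSort : ISortStrategy
{
    public void Sort(List<string> list) => list.Sort();   // stand-in for a real shell sort
}

// Context
public class SortedList
{
    private readonly List<string> _items = new List<string>();
    private ISortStrategy _strategy;

    public void SetSortStrategy(ISortStrategy strategy) => _strategy = strategy;
    public void Add(string item) => _items.Add(item);
    public void Sort() => _strategy.Sort(_items);
}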
.NET Core
Strategy:
Strategy Pattern
Service Lifetimes and Registration Options
• Transient: Transient lifetime services are created each time they are requested. This lifetime works best for
lightweight, stateless services.
• Scoped: Scoped lifetime services are created once per request.
• Singleton: Singleton lifetime services are created the first time they are requested (or when ConfigureServices is run
if you specify an instance there) and then every subsequent request will use the same instance.
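In ASP.NET Core these lifetimes are chosen when services are registered in ConfigureServices; a minimal sketch (the service and implementation types other than ISortStrategy/QuickSort are illustrative):

public void ConfigureServices(IServiceCollection services)
{
    // a new instance every time the service is requested
    services.AddTransient<ISortStrategy, QuickSort>();

    // one instance per HTTP request
    services.AddScoped<IShoppingCart, ShoppingCart>();

    // a single instance shared for the lifetime of the application
    services.AddSingleton<IClock, SystemClock>();
}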
.NET Core
Demo
Demo