Azure DevOps Service Hook actions for Jenkins CI

When configuring a service hook in Azure DevOps for Jenkins CI, you have to choose an action: Trigger generic build or Trigger Git build.

Which one to choose?

Trigger generic build

This action requires you to choose a specific build. For a multibranch pipeline, that is a combination of Jenkins job and branch, like <job-name>/master.

This is fine for a freestyle job with only one branch under CI.

Trigger Git build

If you are building CI for feature branches, fixes and pull requests, this is the recommended action. The Jenkins Git plugin with the ‘Discover branches’ behaviour in a job takes care of the rest.

A single service hook with this action is enough for all repositories in a project.
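As far as I can tell, this action works through the Git plugin's notifyCommit endpoint: Jenkins finds every job watching the pushed repository and schedules polling / branch indexing for it. You can sanity-check the wiring by calling the endpoint by hand (a sketch; the host and repository URL are placeholders, and recent Git plugin versions also require an access token parameter):

curl "https://jenkins.example.com/git/notifyCommit?url=https://dev.azure.com/YourOrgName/YourProject/_git/YourRepo"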

Jenkins CI and Azure DevOps with git and NuGet feed

I was building a Jenkins pipeline last week and had to research how to build .NET code from an Azure DevOps Git repository using an Azure DevOps NuGet feed. Despite the promise of perfect harmony between Jenkins and Azure DevOps, there's no connector between Jenkins and Azure DevOps Artifacts. Integration also requires ongoing maintenance, because Personal Access Tokens (PATs) expire.

But if you really need it, here is how I implemented my Jenkins + Azure DevOps Artifacts and Git integration.

What credentials are needed?

You have to create three kinds of tokens:

  1. A Jenkins API token for the Azure DevOps service hook. It's used to trigger a Jenkins build after code is pushed. The Jenkins user owning this API token should have the permissions Overall (Read), Job (Build) and Job (Read).
  2. An Azure DevOps Personal Access Token (PAT) for accessing the Git repository from Jenkins. When you create this PAT, add the Code (Read) scope to it. Expiration can be set up to 1 year.
  3. An Azure DevOps PAT for accessing the NuGet feed. This PAT is created automatically by the Azure Artifacts Credential Provider with the Packaging (Read & write) scope. It expires after 3 months.

So, the first thing you probably have to do is create a Jenkins build user and temporarily keep its password, because you have to log in under that account to create the API token on the Jenkins side. I discourage you from using your personal account: if you leave the company, builds shouldn't stop working.

(Upd. 2019/08/21: see the fresh article to learn how to avoid the PAT from point 3.)

Make NuGet look for packages in Azure DevOps

Create NuGet.Config

In the root of a project, I create a NuGet.Config file with settings like:

<?xml version="1.0" encoding="utf-8"?>
<configuration>
    <packageSources>
        <clear /> <!-- ensure only the sources defined below are used -->
        <add key="YourFeedName" value="https://YourOrgName.pkgs.visualstudio.com/_packaging/YourFeedName/nuget/v3/index.json" />
    </packageSources>
</configuration>

You shouldn't have to care about NuGet configuration in the profiles of developers or build agents, and the easiest way to avoid that is to manage it as code.

Authorize with Azure Artifacts Credential Provider

Configuration alone is not enough; you also need authorization. You have to deploy the Azure Artifacts Credential Provider and authorize under the profile of the Jenkins build agent.

  1. Download the Azure Artifacts Credential Provider (Windows, Linux / Mac).
  2. Unpack it under %userprofile%\.nuget\ (Windows) or $HOME/.nuget/ (Linux / Mac).
  3. Execute: dotnet restore --interactive
  4. Go to https://microsoft.com/devicelogin, enter the code displayed by the previous CLI command, then authenticate.

Once authorization completes, a PAT with the Packaging (Read & write) scope is created for 3 months and stored under %localappdata%\MicrosoftCredentialProvider\ (Windows) or $HOME/.local/share/MicrosoftCredentialProvider/ (Linux / Mac).

(Upd. 2019/08/21: just download and unpack, don't execute dotnet restore --interactive. See the fresh article for a way to have only one PAT for Azure DevOps instead of two.)
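For reference, the credential provider also documents a non-interactive option that suits build agents: supplying a PAT through the VSS_NUGET_EXTERNAL_FEED_ENDPOINTS environment variable. A sketch for a Linux agent (the feed URL matches the NuGet.Config above; the username is arbitrary and the PAT value is a placeholder):

export VSS_NUGET_EXTERNAL_FEED_ENDPOINTS='{"endpointCredentials":[{"endpoint":"https://YourOrgName.pkgs.visualstudio.com/_packaging/YourFeedName/nuget/v3/index.json","username":"build","password":"<PAT>"}]}'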

Make Jenkins able to pull from Azure DevOps Git repository

  1. Manually create a PAT with the Code (Read) scope. You can set its expiration up to 1 year.
  2. On the Jenkins master, create credentials. As the login, use your Azure DevOps (Azure AD / Microsoft account) user login (whose PAT was created); as the password, use the PAT.
  3. When configuring the build project, use these credentials.

Create Jenkins build project

You have to do it before creating an Azure DevOps service hook.

Make Azure DevOps trigger Jenkins when code is pushed

  1. Log in to Jenkins under the build user account, which I urged you to create at the beginning of this article (you can reset its password any time).
  2. Create an API token (give it a meaningful name) and copy it to the clipboard.
  3. Go to Azure DevOps at /<project>/_settings/serviceHooks and create the hook you need.

(Upd. 2019/08/21: now, with Jenkins pipelines, I use the Trigger Git build service hook action instead of Trigger generic build. It doesn't require a trigger instance for every project; the build job should be a Multibranch Pipeline. A sketch of such a Jenkinsfile follows.)
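For illustration, a minimal Jenkinsfile for such a Multibranch Pipeline might look like this (a sketch; the agent and shell commands are assumptions, not taken from my setup):

pipeline {
    agent any
    stages {
        stage('Restore') {
            steps {
                // NuGet.Config in the repository root points restore at the Azure DevOps feed
                sh 'dotnet restore'
            }
        }
        stage('Build') {
            steps {
                sh 'dotnet build --no-restore --configuration Release'
            }
        }
    }
}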

Recap

  1. Create a Jenkins build user.
  2. Create NuGet.Config in the code repository.
  3. Configure NuGet authorization in the profile of a Jenkins build agent.
  4. Create PAT for Jenkins to pull from Azure DevOps Git repository.
  5. (Upd. 2019/08/21: not recommended) Create a PAT under the Jenkins build user account.
  6. Create a service hook in Azure DevOps.

(Upd. 2019/08/21: see the fresh article to learn how to avoid the PAT from point 5.)

Writing software requirements specifications

When I started in business application development, I wrongly believed that some general software requirements specification format could be applied to all kinds of business systems.

My research

I was looking for widely applicable standards, trying to:

  • Generalize integrators' specification formats that had been successfully applied to customize the business systems those integrators specialize in (and that I had encountered).
  • Adopt national standards like GOST (Russian).
  • Adopt international standards like ISO.

By this I mean I was searching for my own format that I could give to an analyst, or use myself, to standardize analysis work.

I made these attempts many times, I would say constantly.

Frequently, when I started to describe something to be developed, the particular situation made many parts of previously used formats (as well as GOST and ISO standards) irrelevant, because:

  1. Different systems use different terminology (that is used by users and developers).
  2. In a customization project, it makes no sense to write requirements that are inherent to the existing / purchased system. Every system determines its own zones where you have to make decisions (while other zones are closed to change, so there is nothing to describe).
  3. For better time and cost estimates, specific and important things should not be lost on developers inside a large specification that describes everything in a standard format.

Even when I had a software requirements specification format that I could apply to a system by excluding and adapting its parts, it didn't mean any analyst could do the same (practice has shown I can't expect that). To me, this shows how important an analyst's experience is.

In the end, there are very few things that can be standardized. Let's talk about them.

Top level

Let's classify software requirements specifications by two characteristics: (a) the level and (b) the kind of specification.

In terms of level, there are:

  1. High-level specifications, which allow you to start talking with developers. These are enough if you have an (internal) development team deeply versed in the company's technical and business context.
  2. Detailed specifications, which are highly recommended when you hire an external team or constantly rotate developers.

Different specification kinds are:

  1. Specifying modifications in a purchased or existing system.
  2. Describing completely new software to build.
  3. Specifying exchange formats.

It's nice to have a top-level document describing which existing systems will be touched and what software will be developed. For big solutions it's necessary, and it's called an “architectural document”.

The set of documents I tend to have in a project is:

  1. Business requirements (top level).
  2. Architectural document – no details, just an overview.
  3. Specifications.

The order in the list above matches the natural order in which those documents are born in a project.

In terms of project roles, the authors of those documents are:

  1. Business analyst for business requirements.
  2. IT architect for an architectural document.
  3. System analysts or software architects for specifications.

That’s all you can classify at the top level.

Specification level

A simple specification may contain sections:

1. Introduction.
2. Definitions.
3. Non-functional requirements.
4. Functional requirements.

A detailed specification for new software may include (my schema):

1. Introduction.
2. Definitions.
3. Non-functional requirements. It may include:
3.1. Runtime environment.
3.2. Managing source code.
3.3. Back-end requirements.
3.4. Front-end requirements.
3.5. Integration principles.
3.6. Logging and telemetry.
4. Access control and audit. It includes:
4.1. Authentication and authorization.
4.2. Permissions and roles.
4.3. Audit.
5. User interface.
5.1. Conception.
5.2. Sections and screen forms.
5.2.x. <Section name>
5.2.x.1. Section view.
5.2.x.2. Entity view.
5.2.x.3. Entity editing.
6. Model.
6.x. Object <Object Name>
6.x.1. Properties.
6.x.2. Requirements.
6.x. Enumeration <Enum Name>
7. Periodic jobs.
8. Integration procedures.
9. Reports.

The introduction must be verbose if an external team works on the project; it should describe the context and key ideas of the software product.

The largest parts of a specification are usually 5 and 6. In place of ‘x’ you substitute a new number for every new UI section (in part 5) or object / enum (in part 6).

Properties in 6.x.1 are usually described as a table with columns for the name, type and description of each property.
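For example, a properties table for a hypothetical Order object might look like:

Name       | Type              | Description
Number     | String            | Unique order number.
CreatedOn  | DateTime          | When the order was placed.
Status     | Enum OrderStatus  | Current state of the order.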

Good examples of requirements for sections 6.x.2 are:

  • The object must support <term from definitions>.
  • On assignment of the value X to property Y do <describe action>.
  • On saving an instance of the object do <describe action>.
  • Property Z is read-only.
  • Combination of values in properties X, Y, Z must be unique among instances of the object in the database.

Basically, requirements in 6.x.2 split into two kinds:

  1. Related to object behaviour.
  2. Related to the database.

Requirements related to the behaviour of an object must be unit-testable.
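To illustrate, here is a minimal sketch of how such a requirement maps to a unit test. The Order object, its members and the requirement itself are hypothetical, and xUnit is assumed:

using Xunit;

// Hypothetical domain object, just enough to express the requirement
// "on assignment of the value Cancelled to property Status, release reserved stock"
public enum OrderStatus { New, Cancelled }

public class Order
{
    private OrderStatus _status = OrderStatus.New;

    public int ReservedQuantity { get; private set; }

    public OrderStatus Status
    {
        get => _status;
        set
        {
            _status = value;
            if (value == OrderStatus.Cancelled)
                ReservedQuantity = 0; // the action triggered by the assignment
        }
    }

    public void ReserveStock(int quantity) => ReservedQuantity += quantity;
}

public class OrderRequirementsTests
{
    [Fact]
    public void SettingStatusToCancelled_ReleasesReservedStock()
    {
        var order = new Order();
        order.ReserveStock(5);

        order.Status = OrderStatus.Cancelled;

        Assert.Equal(0, order.ReservedQuantity);
    }
}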

Perhaps the hardest thing to describe is the user interface. But a classic UI section of business software may be described as:

5.2.x.1. Section view.
5.2.x.1.1. Layout.
5.2.x.1.2. Toolbar.
5.2.x.1.3. Filters.
5.2.x.1.4. Registry.
5.2.x.2. Entity view.
5.2.x.2.1. Layout.
5.2.x.2.2. Toolbar.
5.2.x.2.3. Form.
5.2.x.3. Entity editing.
5.2.x.3.1. Layout.
5.2.x.3.2. Toolbar.
5.2.x.3.3. Form.

Above the part “5.2. Sections and screen forms” I put “5.1. Conception”, illustrating the high-level UI layout and describing what a UI section and a toolbar are, and where the view / editing area is.

In the case of modifications to purchased or existing software, a specification may range from a flat list of requirements to something similar to the detailed format described above. It may be:

1. Introduction
2. Definitions
3. Model changes
3.x. Object <Object Name>
3.x.1. Properties.
3.x.2. Requirements.
3.x. Enumeration <Enum Name>
4. Periodic jobs.
5. Integration procedures.
6. Reports.

In systems that have forms bound to objects (CRUD design), in part “3.x.2. Requirements” I put three kinds of functional requirements:

  1. Related to UI behaviour.
  2. Related to object behaviour.
  3. Related to the database.

Always write an exchange format specification in a separate document, because it relates to at least two systems.

That's all you can classify at the specification level.

Best effort

Over the last 20 years, the IT world has been moving from guarantees for everything to best effort across the whole stack: from strong to eventual consistency, from DSL to Ethernet, and so on.

Across the industry there is less dedicated and more shared resource usage. It may seem like the road to hell, but it is akin to the evolution from turtles to social animals.

Take a look at how everything has changed:

1. The world of monolithic ERPs on ACID-compliant databases, with logic close to data. Strong consistency everywhere. On its way out.
2. The SOA world with integrated applications, with logic abstracted from data. Eventual consistency between big blocks.
3. Microservices with CQRS / ES. Eventual consistency between small blocks. This is the trend.

It seems like we are getting fewer guarantees from our systems, which sounds bad, but in fact we are getting more reliability. Sounds strange, doesn't it?

Evolution in nature has passed along a similar path: from exoskeletons and giant creatures to social structures, where every single creature has fewer guarantees of surviving by itself, but the whole system becomes more flexible, adaptive and therefore reliable.

That’s where we go in IT.

“It works fast” vs “it has an ability to serve”

Many developers hold a strong belief that faster is better. It's not a universal truth.

The fact that you can achieve faster execution in a development environment under moderate workload doesn't mean you gain more capacity, more ability to serve users. You may be faced with:

  1. Resource contention in parallel execution.
  2. A piece of technology that doesn't scale.
  3. A higher cost to scale with the technology that works fast than with the technology that works well enough.

Basically, speed is not a “the more the better” quality aspect for customers. Speed has to be good enough, or more than enough to some degree.

Frequently it is capacity, not speed, that limits business growth.
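A toy illustration (all numbers are made up): a handler that responds in 50 ms but serializes on a global lock serves at most 1 / 0.05 = 20 requests per second no matter how many cores you add, while a 200 ms handler that runs fully in parallel on 16 cores serves about 16 / 0.2 = 80 requests per second. The “slower” technology has four times the capacity.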

Unconscious defines the path of our life

If one likes something, one finds reasons to be involved with it. Vice versa, if one irrationally dislikes a subject, he or she will throw away whatever rational proofs there may be.

Initially, we have only two points we must pass: the beginning and the end of our life. While we have no other points, our behaviour depends completely on what we natively like. We tend to do what we like, then learn more about it and become professionals. Yes, we now have a lot of points, but they are all close to what we initially liked in the space of life as we felt it, probably, in our childhood.

Other people constantly suggest we include their points in our path. Technologies, products… Sometimes we resist, sometimes we instantly take their points. And frequently it's a matter of how much we like the person who suggests, not the subject.

This is why the presentation of any idea is so important.

Mixing WS-Federation and Windows Authentication in IIS

Imagine you have an ASP.NET web application in IIS accessed by:

  • / — everyone
  • /orders — customers, authenticated with federated SSO
  • /admin — personnel, authenticated with Active Directory

How to configure this?

Ugly solution

For this kind of auth mixing, the internet suggests the following algorithm:

  1. In the controller of /admin, if the client is not Windows-authenticated, return a response with status code 418; otherwise do the normal job.
  2. In the Application_EndRequest() method in global.asax.cs, if the status code is 418, change it to 401.2 to provoke a challenge by the next authentication module in the pipeline.

Simple, but:

  1. Windows authentication performed as a result of such a 401 substitution will be NTLM, never Kerberos.
  2. It doesn't consider the case of a user walking from a personnel URL to a customer URL and back while having the customer's fed-auth cookie and Kerberos / NTLM authentication at the same time. How would the principal corresponding to the URL be reconstructed?
  3. It relies on a fake status code instead of some kind of authorization exception or filter attribute, which spoils the structure of the code.
  4. It relies on the order of EndRequest event handling in the pipeline, and there are no documented guarantees about that order.

I consider this double status code substitution a dirty hack.

Correct solution

For general understanding, read my IIS internals article, especially the final parts about federated authentication.

Create a configuration section holding the collection of URLs (admin URLs used by personnel) for which you skip federated authentication and go for Windows authentication:

<federationAuthenticationExclusions>
    <items>
      <add url="/admin" />
      <add url="/debug" />
    </items>
</federationAuthenticationExclusions>

Of course, you need to back this section with section / collection / item classes in code.
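Here is a minimal sketch of those classes (an assumption of their shape, matching how the modules below use them; the section must also be registered under <configSections> in Web.config):

using System.Configuration;

namespace WebApplication1.Configuration.FederationAuthenticationExclusions
{
    // maps the <federationAuthenticationExclusions> section
    public class Section : ConfigurationSection
    {
        public static Section Default =>
            (Section)ConfigurationManager.GetSection("federationAuthenticationExclusions");

        [ConfigurationProperty("items")]
        public ItemCollection Items => (ItemCollection)this["items"];
    }

    // maps the <items> collection of <add url="..." /> elements
    [ConfigurationCollection(typeof(Item))]
    public class ItemCollection : ConfigurationElementCollection
    {
        protected override ConfigurationElement CreateNewElement() => new Item();

        protected override object GetElementKey(ConfigurationElement element) => ((Item)element).Url;
    }

    // maps a single <add url="..." /> element
    public class Item : ConfigurationElement
    {
        [ConfigurationProperty("url", IsKey = true, IsRequired = true)]
        public string Url => (string)this["url"];
    }
}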

Extend WSFederationAuthenticationModule to ignore handling of status code 401 for admin URLs:

using System;
using System.Collections.Generic;
using System.IdentityModel.Services;
using System.Linq;
using System.Web;
using WebApplication1.Configuration.FederationAuthenticationExclusions;

namespace WebApplication1.Modules
{
    public class FederationAuthenticationModule : WSFederationAuthenticationModule
    {
        protected override void OnEndRequest(object sender, EventArgs args)
        {
            HttpApplication httpApplication = (HttpApplication)sender;

            // skip federated authentication if the URL is listed in the federationAuthenticationExclusions section of Web.config
            foreach (Item item in Section.Default.Items)
            {
                if (httpApplication.Request.RawUrl.StartsWith(item.Url, StringComparison.InvariantCultureIgnoreCase))
                    return;
            }
            base.OnEndRequest(sender, args);
        }
    }
}

Extend SessionAuthenticationModule not just to skip admin / personnel URLs, but also to clean up httpApplication.Context.User if a Windows-authenticated user visits federated URLs:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Security.Principal;
using System.Threading;
using System.Web;
using WebApplication1.Configuration.FederationAuthenticationExclusions;

namespace WebApplication1.Modules
{
    public class SessionAuthenticationModule : System.IdentityModel.Services.SessionAuthenticationModule
    {
        protected override void OnAuthenticateRequest(object sender, EventArgs eventArgs)
        {
            HttpApplication httpApplication = (HttpApplication)sender;

            // skip federated authentication if the URL is listed in the federationAuthenticationExclusions section of Web.config
            foreach (Item item in Section.Default.Items)
            {
                if (httpApplication.Request.RawUrl.StartsWith(item.Url, StringComparison.InvariantCultureIgnoreCase))
                    return;
            }

            IIdentity identity = Thread.CurrentPrincipal.Identity;

            // in the case of federated authentication (URL not in the exclusions):
            // if the user is authenticated, but not by federated authentication, reset the authentication
            if (identity.IsAuthenticated && identity.AuthenticationType != "Federation")
            {
                httpApplication.Context.User = null;
            }

            // if the user is not authenticated, try to authenticate as usual by the FedAuth cookie
            base.OnAuthenticateRequest(sender, eventArgs);
        }
    }
}

If a user goes to an admin / personnel URL and then to a customer page, the session module will try to reconstruct the principal from the fed-auth cookie as usual.

If a user goes to a customer page and then to an admin / personnel URL, the session module will skip the reconstruction, and normal Windows authentication will happen.
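Finally, register the extended modules in Web.config in place of the stock ones (a sketch; the registered module names and the assembly name are assumptions for this example):

<system.webServer>
  <modules>
    <!-- replace the stock WS-Federation modules with the extended ones -->
    <remove name="WSFederationAuthenticationModule" />
    <add name="WSFederationAuthenticationModule"
         type="WebApplication1.Modules.FederationAuthenticationModule, WebApplication1"
         preCondition="managedHandler" />
    <remove name="SessionAuthenticationModule" />
    <add name="SessionAuthenticationModule"
         type="WebApplication1.Modules.SessionAuthenticationModule, WebApplication1"
         preCondition="managedHandler" />
  </modules>
</system.webServer>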

You can get a working example at https://github.com/dmlarionov/IISMixedAuthExample.

IIS internals

This is not about Kestrel and .NET Core. It's an article about classic things: Windows, IIS and the ASP.NET pools inside it. I just bring it all together from official sources, posts by respected authors, my personal observations and scrutiny of Microsoft code.

Windows and IIS are optimized to use hardware. Keeping a TCP connection alive can be offloaded to a server-class network card (they have their own TCP/IP stack support), while the HTTP protocol is processed in a kernel-mode driver (HTTP.sys). This dramatically reduces the number of interrupts and context switches, leaving more CPU power to run your code.

It means that a request is passed from the kernel directly to w3wp.exe (the executable representing an application pool and running under its credentials). There is only one transition from kernel to user mode (in CPU terms), straight to the user the application runs under.

Some of the security is implemented in the kernel, interfacing (through SSPI) with both the kernel and w3wp.exe.

Look at the picture:

First, pay attention to the user-mode part. Each combination of “w3wp.exe native code” (green rectangles) and “Managed code” (inside it) is a single process from the OS perspective. Managed code is loaded by the Common Language Runtime (itself a native DLL) inside the w3wp.exe address space.

The NTLM, Kerberos and Negotiate protocols are implemented by Security Support Providers in kernel mode. Negotiate is not an authentication protocol itself; it's a module used to negotiate one, Kerberos or NTLM. The only way to enable Kerberos in IIS is to enable Negotiate for Windows Authentication.

For the Kerberos protocol to work, there must be a key belonging to a service account to decrypt the service ticket (issued by the KDC for a client). Which service account the key belongs to depends on configuration:

  1. If kernel-mode authentication is enabled (the default), it is SYSTEM ({MACHINE}$ from the perspective of Active Directory).
  2. If kernel-mode authentication is disabled, it is the application pool account. In this case, you need to register an SPN (Service Principal Name) for the domain user the application pool runs under, as shown below.
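For illustration, registering an SPN for an application pool account looks something like this (host and account names are hypothetical):

setspn -S HTTP/app.contoso.com CONTOSO\svc-apppool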

For Kerberos and SPN consult this article.

For more about the request processing pipeline, read the ASP.NET Application Life Cycle Overview for IIS 7.0 (old but informative) or the source code of HttpApplication, HttpRuntime and related classes in the .NET Framework Reference Source.

Request processing and authentication

For every request, w3wp.exe takes an OS thread (a native thing) and associates a pooled instance of the HttpApplication class (managed) with it. You can watch these instances as a counter in Performance Monitor to see how many requests were being processed in parallel a moment before.

Windows authentication

A fresh w3wp.exe OS thread doesn't have the correct Windows user authentication token attached to it, and such a token can be attached only in native code. Therefore, Windows authentication is processed by two modules:

  1. The native WindowsAuthenticationModule (Inetsrv\Authsspi.dll), which works with SSPI to authenticate, holds a session and attaches the user authentication token to the OS thread.
  2. The managed WindowsAuthenticationModule, which recreates the principal in .NET (based on the token of the thread).

Federated and forms authentication

Whatever the protocols and regardless of whether SSPI is used, from the application's perspective authentication is the presence of trusted information about the principal. In ASP.NET, such information is expected in HttpContext.Current.User and Thread.CurrentPrincipal.

The federated and forms authentication modules are not related to SSPI at all. They both:

  1. Handle the AuthenticateRequest event and recreate the .NET principal. The authenticate event is not the moment the user types a password (that is the challenge response); it's a pipeline step executed for each request to recreate the principal in .NET before going further.
  2. Handle the EndRequest pipeline event and challenge the user with authentication if the response to be sent has a 401 status code.

The federated and forms authentication modules work slightly differently in terms of attaching the principal to the expected places:

  1. The federated authentication modules assign the constructed principal to HttpContext.User, then directly to Thread.CurrentPrincipal.
  2. The forms authentication module does the same for HttpContext.User, but then performs some magic when assigning Thread.CurrentPrincipal; I have read the module code but didn't fully understand it (and that magic fails to happen in 0.001% of cases, see Scott Hanselman).

I said “federated authentication modules” (plural) deliberately. In contrast to the forms module, there are two managed modules for federated auth:

  1. WSFederationAuthenticationModule, which takes care of the token in the request after redirection back from the authentication service (the STS in WS-Federation terms) and redirects to the STS when needed. It recreates the principal from the token.
  2. SessionAuthenticationModule, which creates a fed-auth cookie to hold the authentication during a session. It recreates the principal from that cookie.

Both federated modules have to be added explicitly in Web.config, while the forms module is added automatically by IIS if forms authentication is configured.

Mixing and tuning authentication

The authenticate event handlers in all Microsoft modules respect an existing user in HttpContext: if HttpContext.User is not null, a module skips its work, because the user has already been authenticated by another module.

You can't strictly control the order in which modules handle the EndRequest event. If you combine something with Windows authentication, I guess (and experiments confirm) the native Windows authentication module handles status code 401 last.

How you can possibly alter authentication behaviour:

  1. The managed Windows authentication module class is sealed and has no events that could help you hook into it. But other modules sit in the IIS pipeline after it on ingress and before it on egress, so you can manipulate users and challenges in those modules.
  2. The forms module has events you can use to alter its behaviour.
  3. The federated authentication module classes are open for extension, and since they are registered explicitly, you can plug in your own altered modules to handle WS-Federation instead of the original ones.

See the example in Mixing WS-Federation and Windows Authentication in IIS.

Types of a project outcome

Let's talk about ways of classifying a project outcome (not very seriously).

1. Project to earn money

This category contains projects with a direct impact on revenue, profit or cost: for instance, a change to more productive technology, or cutting expenses.

The outcome is estimated in money.

2. Project to mitigate some risk

Projects about better manageability also fall into this category. For example, improving release or incident management in IT.

Lower risk (probability × possible impact) should be the outcome of this kind of project.

3. Project to gain more loyalty

This should be a project about customer loyalty. I believe that quality perceived by the customer also converts into loyalty. That's why quality projects (reducing the number of failures or the percentage of defects, making a service faster, making the customer journey easier) fall into this category.

Value can be measured in money based on correlations like NPS to churn and churn to revenue decline.

4. Project for passion and pleasure

The outcome cannot be measured. You are, perhaps, engaged in someone else's loyalty project (type 3). But anyway, this is probably the most valuable sort of project.