How to Configure Portainer CE with Entra ID for OAuth Authentication

By default, Portainer CE doesn't support Entra ID (formerly Azure AD) for SSO.
That's mostly because CE is aimed at non-commercial use, but since I run a private Microsoft 365 tenant of my own, I wanted to use Entra ID authentication anyway.

In this guide, I'll show you how to configure Entra ID sign-in using Portainer's custom OAuth provider, since it wasn't a breeze to figure out myself.


Why Use Entra ID with Portainer CE?

  • Single Sign-On (SSO): Use Entra ID credentials to log in to Portainer.
  • Enhanced Security: Enforce policies such as multi-factor authentication (MFA) via Entra ID.
  • Simplified User Management: Centralize access control through your existing Entra ID setup.

Prerequisites

  1. A running instance of Portainer CE (version 2.9 or later).
  2. An Entra ID tenant (part of a Microsoft 365 or Azure subscription).
  3. Administrative privileges on both Entra ID and Portainer CE.

Step 1: Register an App in Entra ID

  1. Log in to the Entra ID Portal:
  • Sign in to the Microsoft Entra admin center (or the Azure portal) as an administrator.
  2. Create a New App Registration:
  • Go to Microsoft Entra ID > App Registrations > + New Registration.
  • Provide a name for the app (e.g., Portainer OAuth).
  • Set Supported Account Types:
    • Single Tenant (if only your organization will use Portainer).
  • Add a Redirect URI:
    • Type: Web
    • URI: https://<your-portainer-url>
    • Replace <your-portainer-url> with your Portainer CE domain or IP address. (HTTPS is required for SSO.)
  • Click Register.
  3. Save the Key Details:
  • After registration, copy:
    • Application (client) ID
    • Directory (tenant) ID

Step 2: Configure Permissions in Entra ID

  1. Add API Permissions:
  • Go to API Permissions > + Add a permission.
  • Select Microsoft Graph > Delegated Permissions.
  • Add:
    • openid
    • profile
    • email
  • Click Grant admin consent to apply permissions for all users.
  2. Create a Client Secret:
  • Go to Certificates & Secrets > + New Client Secret.
  • Add a description (e.g., Portainer OAuth).
  • Set an expiration period (e.g., 12 months).
  • Save the Client Secret value. You’ll need it for Portainer.

Step 3: Configure Custom OAuth in Portainer CE

  1. Log in to Portainer:
    Access your Portainer CE instance as an administrator.
  2. Navigate to Authentication Settings:
  • Go to Settings > Authentication.
  • Select the Custom OAuth provider.
  3. Enter the Entra ID OAuth Details:
    Use the following settings based on your configuration:
  • Client ID: <Your Application (client) ID>
  • Client Secret: <Your Client Secret>
  • Authorization URL: https://login.microsoftonline.com/<Your-Tenant-ID>/oauth2/v2.0/authorize
  • Access Token URL: https://login.microsoftonline.com/<Your-Tenant-ID>/oauth2/v2.0/token
  • Resource URL: https://graph.microsoft.com/v1.0/me
  • Redirect URL: https://<your-portainer-url>
  • Logout URL: https://login.microsoftonline.com/<Your-Tenant-ID>/oauth2/v2.0/logout
  • User Identifier: userPrincipalName
  • Scopes: openid profile
  • Auth Style: In Params
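To avoid copy-paste mistakes, note that the three Microsoft endpoint URLs follow the same pattern and only differ in the final path segment. A small sketch that derives them from a single tenant ID (the tenant ID shown is a placeholder; use your own Directory (tenant) ID):

```python
# Build the Entra ID v2.0 endpoint URLs that Portainer's custom OAuth
# provider needs, starting from a single tenant ID.
TENANT_ID = "00000000-0000-0000-0000-000000000000"  # placeholder

BASE = f"https://login.microsoftonline.com/{TENANT_ID}/oauth2/v2.0"

settings = {
    "Authorization URL": f"{BASE}/authorize",
    "Access Token URL": f"{BASE}/token",
    "Logout URL": f"{BASE}/logout",
    "Resource URL": "https://graph.microsoft.com/v1.0/me",  # fixed, tenant-independent
}

for field, url in settings.items():
    print(f"{field}: {url}")
```

Everything except the tenant ID is identical for every Entra ID tenant, so double-check that part first when sign-in fails.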

Step 4: Test the Integration

  1. Log out of Portainer and access the login page.
  2. You should see the OAuth login option.
  3. Authenticate using your Entra ID credentials.
  4. If successful, you will be redirected to Portainer's dashboard. Don't forget to grant the account permissions afterwards; Portainer CE can't automatically add an OAuth user to a team, so you have to assign access manually.

Common Issues and Troubleshooting

  1. Unauthorized Error:
  • Ensure the Auth Style is set to In Params.
  2. Redirect URI Mismatch:
  • Ensure the Redirect URI in Portainer exactly matches what is configured in Entra ID; do not append oauth/callback as some guides suggest.
  3. Missing Claims:
  • Add optional claims in Entra ID:
    • Go to Token Configuration > + Add optional claim.
    • Add the following claims for the ID token:
      • email
      • name
      • upn (User Principal Name)
  4. Token Validation Errors:
  • Ensure openid, profile, and email scopes are properly configured and granted admin consent.

Conclusion

Integrating Portainer CE with Entra ID provides a secure and centralized authentication solution for your containerized environments. By leveraging OAuth, you can streamline user access, enforce MFA, and manage access control directly from Entra ID.

How AI can finally drive the chip industry to new heights

While everyone talks about what AI can do, less attention is paid to how AI drives (much-needed) innovation in the chip industry.
For years there was a well-known duopoly in the PC industry: AMD and Intel ruled the PC landscape.
There simply was no better alternative, and chip manufacturers like Qualcomm had no incentive to invest in the PC industry, given the lack of innovation on the platform (Windows) and the stagnant growth of the PC market.

AI-driven innovations offer new opportunities to revive Windows on ARM, a platform that was not successful in the past. This article compares the Qualcomm Snapdragon X Elite and Snapdragon X Plus with current chips from Intel and AMD, examines the challenges of previous Windows on ARM implementations, and discusses why these new innovations could make Windows on ARM successful.

The History of Windows on ARM

Windows RT was Microsoft’s first serious attempt to run Windows on ARM chips, launched in 2012 alongside the Surface RT tablet. It aimed to combine the energy efficiency of ARM with the power of Windows. Unfortunately, it suffered from a lack of compatibility with traditional x86 applications. Users could only run apps from the Windows Store, severely limiting usability. As a result, Windows RT was poorly received and eventually discontinued.

Windows 10 on ARM was Microsoft’s next attempt to run Windows on ARM chips. Initially, only x86 emulation was possible, meaning older 32-bit Windows applications could run, but performance was disappointing due to the emulation layer. Support for x64 emulation was added later, allowing 64-bit applications to run. Despite these improvements, performance still did not meet expectations because ARM processors were seen as ‘fast phone processors,’ inadequate for full-fledged laptops.

Qualcomm Snapdragon X Elite and X Plus

The Qualcomm Snapdragon X Elite and Snapdragon X Plus are the latest and most advanced ARM chips on the market, designed with a strong focus on AI performance and energy efficiency. Qualcomm specifically invested in developing these chips to address the shortcomings of previous ARM chips for Windows devices. The need to meet the requirements of AI-driven applications, such as Microsoft’s Copilot+, was a key driver for this investment.

Both chips offer advanced support for x86 and x64 emulation. Thanks to the new “Prism” emulation layer in Windows 11 on ARM, these chips significantly improve the performance of emulated applications. This means x86 and x64 applications run on ARM devices with minimal performance loss, crucial for the usability of Windows on ARM.

Comparison with Current Chips from Intel and AMD

Current x64 processors from Intel and AMD show different levels of AI performance:

  • Intel’s Meteor Lake: Offers up to 10 TOPS of AI performance, which does not meet the requirements for Microsoft’s Copilot+ certification.
  • AMD’s Ryzen 8040 series: Offers up to 16 TOPS of AI performance, also insufficient for Copilot+.

Competition and Innovation in the Chip Market

The introduction of the Snapdragon X Elite and X Plus could significantly accelerate competition in the chip market. Currently dominated by Intel and AMD, the PC chip market might see a broader range of manufacturers in the future. This could be similar to the smartphone market, where multiple companies compete with their own chips.

For example, Samsung has its Exynos processors, Google develops Tensor chips for its Pixel phones, and other major players in the mobile chip industry like MediaTek could also enter the PC market.

This diversification could lead to faster innovations, better performance, energy efficiency, and more choices for consumers.

Microsoft Copilot+ Certification

To be certified for Microsoft’s Copilot+, PCs must deliver at least 40 TOPS of AI performance. Many current x64 chips, such as Intel’s Meteor Lake and AMD’s Ryzen, do not meet this requirement. The Snapdragon X Elite and Snapdragon X Plus, with 45 TOPS for the NPU, do meet this standard and are currently the only chips that do.

Why Windows on ARM Can Succeed Now

The combination of improved performance and energy efficiency makes the Snapdragon X Elite and X Plus game-changers for Windows on ARM. With the ability to efficiently run AI-driven applications like Microsoft’s Copilot+, these chips meet the high performance standards needed for modern computers. Qualcomm and Microsoft have developed a powerful emulation layer that ensures x86 and x64 applications run smoothly on ARM devices, enhancing the usability and acceptance of Windows on ARM.

Conclusion

Despite past failures, the current generation of AI-optimized chips from Qualcomm offers new hope for Windows on ARM. The Snapdragon X Elite and X Plus outperform current x64 chips from Intel and AMD in AI performance. These investments in AI and chip innovation could ensure that Windows on ARM not only meets performance standards but also remains energy efficient, potentially leading to broader acceptance and success of the platform.

Automating Windows Server Environment Inventory with PowerShell

As IT administrators managing complex Windows Server environments, we are often tasked with keeping track of various server configurations, services, roles, and other essential aspects of our infrastructure. Manual tracking and documentation can be time-consuming and error-prone, which is why automating the inventory process is an excellent solution. In this blog post, we’ll introduce a comprehensive PowerShell script that automates the collection of critical data about your Windows Server environment and exports the information into organized CSV files for easy analysis.

Introducing the Windows Server Environment Inventory Script

The Windows Server Environment Inventory PowerShell script is designed to help administrators efficiently gather vital information about their infrastructure. The script consolidates data about servers, services, roles, shares, SMB connections, and certificates, allowing you to keep an eye on your environment and identify potential issues quickly.

Key Features
  • Comprehensive inventory of Windows Server environments
  • Collection of server details, connectivity, services, scheduled tasks, roles, shares, SMB connections, and certificates
  • Export of inventory data into organized CSV files for easy analysis and reporting
  • Automation of data collection process to improve efficiency and accuracy

Requirements and Setup

Before running the script, make sure you have the following requirements in place:

  • PowerShell 5.1 or later
  • Active Directory PowerShell Module
  • Appropriate permissions to query Active Directory, remote servers, and export CSV files

To get started, simply clone or download the project files to a local directory and ensure the Active Directory PowerShell Module is installed on your system.

Running the Script

Open a PowerShell session with administrative privileges and navigate to the directory containing the script. Execute the script by running the following command:

PS C:\> .\InventoryWindowsServerEnvironment.ps1

The script will collect information about Windows Servers in your Active Directory environment and generate multiple CSV files in the same directory as the script. These files can be opened and analyzed using spreadsheet software or other data analysis tools.

Analyzing the Results

The generated CSV files provide comprehensive information about your Windows Server environment, including:

  1. Server connectivity details (export-connectivityreport.csv)
  2. Services running on each server (export-services.csv)
  3. Scheduled tasks on each server (export-scheduledtasks.csv)
  4. Installed roles and features on each server (export-installroles.csv)
  5. File shares and file servers (export-fileshares.csv and export-fileservers.csv)
  6. SMB connections on each server (export-smbconnections.csv)
  7. Certificates from the Computer Personal store on each server (export-personalcertificates.csv)

You can use this data to monitor and manage your Windows Server environment, identify potential issues related to connectivity, services, roles, or certificates, and ensure your infrastructure is running optimally.
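Because the exports are plain CSV, they are easy to post-process. As an illustration, here is a short Python sketch that filters unreachable servers from the connectivity report; the ServerName and Reachable column names are assumptions for the example, so check the actual header row of your export:

```python
import csv
import io

# Sample rows standing in for export-connectivityreport.csv;
# the ServerName/Reachable columns are assumed, not guaranteed.
sample = """ServerName,Reachable
SRV-DC01,True
SRV-FILE01,False
SRV-APP02,True
"""

reader = csv.DictReader(io.StringIO(sample))
unreachable = [row["ServerName"] for row in reader if row["Reachable"] != "True"]

print("Unreachable servers:", unreachable)  # → ['SRV-FILE01']
```

In practice you would open the exported file with `open("export-connectivityreport.csv")` instead of the inline sample.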

Conclusion

Automating the Windows Server environment inventory process with PowerShell is a powerful way to improve efficiency, accuracy, and maintainability. The Windows Server Environment Inventory script simplifies data collection, enabling you to focus on analyzing the results and addressing any issues that arise. By leveraging this script, you can keep your finger on the pulse of your infrastructure and ensure a robust and reliable Windows Server environment.

Download it at:
Windows Server Environment Inventory from Azure DevOps

Azure Bicep vs. Terraform: Comparing the Two Infrastructure as Code Tools

Infrastructure as Code (IaC) is becoming increasingly popular in today’s cloud computing landscape. It allows organizations to define and manage cloud resources in a declarative way, automating the process of resource deployment and configuration. Two popular IaC tools for Azure are Azure Bicep and Terraform. While they share some similarities, they also have several key differences.

Language and Syntax
Azure Bicep is a domain-specific language (DSL) designed specifically for Azure resources. It uses a concise, declarative syntax that is easy to read and write. In contrast, Terraform uses its own language, the HashiCorp Configuration Language (HCL). HCL is broader in scope than Azure Bicep, as it can be used to define resources across multiple cloud providers, including Azure.

Resource Coverage
Azure Bicep provides more comprehensive coverage of Azure resources, as it is designed specifically for the Azure platform. It has built-in support for Azure resources, including the latest features and services. In contrast, Terraform offers support for multiple cloud providers, including Azure, AWS, and Google Cloud Platform, but its coverage of Azure resources may not be as comprehensive as Azure Bicep's.

Deployment and Management
Azure Bicep integrates directly with Azure Resource Manager (ARM), which allows for easier deployment and management of resources. It also provides built-in support for features such as template validation and parameterization. Terraform, on the other hand, uses its own deployment engine and can be used to deploy resources to multiple cloud providers. It also provides a wide range of plugins and modules that allow for more advanced deployment and management scenarios.

Learning Curve
Azure Bicep has a smaller learning curve than Terraform, as it is specifically designed for Azure resources and uses a simpler syntax. This makes it easier for developers and IT professionals who are new to IaC to get started. Terraform, on the other hand, has a steeper learning curve due to its more complex language and wider range of capabilities.

Community and Ecosystem
Terraform has a larger and more active community than Azure Bicep. This means that there are more resources, tutorials, and support available for Terraform users. Terraform also has a wider range of third-party plugins and modules that extend its capabilities. However, Azure Bicep is growing in popularity and has an active community of developers contributing to its development.

In conclusion, both Azure Bicep and Terraform are powerful IaC tools with their own strengths and weaknesses. Azure Bicep is specifically designed for Azure resources, has a simpler syntax, and integrates directly with ARM. Terraform, on the other hand, is more flexible, has a wider range of capabilities, and can be used to manage resources across multiple cloud providers. The choice between these two tools will depend on your specific requirements and preferences.

High level step-by-step guide building an Azure Virtual Desktop environment using Azure Bicep

Azure Virtual Desktop is a cloud-based virtual desktop infrastructure (VDI) service that enables users to access their Windows desktops and applications from anywhere. Azure Virtual Desktop is an excellent solution for enterprises that want to provide their employees with secure and reliable remote access to their desktops and applications. In this blog post, we will discuss how to build an entire Azure Virtual Desktop environment using Azure Bicep.

Step 1: Set up your environment

Before you can create the Azure Virtual Desktop environment, you need to ensure that you have the necessary prerequisites, including an Azure subscription, a virtual network, and a Windows Active Directory domain. You can create a virtual network and Windows Active Directory domain in Azure using Azure Bicep.

Step 2: Define your infrastructure in Azure Bicep

Using Azure Bicep, you can define your infrastructure as code. You can create a Bicep file that defines your virtual network, virtual machines, storage accounts, and other necessary resources for your Azure Virtual Desktop environment. You can use the following resources in your Bicep file:

  • Virtual network
  • Subnet
  • Network security group
  • Storage account
  • Virtual machine
  • Windows Active Directory domain

The Bicep file should define the dependencies between these resources, so that Azure can deploy them in the correct order.

Step 3: Create a deployment script

Once you have defined your infrastructure in Azure Bicep, you need to create a deployment script that will deploy your infrastructure to Azure. This script will use the Azure CLI to deploy your infrastructure using the Bicep file you created. You can use the following commands to create and deploy the Azure Virtual Desktop environment:

  • az deployment group create: This command creates a new deployment for the specified resource group and deploys the Bicep file.
  • az deployment group validate: This command validates the Bicep file and checks for syntax errors.

You can also use Azure DevOps or other deployment tools to automate the deployment of your Azure Virtual Desktop environment.
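The two commands above can be wrapped in a small script so that validation always runs before deployment. A minimal Python sketch; the resource group and template file names are placeholders, and running the printed commands assumes the Azure CLI is installed and you are logged in:

```python
import shlex

RESOURCE_GROUP = "rg-avd-demo"  # placeholder
TEMPLATE_FILE = "main.bicep"    # placeholder

def az_deployment(action: str) -> list[str]:
    """Build the `az deployment group <action>` command line."""
    return [
        "az", "deployment", "group", action,
        "--resource-group", RESOURCE_GROUP,
        "--template-file", TEMPLATE_FILE,
    ]

# Validate first, then create; print the commands so they can be
# reviewed, or executed via subprocess.run(..., check=True).
for action in ("validate", "create"):
    print(shlex.join(az_deployment(action)))
```

Running validate first catches syntax and schema errors before anything is actually deployed to the resource group.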

Step 4: Install and configure Azure Virtual Desktop components

Once your infrastructure is in place, you need to install and configure the Azure Virtual Desktop components. This involves setting up the virtual machines, configuring the host pool, and installing the necessary software. You can use the following components in your Azure Virtual Desktop environment:

  • Host pool: This is a collection of virtual machines that provide remote desktop access to users.
  • Virtual machines: These are the virtual machines that host the desktops and applications.
  • Remote Desktop Session Host (RDSH): This is a Windows Server role that provides remote desktop services to users.
  • Windows Virtual Desktop Agent: This is the software that you install on the virtual machines to connect them to the Azure Virtual Desktop service.

You can install and configure these components using PowerShell scripts, Azure Automation, or other deployment tools.

Step 5: Test and validate

After you have set up your Azure Virtual Desktop environment, you need to test and validate it to ensure that everything is working as expected. You can do this by connecting to the virtual desktops and running various tests to ensure that everything is working correctly. You can also use Azure Monitor and other monitoring tools to monitor the performance and availability of your Azure Virtual Desktop environment.

Conclusion

In conclusion, building an entire Azure Virtual Desktop environment using Azure Bicep requires some expertise and planning. It is essential to understand the Azure Virtual Desktop architecture and components, as well as how to define infrastructure in Azure Bicep. However, by following these steps, you can build an Azure Virtual Desktop environment that is efficient, scalable, and easy to manage.

Why use Infrastructure as Code solutions like Azure Bicep or Terraform

When it comes to deploying infrastructure in the cloud, there are many tools available to choose from. Two popular choices are Azure Bicep and Terraform. Both of these tools are infrastructure as code (IaC) solutions, meaning that they allow you to define your cloud infrastructure in a declarative language and then deploy that infrastructure using automation. In this blog post, we will explore why using Azure Bicep or Terraform in an enterprise environment is beneficial.

Benefits of Using Azure Bicep or Terraform

Consistency and Reusability

In an enterprise environment, consistency and reusability are crucial. You need to ensure that your infrastructure is deployed in the same way every time and that your infrastructure components are reusable. This is where Azure Bicep or Terraform come in. These tools allow you to define your infrastructure in a declarative language, which means that your infrastructure will be deployed in the same way every time. Additionally, because you can reuse code in Azure Bicep or Terraform, you can create templates for your infrastructure components that can be used across different projects and environments.

Automation and Efficiency

Another benefit of using Azure Bicep or Terraform in an enterprise environment is automation and efficiency. By defining your infrastructure in code, you can automate the deployment process, which reduces the likelihood of human error and makes the deployment process more efficient. Additionally, because you can reuse code in Azure Bicep or Terraform, you can create templates that can be used across different projects and environments, which reduces the amount of time and effort required to deploy infrastructure.

Version Control

In an enterprise environment, version control is essential. You need to be able to track changes to your infrastructure over time, and you need to be able to roll back changes if necessary. With Azure Bicep or Terraform, you can use version control to track changes to your infrastructure code. This allows you to see who made changes, when those changes were made, and what those changes were. Additionally, you can roll back changes if necessary, which helps you avoid downtime or other issues that might arise from changes to your infrastructure.

Collaboration

Finally, collaboration is critical in an enterprise environment. You need to be able to work with other members of your team to deploy infrastructure, and you need to be able to share your infrastructure code with others. Azure Bicep or Terraform makes collaboration easy by allowing you to define your infrastructure in a declarative language that can be easily shared and understood by others. Additionally, because you can use version control with Azure Bicep or Terraform, you can collaborate on changes to your infrastructure code with others, which makes it easier to work together.

Conclusion

In conclusion, using Azure Bicep or Terraform in an enterprise environment offers numerous benefits, including consistency and reusability, automation and efficiency, version control, and collaboration. By using these tools, you can ensure that your infrastructure is deployed in the same way every time, reduce the amount of time and effort required to deploy infrastructure, track changes to your infrastructure code over time, and collaborate more effectively with others. If you are looking for an IaC solution for your enterprise, Azure Bicep or Terraform are both excellent choices to consider.

Universal Cloud Print Preview

I love it, but do customers too?

I love how easy this solution is to implement, and how it pairs a modern workspace with a fitting printing solution on the user end.
The experience for the user is simply amazing, and I think it took some real effort on Microsoft's end.
So, they did nice work on that.

However, there are some limitations:

  • Follow-me or badge printing is simply not possible for now.
  • Physically installed extra paper bins can't be added.
  • Auto-stapling is not available.

I understand that these are features added by the printer manufacturers, but organizations have bought these machines to fit their printing needs; you can't take that away by implementing this.

And it’s all because you don’t have a native print driver.
Don’t get me wrong, I hate native printer drivers.
Without driver isolation on, you can destroy printing for so many clients.
Especially in large organizations, this has been an enormous issue.
Something you can't blame Microsoft for, because it was simply bad programming on the printer manufacturers' end.
A lot of them were using legacy shared components (many printer drivers used old HP LaserJet components, which would conflict and crash people's print spoolers in the past).

I think a wide alliance with Printer Manufacturers is necessary to make this happen, since we can only implement this in doctors’ offices or organizations where there are as many printers as people.

However, I love how this really brings an end to crashing print spoolers and printer configuration issues.

As companies become increasingly paperless, this solution becomes a better fit over time, I think.
For legacy organizations with comprehensive printing requirements, we may also have to start thinking about other ways to make sure printing is implemented in a fitting way.
Being creative has been a real help before; plenty of solutions are available, so maybe this will become a big competitor too.

Windows 10X: more reliable but may slow certain things down.

As you may know, Windows Core OS is the core operating system for many future variants of Windows.
One of those is Windows 10X, the operating system that would become the primary operating system of Dual Screen devices initially.

Microsoft has recently announced that because of the Novel Coronavirus outbreak, plans to release Windows 10X for the Surface Neo and other dual-screen devices will be postponed and they will now focus on building the operating system for existing single-screen devices.

One of the major advantages of Windows 10X is also its Achilles' heel.
It will run Win32 applications in containers, where only Modern apps will run native in the operating system.
This reduces the attack surface of the operating system since every application is sandboxed.

They also will be able to service the operating system faster and more reliably.
Some even say that a simple reboot will suffice for Windows 10X, since the operating system can fully install the update in the background (like it does on Chrome OS devices).

This would be the best of both worlds: better battery life, a more secure operating system, fast servicing, more reliability (a win32 app will in the worst case crash the container it is running in), and still support for Win32 apps.
Let's be honest: until now, only Win32 apps have been able to crash the entire operating system.

However, the big issue here is that Windows 10X will be emulating an operating system inside those containers, and as we all know, emulation reduces performance in some way.
This really shows that Microsoft is still working towards a future where Win32 apps are history.

A script to list all members and owners per Team

This script lists all members and owners per team.
When you add -savedcred:$true, it saves a credential file locally, which gives you automated access to your tenant using that same account.
It won't save your credentials in plain text; it uses the Windows Credential Vault, which should be perfectly secure.
When using MFA on that service account, make sure you use an app password.

It also lists the object IDs of both the users and the teams, which means you can use the exported CSV for other scripts (like removing a user from all teams).

The following columns are shown:

You can download it from: https://gallery.technet.microsoft.com/A-script-to-list-all-413530c6

Edge stable goes live before official announcement


When you Google ‘Edge Stable’ today, you’ll get the official download link for the stable version of the Chromium-based Microsoft Edge.
I downloaded it and saw it had the same Chromium version number as the beta channel, which most likely means it will be released soon.

Here’s a screenshot for when it gets pulled (hopefully it won’t):

After a year of testing, Microsoft finally seems to be wrapping things up while preparing for launch.
The official download link can be found here:
https://go.microsoft.com/fwlink/?linkid=2069324&Channel=Stable&language=en

Fun fact: it uninstalls the Edge version that comes with Windows, which likely means that future versions of Windows will get this version of Edge by default.

The Chrome import functionality works like a charm, so I would absolutely give it a shot.


