Category: Azure & Cloud

Azure services and cloud architecture

  • CosmosDB Performance: Lightning-Fast Query Optimization Guide

    Picture this: your application is scaling rapidly, user activity is at an all-time high, and your CosmosDB queries are starting to lag. What was once a snappy user experience now feels sluggish. Your dashboards are lighting up with warnings about query latency, and your team is scrambling to figure out what went wrong. Sound familiar?

    CosmosDB is a powerful, globally distributed database service, but like any tool, its performance depends on how you use it. The good news? With the right strategies, you can unlock blazing-fast query speeds, maximize throughput, and minimize latency. This guide will take you beyond the basics, diving deep into actionable techniques, real-world examples, and the gotchas you need to avoid.

    🔐 Security Note: Before diving into performance optimization, ensure your CosmosDB instance is secured. Use private endpoints, enable network restrictions, and always encrypt data in transit and at rest. Performance is meaningless if your data is exposed.

    1. Use the Right SDK and Client

    Choosing the right SDK and client is foundational to CosmosDB performance. The CosmosClient class in the azure-cosmos Python SDK (which superseded the older DocumentClient API) is optimized for working with JSON documents over CosmosDB's native protocols, with built-in connection pooling and partition-aware routing. Avoid generic SQL clients or hand-rolled REST calls, as they lack the optimizations tailored for CosmosDB's unique architecture.

    # Example: Using CosmosClient in Python
    from azure.cosmos import CosmosClient
    
    # Initialize the CosmosClient
    url = "https://your-account.documents.azure.com:443/"
    key = "your-primary-key"
    client = CosmosClient(url, credential=key)
    
    # Access a specific database and container
    database_name = "SampleDB"
    container_name = "SampleContainer"
    database = client.get_database_client(database_name)
    container = database.get_container_client(container_name)
    
    # Querying data
    query = "SELECT * FROM c WHERE c.category = 'electronics'"
    items = list(container.query_items(query=query, enable_cross_partition_query=True))
    
    for item in items:
        print(item)
    

    By using the Cosmos SDK, you leverage built-in features like connection pooling, retry policies, and optimized query execution. This is the first step toward better performance.

    💡 Pro Tip: Always use the latest version of the CosmosDB SDK. New releases often include performance improvements and bug fixes.

    2. Choose the Right Consistency Level

    CosmosDB offers five consistency levels: Strong, Bounded Staleness, Session, Consistent Prefix, and Eventual. Each level trades off between consistency and latency. For example:

    • Strong Consistency: Guarantees the highest data integrity but introduces higher latency.
    • Eventual Consistency: Offers the lowest latency but sacrifices immediate consistency.

    Choose the consistency level that aligns with your application’s requirements. For instance, a financial application may prioritize strong consistency, while a social media app might favor eventual consistency for faster updates.

    # Example: Setting Consistency Level
    # The Python SDK accepts the consistency level as a string on the client
    client = CosmosClient(url, credential=key, consistency_level="Session")
    
    ⚠️ Gotcha: Setting a stricter consistency level than necessary can significantly impact performance. Evaluate your application’s tolerance for eventual consistency before defaulting to stronger levels.

    3. Optimize Partitioning

    Partitioning is at the heart of CosmosDB’s scalability. Properly distributing your data across partitions ensures even load distribution and prevents hot partitions, which can bottleneck performance.

    When designing your PartitionKey, consider:

    • High Cardinality: Choose a key with a wide range of unique values to distribute data evenly.
    • Query Patterns: Select a key that aligns with your most common query filters.
    # Example: Setting Partition Key
    from azure.cosmos import PartitionKey
    
    database.create_container_if_not_exists(
        id="SampleContainer",
        partition_key=PartitionKey(path="/category", kind="Hash"),
        offer_throughput=400
    )
    
    💡 Pro Tip: Use the Azure Portal’s “Partition Key Metrics” to identify uneven data distribution and adjust your partitioning strategy accordingly.

    4. Fine-Tune Indexing

    CosmosDB automatically indexes all fields by default, which is convenient but can lead to unnecessary overhead. Fine-tuning your IndexingPolicy can significantly improve query performance.

    # Example: Custom Indexing Policy
    from azure.cosmos import PartitionKey
    
    indexing_policy = {
        "indexingMode": "consistent",
        "includedPaths": [
            {"path": "/name/?"},
            {"path": "/category/?"}
        ],
        "excludedPaths": [
            {"path": "/*"}
        ]
    }
    
    database.create_container_if_not_exists(
        id="SampleContainer",
        partition_key=PartitionKey(path="/category"),
        indexing_policy=indexing_policy,
        offer_throughput=400
    )
    
    ⚠️ Gotcha: Over-indexing can slow down write operations. Only index fields that are frequently queried or sorted.

    5. Leverage Asynchronous Operations

    Asynchronous programming is a game-changer for performance. By using the async client in the CosmosDB SDK (azure.cosmos.aio), you avoid blocking threads and can keep many operations in flight concurrently.

    # Example: Asynchronous Query
    import asyncio
    from azure.cosmos.aio import CosmosClient
    
    async def query_items():
        async with CosmosClient(url, credential=key) as client:
            database = client.get_database_client("SampleDB")
            container = database.get_container_client("SampleContainer")
            
            query = "SELECT * FROM c WHERE c.category = 'electronics'"
            # The async client runs cross-partition queries by default,
            # so no enable_cross_partition_query flag is needed here
            async for item in container.query_items(query=query):
                print(item)
    
    asyncio.run(query_items())
    
    💡 Pro Tip: Use asynchronous methods for high-throughput applications where latency is critical.
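The payoff from async comes when many requests are in flight at once. Here is a toy, self-contained illustration of the pattern with asyncio.gather; the fetch_item coroutine is a made-up stand-in for a real Cosmos call:

```python
import asyncio

# Hypothetical stand-in for a CosmosDB call (real code would await
# container.read_item / query_items on the azure.cosmos.aio client)
async def fetch_item(item_id):
    await asyncio.sleep(0.01)  # simulate network latency
    return {"id": item_id}

async def main():
    # Launch three "queries" concurrently instead of one after another
    results = await asyncio.gather(*(fetch_item(i) for i in ("1", "2", "3")))
    return [r["id"] for r in results]

ids = asyncio.run(main())
print(ids)  # → ['1', '2', '3'] (gather preserves argument order)
```

The same shape scales to hundreds of concurrent calls, which is where the async client clearly beats a synchronous loop.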

    6. Optimize Throughput and Scaling

    CosmosDB allows you to provision throughput at the container or database level, measured in Request Units per second (RU/s). Adjusting provisioned throughput ensures you allocate the right resources for your workload.

    # Example: Scaling Throughput
    container.replace_throughput(1000)  # Scale to 1000 RU/s
    

    For unpredictable workloads, consider using autoscale throughput, which automatically adjusts resources based on demand.
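With autoscale you configure only a maximum; CosmosDB then scales between 10% of that maximum and the maximum itself. A quick helper for capacity planning (the SDK call in the trailing comment assumes azure-cosmos 4.x and is a sketch, not run here):

```python
def autoscale_range(max_ru):
    # Autoscale floats between 10% of the configured max RU/s and the max itself
    return (max_ru // 10, max_ru)

# e.g. a 4000 RU/s autoscale maximum floats between 400 and 4000 RU/s
low, high = autoscale_range(4000)
print(low, high)  # → 400 4000

# Creating an autoscale container with the Python SDK looks roughly like:
#   from azure.cosmos import PartitionKey, ThroughputProperties
#   database.create_container_if_not_exists(
#       id="SampleContainer",
#       partition_key=PartitionKey(path="/category"),
#       offer_throughput=ThroughputProperties(auto_scale_max_throughput=4000))
```

Note that you are billed for the floor even when idle, so size the maximum to your real peak rather than a guess.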

    💡 Pro Tip: Monitor your RU/s usage to avoid throttling (HTTP 429) and unexpected costs. Use Azure Cost Management to set alerts for high usage.

    7. Cache and Batch Operations

    Reducing network overhead is critical for performance. Reuse a single CosmosClient for the lifetime of your application so the SDK can cache partition key ranges and keep connections warm, and batch operations to minimize round trips. The Python SDK (4.x) supports transactional batches of up to 100 operations that target the same partition key.

    # Example: Transactional batch (all operations must share one partition key)
    batch_operations = [
        ("create", ({"id": "1", "category": "electronics", "name": "Laptop"},)),
        ("create", ({"id": "2", "category": "electronics", "name": "Phone"},))
    ]
    
    container.execute_item_batch(batch_operations, partition_key="electronics")
    
    💡 Pro Tip: Use bulk operations for high-volume writes to reduce latency and improve throughput.

    Conclusion

    CosmosDB is a powerful tool, but achieving optimal performance requires careful planning and execution. Here’s a quick recap of the key takeaways:

    • Use the official CosmosDB SDK and CosmosClient for optimized interactions.
    • Choose the right consistency level based on your application’s needs.
    • Design your partitioning strategy to avoid hot partitions.
    • Fine-tune indexing to balance query performance and write efficiency.
    • Leverage asynchronous operations and batch processing to reduce latency.

    What are your go-to strategies for optimizing CosmosDB performance? Share your tips and experiences in the comments below!

  • Make a Microsoft Graph call using JavaScript

    Unlocking Microsoft 365 Data with JavaScript

    Imagine this: your team is building a productivity app that needs to pull in user calendars, emails, or OneDrive files from Microsoft 365. You’ve heard of Microsoft Graph, the unified API endpoint for accessing Microsoft 365 data, but you’re not sure where to start. The documentation feels overwhelming, and you just want to see a working example in JavaScript. Sound familiar?

    Microsoft Graph is a goldmine for developers. It allows you to interact with Microsoft 365 services like Outlook, Teams, OneDrive, and more—all through a single API. But getting started can be tricky, especially when it comes to authentication and managing API calls securely. In this guide, I’ll walk you through how to set up and make your first Microsoft Graph API call using JavaScript. Along the way, I’ll share some hard-earned lessons, gotchas, and tips to ensure your implementation is both functional and secure.

    Before We Dive In: Security Implications

    Before writing a single line of code, let’s talk security. Microsoft Graph requires OAuth 2.0 for authentication, which means you’ll need to handle access tokens. These tokens grant access to sensitive user data, so mishandling them can lead to serious security vulnerabilities.

    🔐 Security Note: Never hardcode sensitive credentials like client secrets or access tokens in your codebase. Use environment variables or a secure secrets management service to store them.

    Additionally, always request the minimum set of permissions (scopes) your app needs. Over-permissioning is not only a security risk but also a violation of Microsoft’s best practices.

    Step 1: Setting Up the Microsoft Graph JavaScript Client Library

    The easiest way to interact with Microsoft Graph in JavaScript is by using the official @microsoft/microsoft-graph-client library. This library simplifies the process of making HTTP requests and handling responses.

    First, install the library via npm:

    npm install @microsoft/microsoft-graph-client

    Once installed, you’ll also need an authentication library to handle OAuth 2.0. For this example, we’ll use msal-node, Microsoft’s official library for authentication in Node.js:

    npm install @azure/msal-node

    Step 2: Authenticating with Microsoft Graph

    Authentication is the trickiest part of working with Microsoft Graph. You’ll need to register your application in the Azure portal to get a client_id and client_secret. Here’s how:

    1. Go to the Azure Portal and navigate to “App Registrations.”
    2. Click “New Registration” and fill in the required details.
    3. Once registered, note down the Application (client) ID and Directory (tenant) ID.
    4. Under “Certificates & Secrets,” create a new client secret. Store this securely; you’ll need it later.

    With your app registered, you can now authenticate using the msal-node library. Here’s a basic example:

    const msal = require('@azure/msal-node');
    
    // MSAL configuration
    const config = {
      auth: {
        clientId: 'YOUR_APP_CLIENT_ID',
        authority: 'https://login.microsoftonline.com/YOUR_TENANT_ID',
        clientSecret: 'YOUR_APP_CLIENT_SECRET',
      },
    };
    
    // Create an MSAL client
    const cca = new msal.ConfidentialClientApplication(config);
    
    // Request an access token
    async function getAccessToken() {
      const tokenRequest = {
        scopes: ['https://graph.microsoft.com/.default'],
      };
    
      try {
        const response = await cca.acquireTokenByClientCredential(tokenRequest);
        return response.accessToken;
      } catch (error) {
        console.error('Error acquiring token:', error);
        throw error;
      }
    }
    

    In this example, we’re using the “client credentials” flow, which is ideal for server-side applications. If you’re building a client-side app, you’ll need to use a different flow, such as “authorization code.”

    Step 3: Making Your First Microsoft Graph API Call

    Now that you have an access token, you can use the Microsoft Graph client library to make API calls. One important gotcha: the /me endpoint only works with delegated (signed-in user) tokens. The client credentials flow above produces an app-only token, so we'll list users via the /users endpoint instead (this requires the User.Read.All application permission, granted by an admin):

    const { Client } = require('@microsoft/microsoft-graph-client');
    require('isomorphic-fetch'); // Required for fetch support in Node.js
    
    async function listUsers(accessToken) {
      // Initialize the Graph client
      const client = Client.init({
        authProvider: (done) => {
          done(null, accessToken);
        },
      });
    
      try {
        const users = await client.api('/users').get();
        console.log('Users:', users.value);
      } catch (error) {
        console.error('Error fetching users:', error);
      }
    }
    
    // Example usage
    (async () => {
      const accessToken = await getAccessToken();
      await listUsers(accessToken);
    })();
    

    This code initializes the Microsoft Graph client with an authentication provider that supplies the access token. The api('/users').get() call retrieves the user list; in a delegated (interactive sign-in) flow, the equivalent call for the current user's profile would be api('/me').get().

    💡 Pro Tip: Use the select query parameter to fetch only the fields you need. For example, client.api('/users').select('displayName,mail').get() returns only each user's display name and email.

    Step 4: Handling Errors and Debugging

    Working with APIs inevitably involves error handling. Microsoft Graph uses standard HTTP status codes to indicate success or failure. Here are some common scenarios:

    • 401 Unauthorized: Your access token is invalid or expired. Ensure you’re refreshing tokens as needed.
    • 403 Forbidden: Your app lacks the required permissions. Double-check the scopes you’ve requested.
    • 404 Not Found: The endpoint you’re calling doesn’t exist. Verify the API URL.

    To debug issues, enable logging in the Microsoft Graph client:

    const client = Client.init({
      authProvider: (done) => {
        done(null, accessToken);
      },
      debugLogging: true, // Enable debug logging
    });
    

    Step 5: Scaling Your Implementation

    Once you’ve mastered the basics, you’ll likely want to scale your implementation. Here are some tips:

    • Batch Requests: Use the /$batch endpoint to combine multiple API calls into a single request, reducing latency.
    • Pagination: Many Microsoft Graph endpoints return paginated results. Use the @odata.nextLink property to fetch additional pages.
    • Rate Limiting: Microsoft Graph enforces rate limits. Implement retry logic with exponential backoff to handle 429 Too Many Requests errors.
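Retry-with-backoff is straightforward to roll by hand. A minimal sketch (withRetry and the stub are illustrative names, not part of the Graph SDK; the real error object from the client carries a statusCode you can check):

```javascript
// Minimal retry-with-backoff wrapper: callGraph is any async function that
// performs a Graph request and throws an error with statusCode 429 on throttling
async function withRetry(callGraph, maxRetries = 3) {
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    try {
      return await callGraph();
    } catch (error) {
      const isThrottled = error && error.statusCode === 429;
      if (!isThrottled || attempt === maxRetries) throw error;
      // Honor a Retry-After hint when present, else back off exponentially
      const waitMs = (error.retryAfterSeconds || 2 ** attempt) * 1000;
      await new Promise((resolve) => setTimeout(resolve, waitMs));
    }
  }
}

// Demo with a stub that fails twice with 429, then succeeds
let calls = 0;
const stub = async () => {
  calls++;
  if (calls < 3) {
    const err = new Error('Too Many Requests');
    err.statusCode = 429;
    err.retryAfterSeconds = 0.01;
    throw err;
  }
  return 'ok';
};

withRetry(stub).then((result) => console.log(result, calls)); // ok 3
```

In production you would wrap each Graph call, e.g. withRetry(() => client.api('/users').get()).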

    Conclusion

    By now, you should have a solid understanding of how to make Microsoft Graph API calls using JavaScript. Let’s recap the key takeaways:

    • Use the @microsoft/microsoft-graph-client library to simplify API interactions.
    • Authenticate securely using the msal-node library and environment variables for sensitive credentials.
    • Start with basic API calls like /users (or /me in delegated flows) and gradually explore more advanced features like batching and pagination.
    • Always handle errors gracefully and implement retry logic for rate-limited requests.
    • Request only the permissions your app truly needs to minimize security risks.

    What will you build with Microsoft Graph? Share your thoughts and questions in the comments below!

  • How to use the az CLI to control VMs

    Imagine this: your boss needs a new web server spun up right now—and you’re the go-to person. You could click around in the Azure portal, but let’s be honest, that’s slow and error-prone. Real pros use the az CLI to automate, control, and dominate their Azure VMs. If you want to move fast and avoid mistakes, this guide is for you.

    Step 1: Create a Resource Group

    Resource groups are the containers for your Azure resources. Always start here—don’t be the person who dumps everything into the default group.

    az group create --name someRG --location eastus
    • Tip: Pick a location close to your users for lower latency.
    • Gotcha: Resource group names must be unique within your subscription.

    Step 2: Create a Linux VM

    Now, let’s launch a VM. Ubuntu LTS is a solid, secure choice for most workloads.

    az vm create --resource-group someRG --name someVM --image Ubuntu2204 --admin-username azureuser --generate-ssh-keys
    • Tip: Use --generate-ssh-keys to avoid password headaches.
    • Gotcha: Don’t forget --admin-username—the default is not always what you expect.
    • Gotcha: The old UbuntuLTS image alias has been retired; use a versioned alias like Ubuntu2204, or list current options with az vm image list --output table.

    Step 3: VM Lifecycle Management

    VMs aren’t fire-and-forget. You’ll need to redeploy, start, stop, and inspect them. Here’s how:

    az vm redeploy --resource-group someRG --name someVM
    az vm start --resource-group someRG --name someVM
    az vm deallocate --resource-group someRG --name someVM
    az vm show --resource-group someRG --name someVM
    • Tip: deallocate stops billing for compute—don’t pay for idle VMs!
    • Gotcha: Redeploy is your secret weapon for fixing weird networking issues.

    Step 4: Get the Public IP Address

    Need to connect? Grab your VM’s public IP like a pro:

    az vm show -d -g someRG -n someVM --query publicIps -o tsv
    • Tip: The -d flag gives you instance details, including IPs.
    • Gotcha: If you don’t see an IP, check your network settings—public IPs aren’t enabled by default on all VM images.

    Step 5: Remote Command Execution

    SSH in and run commands. Here’s how to check your VM’s uptime:

    ssh azureuser@<VM_PUBLIC_IP> 'uptime'
    • Tip: Replace <VM_PUBLIC_IP> with the actual IP from the previous step.
    • Gotcha: Make sure your local SSH key matches the one on the VM, or you’ll get locked out.
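If you find yourself repeating these steps, they script cleanly. A minimal sketch in Python using subprocess (the resource names mirror the placeholders above; call main() against a real subscription to actually run it):

```python
import subprocess

def az_vm_cmd(action, resource_group, name):
    # Build an az vm command as an argument list (no shell quoting surprises)
    return ["az", "vm", action, "--resource-group", resource_group, "--name", name]

def run(cmd):
    # check=True raises if the az command fails instead of failing silently
    return subprocess.run(cmd, check=True, capture_output=True, text=True).stdout

def main():
    rg, vm = "someRG", "someVM"
    run(["az", "group", "create", "--name", rg, "--location", "eastus"])
    run(az_vm_cmd("start", rg, vm))
    ip = run(["az", "vm", "show", "-d", "-g", rg, "-n", vm,
              "--query", "publicIps", "-o", "tsv"]).strip()
    print("Connect with: ssh azureuser@" + ip)
```

Passing arguments as a list (rather than one shell string) keeps resource names with spaces or special characters safe.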

    Final Thoughts

    The az CLI is your ticket to fast, repeatable, and reliable VM management. Don’t settle for point-and-click—automate everything, and keep your cloud under control. If you hit a snag, check the official docs or run az vm --help for more options.

  • Python: Azure Service Bus Without SDK (REST API Guide)

    Want to send and receive notifications on Azure Service Bus using Python, but don’t want to rely on the official SDK? This guide shows you how to authenticate and interact with Azure Service Bus queues directly using HTTP requests and SAS tokens. Let’s dive in!

    Azure Service Bus (ASB) uses Azure Active Directory (AAD) or Shared Access Signature (SAS) tokens for authentication. In this example, we assume you have owner access and can generate a Send/Listen SAS key from the Azure Portal. Here’s how to create a valid SAS token:

    def get_auth_token(sb_name, queue_name, sas_name, sas_value):
        # Generate a SAS token (needs time, urllib.parse, hmac, hashlib, base64)
        uri = "https://{}.servicebus.windows.net/{}".format(sb_name, queue_name)
        encoded_uri = urllib.parse.quote_plus(uri)
        sas = sas_value.encode('utf-8')
        expiry = str(int(time.time() + 10000))
        # The string to sign is the URL-encoded URI, a newline, then the expiry
        string_to_sign = (encoded_uri + '\n' + expiry).encode('utf-8')
        signed_hmac_sha256 = hmac.HMAC(sas, string_to_sign, hashlib.sha256)
        signature = urllib.parse.quote(base64.b64encode(signed_hmac_sha256.digest()))
        return {"uri": uri,
                "token": 'SharedAccessSignature sr={}&sig={}&se={}&skn={}'
                         .format(encoded_uri, signature, expiry, sas_name)
               }
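
Because HMAC-SHA256 is deterministic, you can sanity-check the token format locally before touching Azure. The standalone helper below takes the expiry as a parameter so the output is reproducible; the namespace, queue, policy name, and key are all made up:

```python
import base64
import hashlib
import hmac
import urllib.parse

def make_sas(uri, sas_name, sas_value, expiry):
    # Build a Service Bus SAS token: sign "<encoded uri>\n<expiry>" with the key
    encoded_uri = urllib.parse.quote_plus(uri)
    string_to_sign = (encoded_uri + '\n' + str(expiry)).encode('utf-8')
    mac = hmac.HMAC(sas_value.encode('utf-8'), string_to_sign, hashlib.sha256)
    signature = urllib.parse.quote(base64.b64encode(mac.digest()))
    return 'SharedAccessSignature sr={}&sig={}&se={}&skn={}'.format(
        encoded_uri, signature, expiry, sas_name)

token = make_sas("https://demo-ns.servicebus.windows.net/demo-queue",
                 "RootManageSharedAccessKey", "not-a-real-key", 1700000000)
print(token)
```

If the sr, sig, se, and skn fields all appear and sr is URL-encoded, the token is well-formed; a 401 from the service then usually means a wrong key or an expired se value.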

    Once you have generated the token, sending and receiving messages is straightforward. Below is a complete code snippet that generates a SAS token and sends your machine’s IP address via Azure Service Bus.

    import time
    import urllib.parse
    import hmac
    import hashlib
    import base64
    import requests
    import socket
    
    h_name = socket.gethostname()
    IP_address = socket.gethostbyname(h_name)
    
    def get_auth_token(sb_name, queue_name, sas_name, sas_value):
        # Generate a SAS token
        uri = "https://{}.servicebus.windows.net/{}".format(sb_name, queue_name)
        encoded_uri = urllib.parse.quote_plus(uri)
        sas = sas_value.encode('utf-8')
        expiry = str(int(time.time() + 10000))
        # The string to sign is the URL-encoded URI, a newline, then the expiry
        string_to_sign = (encoded_uri + '\n' + expiry).encode('utf-8')
        signed_hmac_sha256 = hmac.HMAC(sas, string_to_sign, hashlib.sha256)
        signature = urllib.parse.quote(base64.b64encode(signed_hmac_sha256.digest()))
        return {"uri": uri,
                "token": 'SharedAccessSignature sr={}&sig={}&se={}&skn={}'
                         .format(encoded_uri, signature, expiry, sas_name)
               }
    
    def send_message(token, message):
        # POST https://{serviceNamespace}.servicebus.windows.net/{queuePath}/messages
        r = requests.post(token['uri'] + "/messages",
            headers={
                "Authorization": token['token'],
                "Content-Type": "application/json"
            },
            json=message)
        return r.status_code  # 201 Created on success
    
    def receive_message(token):
        # DELETE https://{serviceNamespace}.servicebus.windows.net/{queuePath}/messages/head
        # This is a destructive read; a 204 response means the queue was empty
        r = requests.delete(token['uri'] + "/messages/head",
            headers={
                "Authorization": token['token'],
            })
        return r.text
    
    sb_name = "<service bus name>"
    q_name = "<service bus queue name>"
    
    skn = "<key name for that access key>"
    key = "<access key created in portal>"
    
    token = get_auth_token(sb_name, q_name, skn, key)
    print(token['token'])
    
    send_message(token, {'ip': IP_address})
    print(receive_message(token))
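The receive above is a destructive read: the message is gone whether or not you process it successfully. The Service Bus REST API also supports peek-lock: POST {queue}/messages/head locks the next message without removing it (the lock details come back in the BrokerProperties response header), and DELETE {queue}/messages/{messageId}/{lockToken} completes it. A sketch of the URL construction, with the live requests calls shown as comments (all identifiers are placeholders):

```python
def peek_lock_url(base_uri):
    # POST here returns the next message and locks it without deleting it
    return base_uri + "/messages/head"

def complete_url(base_uri, message_id, lock_token):
    # DELETE here completes (removes) the previously locked message
    return "{}/messages/{}/{}".format(base_uri, message_id, lock_token)

# Against a live queue this would look roughly like:
#   import json, requests
#   r = requests.post(peek_lock_url(token['uri']),
#                     headers={"Authorization": token['token']})
#   props = json.loads(r.headers["BrokerProperties"])  # MessageId, LockToken
#   requests.delete(complete_url(token['uri'], props["MessageId"], props["LockToken"]),
#                   headers={"Authorization": token['token']})

print(peek_lock_url("https://demo-ns.servicebus.windows.net/demo-queue"))
```

Peek-lock costs an extra round trip but gives you at-least-once processing: if your handler crashes before the DELETE, the lock expires and the message becomes visible again.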