Thursday, 31 July 2025

Filter the PurchTableListPage in D365FO on dimensions (BusinessUnit and Division) using the DimensionProvider class

In D365FO, dimensions are a powerful tool for categorizing and analyzing transactional data. But what if you want to filter purchase orders based on specific dimension values, say BusinessUnit and Division? This post shows you how to do just that using X++ and the DimensionProvider class.

Scenario

You want to filter the PurchTableListPage (Purchase Orders) to only show entries where:

  • BusinessUnit = 02

  • Division = 31700

This is useful for reports, user-specific views, or tailored processing logic.

Let’s walk through the code and see how it works.

The Code: Filter PurchTable by Dimensions

internal final class TestDimFilter
{
    public static void main(Args _args)
    {
        Query query = new Query();
        QueryRun queryRun;
        QueryBuildDataSource queryPurchTable;
        PurchTable purchTable;
        DimensionProvider dimensionProvider = new DimensionProvider();
        DimensionAttributeValueSetStorage dimAttrValueSetStorage;
        int i;

        // Add PurchTable as the root data source
        queryPurchTable = query.addDataSource(tableNum(PurchTable));

        // Add dimension filter: BusinessUnit = "02"
        dimensionProvider.addAttributeRangeToQuery(
            query,
            queryPurchTable.name(),               // Data source alias
            identifierStr(DefaultDimension),      // Dimension field
            DimensionComponent::DimensionAttribute,
            "02",                                 // Filter value
            "BusinessUnit",                       // Dimension name
            true                                  // Add as a new range
        );

        // Add dimension filter: Division = "31700"
        dimensionProvider.addAttributeRangeToQuery(
            query,
            queryPurchTable.name(),
            identifierStr(DefaultDimension),
            DimensionComponent::DimensionAttribute,
            "31700",
            "Division",
            true
        );

        // Run the query
        queryRun = new QueryRun(query);

        while (queryRun.next())
        {
            purchTable = queryRun.get(tableNum(PurchTable));

            // Read and print the dimension values
            dimAttrValueSetStorage = DimensionAttributeValueSetStorage::find(purchTable.DefaultDimension);
            for (i = 1; i <= dimAttrValueSetStorage.elements(); i++)
            {
                info(strFmt("PurchId %1 -> Dim Name %2: value %3",
                    purchTable.PurchId,
                    DimensionAttribute::find(dimAttrValueSetStorage.getAttributeByIndex(i)).Name,
                    dimAttrValueSetStorage.getDisplayValueByIndex(i)));
            }
        }
    }
}

Breakdown: What's Happening?

  • Query and QueryBuildDataSource are used to construct a query against the PurchTable.

  • The DimensionProvider object's addAttributeRangeToQuery() method injects a dimension-based filter directly into the query.

  • The two dimension names used (BusinessUnit and Division) must match exactly what's defined in your D365FO instance.

  • Once the query runs, we iterate over results and print out each purchase order's dimension values for verification.

Real-World Use Cases

  • Security/Personalization: Limit purchase order views based on the user's assigned business unit.

  • Reporting: Generate targeted reports for specific departments or divisions.

  • Data Exports: Extract dimension-specific purchase order data for external systems.

Final Thoughts

This approach keeps your logic clean and reusable, especially in custom forms or batch jobs. The DimensionProvider class simplifies what used to be a tricky process—filtering ledger or default dimensions in queries—into readable and maintainable code.

Wednesday, 16 July 2025

Cleaning Up Duplicate Records in SQL Server: A Practical Walkthrough

🧹 How to Identify and Remove Duplicate Records in SQL Server

Duplicate records are a common headache in databases. They mess up reporting, inflate numbers, and introduce inconsistencies that can quietly break things. Whether you're cleaning legacy data or preventing future problems, knowing how to find and eliminate duplicates is an essential SQL skill.

Here’s a practical approach using a sample table we'll call xxyyzz.

Step 1: Explore the Table

Start by reviewing what’s in the table:

SELECT * FROM xxyyzz;

This gives you the lay of the land. Before doing anything destructive, it's crucial to understand which columns matter when determining if two rows are considered duplicates.

Step 2: Detect Duplicates

Let’s say duplicates are defined by a combination of columns — for example, everything except a unique ID or timestamp. You can group by those columns and count how many times each combination appears.

SELECT
    column1, column2, column3, ..., columnN,
    COUNT(*) AS DuplicateCount
FROM xxyyzz
GROUP BY
    column1, column2, column3, ..., columnN
HAVING COUNT(*) > 1;

Replace column1 through columnN with the fields that define a unique row in your context.

Step 3: Remove Duplicates Using a CTE

Now comes the cleanup. You want to keep one row from each group and delete the rest. This is where a Common Table Expression (CTE) with ROW_NUMBER() comes in.

WITH DuplicatesCTE AS (
    SELECT *,
        ROW_NUMBER() OVER (
            PARTITION BY column1, column2, column3, ..., columnN
            ORDER BY ID -- or CreatedDate, or another tie-breaker
        ) AS rn
    FROM xxyyzz
)
DELETE FROM DuplicatesCTE WHERE rn > 1;

What this does:

  • The ROW_NUMBER() function assigns a row number to each record in each duplicate group.

  • Rows are grouped by the columns that define uniqueness.

  • The row with rn = 1 (usually the oldest or lowest ID) is kept.

  • Everything else (rn > 1) is deleted.

⚠️ Always back up your data or test in a staging environment before running deletes.
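To see the detect-then-delete workflow end to end, here is a minimal sketch using Python's built-in sqlite3 module. This is an illustration, not SQL Server: the table and columns (Customers, Email, Name) are made up for the example, and since SQLite does not support deleting through a CTE, the delete keeps the lowest ID per group instead — the same effect as keeping the rn = 1 row in the T-SQL version above.

```python
import sqlite3

# Build a throwaway table with some deliberate duplicates.
# Names here (Customers, Email, Name) are hypothetical.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Customers (ID INTEGER PRIMARY KEY, Email TEXT, Name TEXT);
    INSERT INTO Customers (Email, Name) VALUES
        ('a@x.com', 'Alice'),
        ('a@x.com', 'Alice'),   -- duplicate
        ('b@x.com', 'Bob'),
        ('a@x.com', 'Alice');   -- duplicate
""")

# Step 2: detect duplicate groups with GROUP BY + HAVING COUNT(*) > 1.
dupes = conn.execute("""
    SELECT Email, Name, COUNT(*) AS DuplicateCount
    FROM Customers
    GROUP BY Email, Name
    HAVING COUNT(*) > 1
""").fetchall()
print(dupes)  # [('a@x.com', 'Alice', 3)]

# Step 3 (SQLite equivalent of the CTE delete): keep the lowest ID
# in each group and delete everything else.
conn.execute("""
    DELETE FROM Customers
    WHERE ID NOT IN (
        SELECT MIN(ID) FROM Customers GROUP BY Email, Name
    )
""")
print(conn.execute("SELECT COUNT(*) FROM Customers").fetchone()[0])  # 2
```

After the delete, one 'Alice' row and the 'Bob' row remain — exactly what the ROW_NUMBER() approach keeps in SQL Server.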

✅ Optional: Prevent Future Duplicates

If this table should never have duplicates in the first place, consider enforcing a constraint with a unique index:

CREATE UNIQUE INDEX UX_xxyyzz_UniqueRows
ON xxyyzz (column1, column2, column3, ..., columnN);

This will block any future insertions that would create another duplicate.
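A quick way to convince yourself the constraint works is the sketch below, again using Python's sqlite3 module as a stand-in (the CREATE UNIQUE INDEX syntax is the same in SQL Server, which raises a duplicate-key error where SQLite raises IntegrityError; the names are hypothetical).

```python
import sqlite3

# Create a table with a unique index over the columns that define uniqueness.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Customers (ID INTEGER PRIMARY KEY, Email TEXT, Name TEXT)")
conn.execute("CREATE UNIQUE INDEX UX_Customers_UniqueRows ON Customers (Email, Name)")

conn.execute("INSERT INTO Customers (Email, Name) VALUES ('a@x.com', 'Alice')")

# A second identical insert is rejected by the index.
try:
    conn.execute("INSERT INTO Customers (Email, Name) VALUES ('a@x.com', 'Alice')")
    blocked = False
except sqlite3.IntegrityError:
    blocked = True
print(blocked)  # True
```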

Final Thoughts

Cleaning duplicates is routine work, but it pays off every time: it improves data accuracy, simplifies queries, and can improve performance. The combination of GROUP BY, ROW_NUMBER(), and a targeted DELETE gives you a clean, reliable fix.

Let this be part of your standard SQL toolkit — and if you need help customising it for a specific dataset or situation, just ask. 


How to Disable “Advanced Filter or Sort” and Enforce Custom Filters on Any D365FO Form

 In Dynamics 365 Finance and Operations, users can apply filters through the “Advanced filter or sort” feature found under the Options tab...