Quick Reference

3 Core Pagination Concepts

📄

PageNumber

Type: Integer (1-indexed)

Default: 1

Range: 1 to unlimited

HTTP: ?PageNumber=2

SDK: filter.PageNumber = 2

⚠️ First page is 1, NOT 0

📊

PageSize

Type: Integer

Default: 100 (SDK), varies by endpoint

Maximum: 1,000 records

HTTP: ?PageSize=100

SDK: filter.PageSize = 100

Recommended: 25-50 for UI, 500-1000 for batch

🎯

Last Page Detection

Pattern: recordCount < PageSize

No total count available - must iterate page-by-page

HTTP: Parse JSON response recordCount field

SDK: Check response.RecordCount property

Iterate until recordCount drops below PageSize

All pagination happens server-side - API returns only the requested page of results.

Why This Matters

Server-side pagination prevents timeouts, reduces memory consumption, and enables efficient processing of large datasets by retrieving records in manageable chunks (100-500 at a time). Critical for registry platforms managing hundreds of thousands of entities.

💡

The Pagination Principle

The 7G API uses offset-based pagination with PageNumber and PageSize parameters. The API returns only the requested page of results, plus a recordCount field indicating how many records are in the current page (NOT total records).

How It Works:

  1. Client requests page with PageNumber and PageSize
  2. API calculates offset: skip = (PageNumber - 1) * PageSize
  3. Database executes query with SKIP and TAKE (LIMIT) clauses
  4. API returns current page with recordCount field
  5. Client checks recordCount < PageSize to detect last page

Critical: There is NO total count field in responses. To find total records, you must iterate until recordCount < PageSize.
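The offset math in step 2 can be sketched directly (ComputeSkip is a hypothetical helper illustrating the server-side calculation, not part of the SDK):

```csharp
using System;

// skip = (PageNumber - 1) * PageSize, then take PageSize records
static int ComputeSkip(int pageNumber, int pageSize) => (pageNumber - 1) * pageSize;

Console.WriteLine(ComputeSkip(1, 100));  // 0   - first page starts at the beginning
Console.WriteLine(ComputeSkip(2, 100));  // 100 - skip the first full page
Console.WriteLine(ComputeSkip(3, 50));   // 100 - pages 1-2 held 50 records each
```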

⚙️

Pagination Parameters

Control which page and how many records to retrieve

| Parameter  | Type    | Default | Min | Max      | Description |
|------------|---------|---------|-----|----------|-------------|
| PageNumber | integer | 1       | 1   | No limit | Page to retrieve (1-indexed; first page is 1, not 0) |
| PageSize   | integer | 100     | 1   | 1,000    | Number of records per page |

Important Constraints

  • 1-indexed - PageNumber starts from 1, not 0 (first page is 1)
  • Maximum PageSize - 1,000 records per page (enforced by API)
  • Default PageSize - 100 records (SDK default in filter objects)
  • Empty results - API returns recordCount: 0 with an empty data array, not an error
http
# First page (default PageSize=100)
GET /BizEntity?PageNumber=1&PageSize=100

# Specific page (page 3, 50 records per page)
GET /BizEntity?PageNumber=3&PageSize=50

# Maximum page size (1,000 records - API limit)
GET /BizEntity?PageNumber=1&PageSize=1000
📦

Understanding Paginated Responses

How recordCount reveals page information

Every paginated response includes a recordCount field showing how many records are in THIS page, not the total dataset size.

{
  "result": true,
  "message": "",
  "recordCount": 100,  // Records in THIS page (not total records in database)
  "data": [ ... ]      // Array of 100 records
}

Critical Insight

recordCount shows current page size, not total records in the database. When recordCount < PageSize, you've reached the last page. There is no totalCount field - use this pattern to detect pagination end.
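The SDK surfaces these fields as Result, Message, RecordCount, and Data properties on its response type; a raw parse with System.Text.Json (shown here purely as an illustration, not the SDK's actual implementation) reads the same envelope:

```csharp
using System;
using System.Text.Json;

// Example envelope for a final page of 47 records (recordCount counts
// records in THIS page only - there is no total-count field).
var json = """
{ "result": true, "message": "", "recordCount": 47, "data": [] }
""";

using var doc = JsonDocument.Parse(json);
var result = doc.RootElement.GetProperty("result").GetBoolean();
var recordCount = doc.RootElement.GetProperty("recordCount").GetInt32();

Console.WriteLine($"result={result}, recordCount={recordCount}");
// recordCount=47 with PageSize=100 means this is the last page
```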

🎯

Detecting the Last Page

Know when you've retrieved all records

To determine if you've reached the last page, compare recordCount to your requested PageSize:

csharp
using System.Net.Http;
using System.Net.Http.Json;

var response = await httpClient.GetAsync("/BizEntity?PageSize=100&PageNumber=1");
var result = await response.Content.ReadFromJsonAsync<APIResponse>();

// Check if last page
if (result.RecordCount < 100)
{
    Console.WriteLine("Last page reached");
    Console.WriteLine($"Retrieved {result.RecordCount} records on final page");
}
else
{
    Console.WriteLine("More pages available - continue to next page");
}

The Pattern

If recordCount < PageSize → Last page reached

  • Example 1: PageSize=100, RecordCount=100 → More pages may exist
  • Example 2: PageSize=100, RecordCount=47 → Last page (only 47 records left)
  • Example 3: PageSize=100, RecordCount=0 → No results (empty page)
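The three examples above reduce to a one-line predicate (IsLastPage is a hypothetical helper name, not an SDK method):

```csharp
using System;

// True when the page came back short - i.e., no further pages exist.
static bool IsLastPage(int recordCount, int pageSize) => recordCount < pageSize;

Console.WriteLine(IsLastPage(100, 100));  // False - more pages may exist
Console.WriteLine(IsLastPage(47, 100));   // True - final partial page
Console.WriteLine(IsLastPage(0, 100));    // True - empty page, no results
```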
🔁

Complete Pagination Loop

Iterate through all pages until the end

Here's how to paginate through all records using a while loop and last-page detection:

csharp
// Paginate through all results
using System.Net.Http;
using System.Net.Http.Json;
using System.Text.Json;

var pageSize = 100;
var pageNumber = 1;
var allEntities = new List<BizEntityDTO>();

while (true)
{
    var response = await httpClient.GetAsync(
        $"/BizEntity?PageSize={pageSize}&PageNumber={pageNumber}"
    );

    var result = await response.Content.ReadFromJsonAsync<APIResponse>();

    if (!result.Result)
    {
        Console.WriteLine($"Error: {result.Message}");
        break;
    }

    // Deserialize and collect data
    var entities = JsonSerializer.Deserialize<List<BizEntityDTO>>(
        JsonSerializer.Serialize(result.Data)
    );

    allEntities.AddRange(entities);
    Console.WriteLine($"Retrieved page {pageNumber}: {result.RecordCount} records");

    // Check if last page
    if (result.RecordCount < pageSize)
    {
        Console.WriteLine($"Finished! Total: {allEntities.Count} entities");
        break;
    }

    pageNumber++;
    await Task.Delay(100);  // Optional: Rate limit protection
}

Key Points

  • Start with pageNumber = 1 (1-indexed)
  • Check response.Result for business logic errors
  • Break loop when recordCount < PageSize
  • Increment pageNumber++ at end of each iteration
  • Consider adding delays (100-200ms) between requests to avoid rate limiting
🔗

Combining Pagination with Filters

Paginate only the records you need

Pagination works seamlessly with filtering - apply filters to reduce the dataset, then paginate through the filtered results:

http
# Filter SMSF entities (type 4), paginate results
GET /BizEntity?BizEntityTypeID.equal=4&PageSize=50&PageNumber=1

# Filter by date range with pagination
GET /BizEntity?CreatedDate.greaterThan=2024-01-01T00:00:00Z&PageSize=100&PageNumber=2

# Complex: Filter + text search + pagination
GET /BizEntity?BizEntityTypeID.in=1,2,4&Name.contains=Trust&PageSize=25&PageNumber=1

Best Practice: Always filter FIRST to reduce dataset size, then paginate. See Query & Filtering for complete filtering documentation.

🚀

Advanced Pagination Strategies

Production patterns for different scenarios


Strategy 1: Sequential Processing (Standard)

Process pages one at a time - safest approach for write operations or when order matters.

csharp
// Standard sequential pagination - safest approach
var pageNumber = 1;
var pageSize = 100;

while (true)
{
    var filter = new BizEntityFilter
    {
        PageSize = pageSize,
        PageNumber = pageNumber
    };

    var response = await client.BizEntity.GetAsync(filter);

    if (!response.Result) break;

    // Process this page's data sequentially
    ProcessEntities(response.Data);

    // Last page detection
    if (response.RecordCount < pageSize) break;

    pageNumber++;
    await Task.Delay(100);  // Rate limit protection
}

Use When: Updating records, maintaining order, low resource usage


Strategy 2: Parallel Processing (Fast Read-Only)

Fetch multiple pages concurrently for maximum throughput - read-only operations only.

csharp
// Parallel pagination - read-only operations ONLY
// Fetch pages 1-10 concurrently for maximum speed
var tasks = new List<Task<APIResponse<object>>>();
var batchSize = 10;  // Fetch 10 pages at once

for (int page = 1; page <= batchSize; page++)
{
    var pageNumber = page;  // Capture for closure
    tasks.Add(client.BizEntity.GetAsync(new BizEntityFilter
    {
        PageSize = 100,
        PageNumber = pageNumber
    }));
}

// Wait for all pages concurrently
var responses = await Task.WhenAll(tasks);

// Process all results
foreach (var response in responses.Where(r => r.Result))
{
    var entities = JsonSerializer.Deserialize<List<BizEntityDTO>>(
        JsonSerializer.Serialize(response.Data)
    );
    ProcessEntities(entities);
}

// Note: Only use for read-only operations!
// Avoid for writes due to race conditions

Caution: Only for read-only operations. Avoid for writes (race conditions). May trigger rate limiting if too aggressive.
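One way to soften the rate-limiting risk is to cap the number of in-flight requests with SemaphoreSlim. The sketch below uses a stand-in FetchPageAsync in place of the real SDK call, so the throttling pattern itself is runnable:

```csharp
using System;
using System.Collections.Concurrent;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;

var throttle = new SemaphoreSlim(3);      // at most 3 requests in flight
var results = new ConcurrentBag<int>();

// Stand-in for client.BizEntity.GetAsync - simulates network latency
async Task<int> FetchPageAsync(int page)
{
    await Task.Delay(10);
    return page;
}

var tasks = Enumerable.Range(1, 10).Select(async page =>
{
    await throttle.WaitAsync();           // block until a slot frees up
    try { results.Add(await FetchPageAsync(page)); }
    finally { throttle.Release(); }
});

await Task.WhenAll(tasks);
Console.WriteLine($"Fetched {results.Count} pages with bounded concurrency");
```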


Strategy 3: Resumable Pagination (Long-Running)

Checkpoint progress and resume if interrupted - critical for multi-hour batch operations.

csharp
// Resumable pagination with checkpointing
// Can resume from last checkpoint if process crashes
var checkpointFile = "pagination_checkpoint.txt";
var pageNumber = File.Exists(checkpointFile)
    ? int.Parse(File.ReadAllText(checkpointFile))
    : 1;

Console.WriteLine($"Resuming from page {pageNumber}");

while (true)
{
    var filter = new BizEntityFilter
    {
        PageSize = 500,
        PageNumber = pageNumber
    };

    try
    {
        var response = await client.BizEntity.GetAsync(filter);

        if (!response.Result)
            throw new Exception($"API error: {response.Message}");  // throw, don't break - break would reach the checkpoint delete below

        // Process page
        ProcessEntities(response.Data);

        // Save checkpoint AFTER successful processing
        File.WriteAllText(checkpointFile, pageNumber.ToString());
        Console.WriteLine($"Checkpoint saved: page {pageNumber}");

        // Last page detection
        if (response.RecordCount < 500) break;

        pageNumber++;
    }
    catch (Exception ex)
    {
        Console.WriteLine($"Error on page {pageNumber}: {ex.Message}");
        Console.WriteLine("Checkpoint preserved - can resume later");
        throw;  // Re-throw to stop process
    }
}

// Delete checkpoint when complete
File.Delete(checkpointFile);
Console.WriteLine("Pagination complete!");

Use When: Processing millions of records, overnight jobs, unreliable networks

⚠️

Error Handling in Pagination

Robust patterns for production reliability

Common Pagination Errors

| Scenario | What Happens | How to Handle |
|----------|--------------|---------------|
| Page doesn't exist | Returns recordCount: 0, data: [] | Not an error - treat as last page reached |
| PageSize > 1000 | API caps at 1,000 automatically | Use PageSize=1000 explicitly to avoid confusion |
| PageNumber = 0 | Returns result: false with an error message | Always use PageNumber ≥ 1 (1-indexed) |
| Network timeout mid-pagination | Request fails, progress lost | Use the resumable strategy with checkpointing |
| Data changes during pagination | May see duplicates or missed records | Accept eventual consistency, or filter by a stable CreatedDate range (snapshot filtering) |

Retry Logic Pattern

csharp
// Retry logic for transient failures
public async Task<APIResponse<object>> GetPageWithRetry(
    int pageNumber,
    int maxRetries = 3)
{
    for (int attempt = 1; attempt <= maxRetries; attempt++)
    {
        try
        {
            var filter = new BizEntityFilter
            {
                PageSize = 100,
                PageNumber = pageNumber
            };

            var response = await client.BizEntity.GetAsync(filter);

            if (response.Result)
            {
                return response;  // Success
            }

            // Business logic error
            Console.WriteLine($"Page {pageNumber} error: {response.Message}");
            return response;
        }
        catch (HttpRequestException ex)
        {
            Console.WriteLine(
                $"Attempt {attempt}/{maxRetries} failed for page {pageNumber}: {ex.Message}"
            );

            if (attempt < maxRetries)
            {
                // Exponential backoff: 1s, 2s, 4s
                var delay = TimeSpan.FromSeconds(Math.Pow(2, attempt - 1));
                Console.WriteLine($"Retrying in {delay.TotalSeconds}s...");
                await Task.Delay(delay);
            }
            else
            {
                throw;  // Max retries exceeded
            }
        }
    }

    throw new InvalidOperationException("Unreachable - the retry loop always returns or throws");
}

Important: Data Consistency During Pagination

The API does not provide pagination cursors or snapshots. If data changes while you're paginating (new records added, records deleted), you may encounter duplicates or miss records. For critical operations, filter by a stable date range (e.g., CreatedDate.lessThan=2024-01-01) to create a consistent snapshot.
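A sketch of pinning such a snapshot: capture the cutoff once before the loop and include the same bound in every page request (the cutoff value here is illustrative):

```csharp
using System;

// Freeze the snapshot boundary BEFORE paginating so records created
// mid-run cannot shift pages underneath you.
var cutoff = new DateTime(2024, 1, 1, 0, 0, 0, DateTimeKind.Utc);
var pageSize = 100;
var pageNumber = 1;

// Same CreatedDate.lessThan bound on every page request
var url = $"/BizEntity?CreatedDate.lessThan={cutoff:yyyy-MM-ddTHH:mm:ss}Z" +
          $"&PageSize={pageSize}&PageNumber={pageNumber}";
Console.WriteLine(url);
// /BizEntity?CreatedDate.lessThan=2024-01-01T00:00:00Z&PageSize=100&PageNumber=1
```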

Best Practices

Guidelines for effective pagination

Recommended

  • Choose appropriate page size based on use case (25-50 for UI, 500-1000 for batch processing)
  • Check recordCount < PageSize to detect last page reliably
  • Combine with filters to paginate only relevant records (filter first, then paginate)
  • Add delays (100-200ms) when looping through many pages to avoid rate limiting
  • Handle empty results gracefully (recordCount = 0 means no matches, not an error)
  • Use consistent page sizes throughout pagination loop (don't change mid-loop)

Avoid

  • Requesting PageSize > 1000 (API caps it at 1,000 automatically - request 1,000 explicitly instead)
  • Using PageNumber=0 (1-indexed, will cause errors)
  • Changing PageSize mid-pagination (causes duplicate or skipped records)
  • Assuming fixed total count (use recordCount detection pattern instead)
  • Infinite loops without recordCount check (always check for last page)
  • Paginating without filters on very large datasets (slow, unnecessary)

What's Next?

Continue your journey with these related concepts: