The Launchpad serves as the central hub for all HotWax Commerce apps, providing easy access to development, user acceptance testing (UAT), and production versions of the applications all in one place.
Discover how HotWax Commerce's Job Manager App streamlines order, product, and inventory operations with its workflow management features.
Discover the two methods for order creation on Shopify: self-service order placement and assisted order placement.
Customers have two options for placing orders:
Self-Service Order Placement: Customers can browse the Shopify storefront, select products, provide shipping and payment details, and complete the order independently. Upon successful payment, customers receive an order confirmation email with a unique order ID and delivery information.
Assisted Order Placement: Customers can call a Customer Service Representative (CSR) for assistance. The CSR collects product details, creates a customer profile, and can add additional information like delivery dates or packaging statuses directly from the Shopify admin panel. Payment can be processed within Shopify or marked as paid if already completed externally.
Learn how HotWax Commerce seamlessly imports orders from Shopify, ensuring users have access to real-time order information.
HotWax Commerce periodically checks for new orders on Shopify, ensuring that users have access to the most up-to-date order information.
Furthermore, you can utilize features such as scheduled jobs for importing new orders and updating order statuses. The Job Manager app allows users to set up automated tasks, such as importing new orders at specified intervals and updating order statuses based on predefined parameters. This ensures that orders are processed promptly and accurately.
Step-by-Step Usage Instructions:
Setting Up Scheduled Job for Importing Orders:
Log in to your HotWax Commerce user account and navigate to the Job Manager app.
Select the Orders option from the menu and navigate to the Import Orders job to create a new scheduled job for importing orders.
Configure the job settings, including the frequency of import (e.g., daily, hourly) and any specific parameters required for the import process.
Save the job settings to activate the scheduled import of orders from Shopify to HotWax Commerce.
Automatically Approving Orders:
Set up a scheduled job named Approved Orders within the Job Manager app.
Configure the job to check the approval status of Shopify orders based on parameters defined by Shopify Merchants.
Save the job settings to enable automatic approval of orders in HotWax Commerce.
Updating Orders:
Create a scheduled job for updating orders within the Job Manager app.
Configure the job settings to run at specified intervals to ensure that order information remains current.
Save the job settings to activate automatic updates for orders in HotWax Commerce.
This document outlines the guidelines and best practices for using Shopify, OMS Instances, and Launchpad Apps within Hotwax Commerce. These guidelines are designed to ensure data integrity, security, and optimal system performance.
Order Creation: Do not create draft orders in the DEV-OMS or any Production Instance.
Webstore Usage: Never use your email address, credit card details, or phone number on the webstore. Use a fake address instead.
Shopify Data: Do not edit existing data on Shopify Shop and Product Store. If you create any Shopify Shop and Product Store, delete it carefully if necessary.
Draft Orders: Always add a shipping address and a valid customer name when creating draft orders, and avoid using special characters (e.g., #$%&*) in the email address.
Webstore URL: Double-check the webstore URL; it usually includes a "sandbox" for UAT instances.
Login Credentials: Use only your credentials. Avoid using hotwax.user credentials. If your user is not created, create the user first and then use those credentials to perform any changes in OMS.
Dummy Data: Do not create any dummy data on the dev or demo instances, such as using "random" or "test" as a customer name, or a person's name as a facility name.
Documentation: Ensure you read the documentation before using any application.
Web Tools: Carefully access web tools. Do not edit or delete any data unless instructed.
SQL Queries: Do not write any SELECT or ALTER queries in the SQL entity processor.
CSV Format: Double-check the CSV format before uploading it in any instance.
Inventory Recording: When recording inventory from the UI, always use "no variance."
Order Processing: Avoid manually brokering, approving, or importing any order.
UAT Instances: Do not edit any client orders, such as approving, brokering, or fulfilling them manually.
Testing: Do not use the Demo OMS for testing purposes. Always use the Dev OMS for testing.
Login Credentials: Use only your credentials. Avoid using hotwax.user credentials.
Job Execution: Do not run any job continuously. Run the job, wait approximately 5 minutes to see the updated changes, then proceed. Run the bulk import job only after running the relevant create or import jobs.
Job Management: Do not disable any job unless explicitly instructed to do so. If you do disable one, reschedule it afterward.
User Creation: Ensure there are no spaces in the username when creating any user.
Security Groups: Do not create or edit any security group permissions.
Parking: Do not archive any parking.
Facility Creation: Do not create any facility unless instructed. Follow proper naming conventions, and avoid using your own name as a facility name.
User Permissions: When creating any user, add the user to the product store and facility with the appropriate permissions.
Order Routing: Do not change the status of any brokering runs.
If you encounter any difficulties, please ask your mentor for guidance before proceeding.
Use the production instance with extreme care. Do not edit any existing data. Access production web tools only if necessary; otherwise, avoid using them. Do not run SQL queries on the production instance unless explicitly instructed to do so.
These ChatGPT prompts are designed to enhance the communication and support capabilities of HotWax Commerce. By providing structured guidelines for creating social media posts, blog entries, product updates, user manuals, and troubleshooting documents, the prompts ensure clear, concise, and professional messaging tailored to the needs of eCommerce businesses. They help HotWax Commerce effectively convey the benefits of their Order Management System, keep users informed about new features and updates, and provide comprehensive support, ultimately improving user experience and operational efficiency.
Craft a compelling LinkedIn post tailored for an audience involved in eCommerce businesses in 200 words. The style should be informative, concise, and solution-focused. Maintain a professional tone, demonstrating expertise in the field, while keeping the content accessible. Highlight recent updates, product releases, or industry insights relevant to the HotWax Commerce Order Management System. Use clear and actionable language, and if applicable, include relevant industry terms and buzzwords for credibility. Avoid excessive excitement, buzzwords, and adjectives, and ensure each post emphasizes the practical benefits of the information shared. Conclude with a clear call-to-action, encouraging readers to explore further details or visit relevant pages.
Create a blog post on the given topic, with a focus on making the content accessible to a general audience. Incorporate insights on the topic while keeping the tone professional and free of jargon. Discuss the challenges and solutions related to the topic, and include real-world examples or anecdotes to illustrate key points. Aim for a well-structured and engaging post with a length of around 1500-2000 words. Feel free to explore different aspects of the topic, ensuring the content remains relevant and informative for readers with varying levels of expertise in the field.
Develop a product update that emphasizes the recent enhancements or features introduced in the HotWax Commerce platform. Highlight the significance of the update in simplifying tasks, improving user experience, or addressing specific challenges faced by users. Ensure that the context behind the update, the problem it resolves, and the solution are provided clearly and concisely with the same flow. Emphasize the benefits gained by users and how the update enhances overall efficiency or functionality within the system.
Explain the significance, benefits, and relevance of the feature within the HotWax Commerce platform. Illustrate its importance in users' tasks and how it contributes to their workflow efficiency.
Offer clear and concise step-by-step guidance on how users can access and effectively utilize this feature within the HotWax Commerce interface. Ensure each step is easy to follow and includes context on why this step is crucial and the typical user who would employ it.
Include guidance on how to address and resolve potential errors or issues relevant to this feature. If specific errors commonly occur during the usage of this feature, provide troubleshooting steps or solutions.
Summarize the overall significance of the feature, highlighting its positive impact on users' tasks and the enhancements it brings to their workflow efficiency within HotWax Commerce.
Create a detailed troubleshooting document to resolve issues between HotWax Commerce and Shopify or other relevant platforms. Ensure the document is clear, concise, and solution-focused, adhering to a professional tone. The document should start with a title clearly stating the issue being addressed and an objective briefly describing the purpose and desired outcome. Set up the context first and then list specific scenarios for the issue. For each scenario, outline step-by-step processes to diagnose and resolve the issue, using subheadings to organize steps logically. Provide detailed instructions on how to verify whether the issue exists and the initial checks to perform, including steps for checking both HotWax Commerce and Shopify (or the relevant platform). Provide clear instructions on actions to take within HotWax Commerce, including navigation to relevant sections. Explain how to identify errors, provide guidance on common issues and their fixes, and include steps for rectifying errors in Shopify (or the relevant platform) and re-running jobs if necessary. Offer instructions to diagnose technical errors and suggest tools or methods non-technical users can employ to understand error messages. Add relevant examples wherever possible.
You are the CEO of HotWax Commerce, an IT/Software company that offers omnichannel order management solutions to retail brands across the globe. Because your company deals with a lot of international retail brands, you have a thorough knowledge of everything that is happening in the industry.
As a retail expert, you regularly share your opinions on various online forums where various experts share their views on certain topics.
On one of those forums, the topic was: (Content of the article)
Now, as a retail expert, you're required to answer the following question: (Question asked in the article)
Note 1: Answer the question in 130 words in very simple English without using any jargon or buzzwords.
Note 2: Make it a personal answer, so use words like "In my opinion" or "I think".
Explore the streamlined process of order fulfillment in HotWax Commerce, from receiving order details to final shipment.
Upon receiving order details from HotWax Commerce, the designated fulfillment facility is promptly notified, initiating the next phase of the order fulfillment process. Store associates receive the order instructions in the Fulfillment app, ensuring seamless communication and coordination: regular orders move to the Fulfillment app, while orders from customers who have opted for BOPIS move to the BOPIS app.
Subsequently, the ordered items move through picking, packing, and shipping.
Navigate to the Open Orders page to view all pending orders sent to this facility for fulfillment. You'll find the total number of queued orders prominently displayed at the top of the page, giving you an immediate overview of the workload.
Orders are listed in a First In, First Out sequence, but you can apply service level agreements based on shipping methods to prioritize fulfillment. To generate a picklist, simply click on the Print Picksheet function.
If you need to assign a new picker to the orders, navigate to the User Management app and create a new user; the Show as Picker option there lets you add the user as a picker.
To reprint the picklist, go to the "In Progress" tab and click on the Print Picklist button located at the bottom right corner of the page. If you need to change the assigned picker, you can do so on the same page by selecting the "Edit Picker" button next to the "Print Picklist" button.
Store managers have the additional option to generate a QR code by clicking on the GENERATE QR CODE button located in the bottom-left corner. Pickers can scan this QR code with their mobile devices to access their picklist directly. Pickers can pick the orders assigned to them in the picking app.
Packing happens in the In Progress tab: once orders are picked and ready to pack, store associates can select the boxes in which the order items will be packed and generate the packing slip and shipping label.
Navigate to the specific order details within the In Progress tab of the Fulfillment App.
Within the order details section, locate and click on the Add Boxes option.
Add the required number of boxes corresponding to the order items, ensuring adequate packaging space without excess boxes. Store associates can choose to pack multiple order items into one box, reducing shipping costs and environmental impact.
Upon adding boxes, store associates can further specify box types for individual order items. Click on the select box option against the order item and navigate through the dropdown menu to select the appropriate option corresponding to each item's size and packaging requirements.
After packing all items and selecting box types, click Pack to update the shipping carrier for shipping label generation with the lowest shipping charges for the selected boxes.
When the order is ready to ship, the packed orders are transferred to the shipping area where carrier partners can pick the orders for delivery.
Go to the Completed tab, where you can see all the orders on a FIFO basis.
Click the Ship Orders function at the top to mark the orders as shipped in bulk. If Ship Packed Orders is enabled, all packed orders will be shipped. Enable the isTrackingRequired setting on shipping methods that should not be shipped automatically unless they have tracking codes.
In the event that the packing slip or shipping label is damaged after packing an order, it can be regenerated from the completed tab.
Managing orders seamlessly from creation to fulfillment on Shopify.
This document explains how to handle orders from when they're first created on Shopify to updating their fulfillment status.
Discover how HotWax Commerce ensures real-time order fulfillment notification, seamlessly updating Shopify with accurate order statuses.
When orders are packed and shipped in HotWax Commerce, the fulfillment status is promptly updated. This real-time synchronization ensures that Shopify receives accurate updates on order statuses, automatically marking them as Fulfilled.
This feature is vital for maintaining transparency and providing customers with real-time visibility into their order status. It also streamlines workflow efficiency by automating the fulfillment process and reducing manual tasks.
Log in to your HotWax Commerce admin dashboard.
Navigate to the Apps section and select Job Manager.
Once in the Job Manager app, locate the menu and click on Orders.
In the Orders section, you'll find a list of orders awaiting fulfillment.
Select the orders that have been successfully packed and shipped.
Look for the option to schedule a job, typically labeled Scheduled Jobs or similar.
Within the scheduled jobs options, locate and select the Completed job.
This action triggers the fulfillment status update for the selected orders.
Double-check the selected orders to ensure accuracy.
Confirm the scheduling of the Completed job.
After scheduling the job, verify that the fulfillment status has been updated successfully.
Check for any tracking information associated with the order, if available.
Log in to your Shopify admin dashboard.
Navigate to the orders section or dashboard where order statuses are displayed.
In Shopify, locate the recently fulfilled orders.
Confirm that the order status has been automatically updated to Fulfilled.
Guide to verify job frequency in HotWax and NiFi.
In HotWax Commerce, two types of services handle data processing: Order Management System (OMS) jobs and NiFi jobs. Understanding the runtime and scheduled frequency of these jobs is crucial for maintaining smooth operations.
OMS jobs manage core functions such as product, order and inventory data imports and exports. When these jobs do not run as scheduled, it can disrupt operations and cause data inconsistencies.
Verify Job Existence:
Navigate to the Job Manager app in HotWax Commerce.
Go to the relevant page (e.g., Orders, Inventory).
Locate the job by its name and description.
Check Job Schedule:
Identify the schedule of the job listed against its name.
Click on the job card to view job details.
Review Job History:
Click on the history text to ensure the job is running at the scheduled frequency.
Napita jobs manage data flows and data transformations between HotWax Commerce and external systems. When these jobs do not run as scheduled, it can affect data synchronization and integration.
Open Relevant Napita Instance:
Access the Napita instance associated with your HotWax Commerce OMS.
Navigate to Parent Processor Group:
Find the parent processor group for your OMS instance.
Drill down through the processor groups to locate relevant processors.
Locate Relevant Processor Groups:
Enter the processor groups until you find the relevant processors with flowfiles.
Check Processor Scheduling:
Identify the first processor with the scheduling expression.
Note the cron-driven expression for the first processor.
Convert and Verify Cron Expression:
Use a cron expression generator or ChatGPT to convert the cron format and verify the schedule; a small verification sketch follows these steps.
Review Processor Settings:
Ensure each processor is set to the correct schedule.
Verify that subsequent processors are time-driven and have the appropriate schedule after concurrent tasks.
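Because misread cron expressions are a common source of scheduling confusion, it can help to expand an expression into concrete run times before comparing it against the expected frequency. The sketch below is one way to do that outside Napita; it assumes a standard five-field cron expression and the third-party Python `croniter` package, and the expression shown is purely illustrative rather than one taken from a real job.

```python
# A minimal sketch for verifying a cron schedule, assuming the standard
# five-field cron format and the third-party croniter package
# (pip install croniter). The expression below is only an illustration.
from datetime import datetime
from croniter import croniter

expression = "0 */4 * * *"  # hypothetical schedule: at minute 0 of every 4th hour

base = datetime(2024, 1, 1, 0, 0)
itr = croniter(expression, base)

# Print the next few run times to confirm the expression matches the
# frequency the job is supposed to have.
for _ in range(3):
    print(itr.get_next(datetime))
```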
Incorrect Cron Expression:
Verify and correct the cron expression using a converter tool.
Ensure the expression matches the desired schedule.
Processor Scheduling Errors:
Check each processor's scheduling settings.
Adjust schedules to ensure proper timing and concurrency.
By following this guide, you should be able to diagnose and resolve job frequency issues within HotWax Commerce, ensuring smooth and efficient data processing.
The ability to create and manage spaces and collections within GitBook is essential for organizing and maintaining documentation. This feature allows users to structure their content efficiently, ensuring that related information is grouped logically. For users who need to manage extensive documentation, such as developers, technical writers, and project managers, this feature is crucial. It ensures that their work is well-organized, easily accessible, and synchronized with source code repositories like GitHub, enhancing workflow efficiency.
Log in to your GitBook account: Ensure you are logged into your GitBook account to access your documentation projects.
View Existing Collections and Spaces: In the left-hand menu, you will see all the existing collections and spaces within GitBook. This step allows you to understand the current structure and decide where new spaces or collections should be added.
Add a New Space or Collection: Click on the Add button next to "Spaces" to expand your documentation by either creating new content areas (spaces) or grouping related spaces (collections).
Create a Space: Provide a Title for your new space. Start writing content by adding pages if you plan to create documentation directly within GitBook.
Configure GitHub Sync: Click on the Configure button at the top right. Synchronizing with GitHub ensures that your documentation is always up-to-date with the latest changes in your codebase.
Enable GitHub Sync: Click on GitHub Sync or GitLab Sync and confirm the action. Enabling synchronization ensures that your GitBook content is in sync with your GitHub or GitLab repository.
Authenticate GitHub Sync: Select the space and click on Connect with GitHub to authenticate. This step links your GitHub account with GitBook, allowing for seamless integration.
Specify Sync Path: Add the correct path by selecting the account, repository, branch, and directory (if any). Use the format ./folder1/folder2 for directories. Ensuring the correct path is crucial for accurate synchronization of your content.
Commit Message Template (Optional): Select a template for commit messages or pull request reviews if required. Using a consistent template helps maintain clarity and uniformity in your version history.
Select Sync Direction: Choose whether you want to sync from GitHub to GitBook (if you write content on GitHub) or from GitBook to GitHub (if you write content on GitBook). This determines the direction of the synchronization process based on your workflow.
Initiate Synchronization: Click on Sync and wait for the synchronization to complete. This final step ensures that your documentation is synchronized and up-to-date.
Verify Synced Pages: The pages will appear as per the summary if you are syncing from GitHub to GitBook. Verifying the synchronized pages ensures that the content is accurately updated and organized.
Share and Publish the Space: Once the content is available on GitBook, click on the Share button. Sharing allows others to access your documentation, enhancing collaboration and information dissemination.
Navigate to Publish to the Web: Navigate to the Publish to the Web button and turn the toggle on for Publish Space to the Web. Publishing to the web makes your documentation publicly accessible.
Customize and Save the URL: A custom URL will appear, which you can change as per your preference and save the URL link. A customized URL ensures that your documentation is easily accessible and memorable for users.
Access the Published Pages: The GitHub pages will now appear on the web with the custom URL you set. This final step ensures that your documentation is publicly available and can be accessed by anyone with the URL.
By following these steps, users can efficiently create, manage, and publish their documentation within GitBook, ensuring a streamlined and organized workflow.
In HotWax Commerce, jobs are pivotal for executing various operations like data importing or exporting. Each job comprises two essential components: service and runtime. The service component determines which service will execute the job operation, while the runtime component configures job details such as scheduling and frequency. Services are stored in the service engine, while runtime data resides in a separate entity known as runtimedata.
The combinations of runtime and service are stored in the jobsandbox entity, which creates a draft job for every scheduled job, serving as a benchmark copy for new job runs.
One of the issues encountered in HotWax Commerce Job Management is the occurrence of runtime configuration errors, disrupting the scheduling and execution of jobs. These errors stem from the corruption of runtime data, resulting in incorrect job configurations and subsequent failures during execution.
Troubleshooting Steps:
Access Webtools: Navigate to the webtools section of your specific HotWax Commerce instance.
Open Entityengine: Within webtools, locate and click on entityengine.
Find Jobsandbox Entity: In the entityengine menu, locate the jobsandbox entity.
Search for Job: Use the search functionality within the jobsandbox entity to find the job with the relevant jobid. Set its status as service_draft.
View Job Details: Once you've located the job, you'll see a table displaying various details, including the runtime ID of the job.
Navigate to Runtimedata: Return to the entityengine menu and search for the runtimedata entity.
Check XML Content: Within the runtimedata entity, locate the entry corresponding to the runtime id of the job. Check the XML content associated with it.
Modify XML Data: Using appropriate tools, extract the XML data and remove any extraneous content or rich formatting (a sketch of this cleanup appears after these steps). You can utilize generative AI tools like ChatGPT for this purpose.
Update Runtime XML: Paste the refined XML data back into the runtimeid field of the job entry within the jobsandbox entity.
Save Changes: Confirm the changes and save the updated runtime data.
Schedule the Job: With the runtime data corrected, you can now successfully schedule the job without encountering any runtime configuration errors.
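As a companion to the "Modify XML Data" step above, the sketch below shows one way to strip extraneous text from pasted runtime XML and confirm the result is well-formed before saving it back. It assumes the corruption is stray text or rich formatting wrapped around an otherwise valid XML document; the element and attribute names are invented for illustration and will differ from the actual HotWax Commerce runtime data.

```python
# A minimal sketch of cleaning pasted runtime XML before saving it back,
# assuming the corruption is extra text wrapped around well-formed XML.
# Element and attribute names below are hypothetical.
import xml.etree.ElementTree as ET

def clean_runtime_xml(raw: str) -> str:
    # Drop anything before the first '<' and after the last '>',
    # which removes stray rich-text characters around the document.
    start, end = raw.find("<"), raw.rfind(">")
    if start == -1 or end == -1:
        raise ValueError("No XML markup found in the pasted content")
    candidate = raw[start : end + 1].strip()

    # Re-parse and re-serialize to confirm the result is well-formed XML.
    root = ET.fromstring(candidate)
    return ET.tostring(root, encoding="unicode")

if __name__ == "__main__":
    pasted = "stray note  <runtime-fields><field name='exampleParam'>1</field></runtime-fields>  trailing text"
    print(clean_runtime_xml(pasted))
```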
By following these steps, users can efficiently rectify runtime configuration errors within the HotWax Commerce platform, ensuring uninterrupted job scheduling and smooth workflow operation.
Napita is a data integration tool designed to automate data flow between systems in real-time. It provides an intuitive interface for designing, controlling, and monitoring data flows, making it ideal for simple data ingestion tasks and complex data transformation scenarios. HotWax Commerce OMS relies on Napita as a pivotal component for seamless communication with external systems like Netsuite ERP, enabling smooth data exchange and integration within the ecosystem.
Discover how to effectively manage Napita's Processors' operations using the Scheduling tab, influencing data flow within the platform.
The Scheduling tab within the Processor Configuration dialog in Napita offers crucial settings for managing how a Processor operates, impacting the flow of data within the platform. Let's break down its significance and provide step-by-step instructions on how users can utilize this feature effectively.
Step-by-Step Usage Instructions:
Accessing the Scheduling Tab:
Right-click on the Processor within Napita.
Select the Configure option from the context menu. Alternatively, double-click on the Processor.
Navigate to the Scheduling tab within the Configuration dialog.
Selecting Scheduling Strategy:
Choose a scheduling strategy based on processing needs:
Timer Driven: Timer Driven scheduling operates by scheduling the Processor to execute at regular intervals. This straightforward approach is suitable for tasks requiring periodic processing, such as batch data updates or routine maintenance activities. Users can configure the timing of execution using the Run Schedule option, defining the frequency at which the Processor operates based on predefined intervals.
Event Driven: For scenarios demanding real-time responsiveness and dynamic processing, the Event Driven scheduling mode presents an experimental yet intriguing option. In this mode, the Processor is triggered to run by specific events, typically initiated when FlowFiles enter connections linked to the Processor. While offering potential benefits in terms of real-time data handling, users should exercise caution with this mode, as its experimental nature means it may not be supported by all Processors and could introduce unpredictability into production environments.
CRON Driven: The CRON Driven scheduling mode provides the utmost flexibility, enabling users to define precise scheduling patterns using CRON expressions. This approach is particularly well-suited for complex scheduling requirements where specific timing and periodicity are essential. With CRON expressions, users can specify intricate schedules, encompassing various time intervals and patterns for Processor execution. However, it's important to note that the CRON Driven mode introduces increased configuration complexity compared to the other scheduling strategies, requiring users to understand the intricacies of CRON syntax. A few example expressions are sketched after these configuration steps.
Configuring Concurrent Tasks:
Determine the number of threads the Processor will use simultaneously with the Concurrent Tasks option.
Increasing this value can enhance data processing speed but may impact system resources.
Defining Run Schedule:
Specify how often the Processor should run:
For Timer-driven strategy: define a time duration (e.g., 1 second, 5 minutes).
For CRON-driven strategy: refer to CRON expression format for scheduling details.
Managing Execution:
Choose between All Nodes or Primary Node for Processor execution.
All Nodes schedules the Processor on every node in the cluster, while Primary Node limits it to the primary node only.
Adjusting Run Duration:
Slide the Run Duration slider to balance between lower latency and higher throughput.
Prioritize lower latency for quicker processing or higher throughput for more efficient resource utilization.
Applying Changes:
After configuring settings, click Apply to implement changes or Cancel to discard them.
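To make the CRON Driven strategy more concrete, the short sketch below lists a few generic Quartz-style expressions of the kind NiFi accepts, where the six fields are seconds, minutes, hours, day of month, month, and day of week. These are illustrative examples only, not schedules taken from any HotWax Commerce flow.

```python
# Illustrative CRON expressions in the Quartz-style format used by NiFi's
# CRON Driven strategy (seconds, minutes, hours, day of month, month,
# day of week). Generic examples only.
EXAMPLE_SCHEDULES = {
    "0 0/5 * * * ?": "every 5 minutes, on the minute",
    "0 0 13 * * ?": "once a day at 1:00 PM server time",
    "0 30 2 ? * MON-FRI": "2:30 AM on weekdays only",
}

for expression, meaning in EXAMPLE_SCHEDULES.items():
    print(f"{expression:20} -> {meaning}")
```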
Support requests occasionally arise where orders remain stuck in a store’s queue, even after being fulfilled by the store itself or another facility.
These issues are typically reported by stores to the customer support team. The team then escalates the issue to HotWax support for resolution, as the affected orders remain in the store queue despite being fulfilled. Consequently, the store continues to see these orders in their queue, even when there are no active orders assigned to them.
Orders become stuck because they are repeatedly rejected and reassigned (re-brokered) to the same facility.
Log in to WebTools and navigate to the Service Engine.
Search for the service: deleteSolrDocumentByQuery.
Schedule the service with the following parameters:
coreName: enterpriseSearch
query: docType:OISGIR AND orderId:[HcOrderId] AND orderItemSeqId:[orderItemSeqId]
Note: Replace orderId and orderItemSeqId with the relevant values for the specific order you are addressing.
Execute the service.
This will delete the OISGIR Solr document and remove the order from the store’s queue.
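For reference, the operation this service performs amounts to a standard Solr delete-by-query against the enterpriseSearch core. The sketch below shows what that request looks like at the HTTP level, assuming direct access to the backing Solr instance; the host and order identifiers are placeholders, and in normal operation you should use the WebTools service described above rather than calling Solr directly.

```python
# A minimal sketch of a Solr delete-by-query at the HTTP level, assuming
# direct access to the Solr instance backing the enterpriseSearch core.
# Host, order ID, and item sequence ID below are placeholders.
import requests

SOLR_BASE_URL = "http://localhost:8983/solr"  # hypothetical Solr host
core = "enterpriseSearch"
query = "docType:OISGIR AND orderId:HC12345 AND orderItemSeqId:00101"  # placeholder values

response = requests.post(
    f"{SOLR_BASE_URL}/{core}/update",
    params={"commit": "true"},       # commit so the deletion is visible immediately
    json={"delete": {"query": query}},
    timeout=30,
)
response.raise_for_status()
print(response.json())
```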
Discover a glossary of Napita terms.
DataFlow Manager (DFM)
In NiFi, a DFM has the authority to manage the flow of data. This includes tasks like adding, removing, and modifying various components within the data flow.
Canvas
In NiFi, the canvas refers to the graphical interface where DataFlow Managers (DFMs) design and visualize their dataflows. It's the workspace where components are added, connected, and configured to create data processing pipelines.
Component
Components in NiFi are the building blocks used to construct dataflows on the canvas. These include Processors, Ports, Connections, Process Groups, Remote Process Groups, Funnels, and others. Each component serves a specific function within the data flow and can be configured to tailor its behavior according to the data processing requirements.
FlowFile
A FlowFile in NiFi represents a piece of data. It consists of two main parts: FlowFile Attributes, which provide context or metadata about the data, and FlowFile Content, which is the actual data being processed.
Attributes
In NiFi, attributes provide metadata or contextual information about the data being processed. Each FlowFile in NiFi carries a set of attributes along with its content. These attributes are key-value pairs that describe various characteristics of the data. Common attributes include UUID (a unique identifier for the FlowFile), filename (a human-readable name for the data file), and path (a hierarchical value indicating the storage location). Attributes play a crucial role in routing, transformation, and decision-making within the data flow.
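To make the attribute/content split concrete, here is a small illustrative picture of a FlowFile; the values are made up for illustration, not produced by NiFi.

```python
# An illustrative (not NiFi-generated) picture of the two parts of a FlowFile:
# a set of key-value attributes describing the data, and the content itself.
flowfile_attributes = {
    "uuid": "0f8e1c3a-2d4b-4c5e-9a7f-1b2c3d4e5f60",  # unique identifier
    "filename": "orders_export.csv",                  # human-readable name
    "path": "./exports/shopify/",                     # hierarchical location
}
flowfile_content = b"order_id,status\nHC12345,approved\n"

print(flowfile_attributes["filename"], len(flowfile_content), "bytes")
```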
Processor
Processors are components responsible for performing actions on FlowFiles, such as listening for incoming data, transforming it, or routing it to different destinations.
Relationship
Each Processor in NiFi has Relationships associated with it, indicating the possible outcomes of processing a FlowFile. These relationships determine where the FlowFile should be routed next.
Connection
Connections in NiFi link components together, allowing the flow of data between them. Each connection has one or more Relationships, and it includes a FlowFile Queue to manage the data being transferred.
Controller Service
Controller Services provide reusable configurations or resources for other components in NiFi. For example, the StandardSSLContextService can be used to configure SSL settings across multiple processors.
Reporting Task
Reporting Tasks in NiFi generate background reports on various aspects of the data flow, providing insights into system performance and activity.
Parameter Provider
Parameter Providers supply external parameters to Parameter Contexts in NiFi, allowing for dynamic configuration of components.
Funnel
A Funnel component in NiFi merges data from multiple Connections into a single stream, simplifying the data flow.
Process Group
Process Groups allow for the organization and abstraction of components within the data flow. They enable DFMs to manage complex dataflows more effectively.
Port
Ports in NiFi provide connectivity between Process Groups and other components in the data flow, facilitating data exchange.
Remote Process Group
Remote Process Groups enable the transfer of data between different instances of NiFi, useful for distributed data processing scenarios.
Bulletin
Bulletins provide real-time monitoring and feedback on the status of components within NiFi, helping DFMs identify issues or concerns.
Template
Templates in NiFi allow DFMs to save and reuse portions of the data flow, streamlining the development process and promoting code reuse.
flow.xml.gz
The flow.xml.gz file stores the configuration of the dataflow in NiFi. It is automatically updated as changes are made and can be used for rollback purposes if needed.
Discover how HotWax Commerce's Order Brokering feature streamlines order fulfillment by intelligently assigning orders to the most suitable fulfillment location.
Within the HotWax Commerce platform, the Order Brokering feature plays a crucial role in optimizing the fulfillment facility for orders. By automatically analyzing order priorities and determining the most suitable fulfillment location based on factors like proximity and inventory availability, this feature significantly enhances workflow efficiency and order management capabilities.
Step-by-Step Usage Instructions:
Log in to your HotWax Commerce launchpad.
Navigate to the Apps section and locate the Job Manager app.
Within the Job Manager app, locate and click on the Brokering option from the menu.
Click on the tab labeled Create New Brokering to access the brokering options for new orders.
A new window will open displaying details of the Order Brokering process, including runtime and scheduling options.
Review the order details, including SLA (service level agreement), and fulfillment criteria.
Once you have reviewed the details, click on the save changes button to initiate the brokering process for the approved orders; the scheduled job will then run. Use the run now feature if you want to run the brokering engine only once.
By following these steps, users can effectively utilize the Order Brokering feature within the HotWax Commerce interface, optimizing order fulfillment processes and enhancing workflow efficiency.
Learn how managing processors in Napita streamlines dataflows. Organize them in groups for efficient monitoring and modification.
The ability to view and manage processors within the Napita platform is crucial for users engaged in managing complex dataflows and ensuring the smooth processing of data between various systems. Processors, as components in Napita, play a vital role in tasks like data ingestion, transformation, and routing. They act as workers, handling incoming data and performing actions on it based on defined configurations.
Organizing processors within process groups and parent process groups offers users a structured approach to managing their data workflows. This feature enhances efficiency by allowing users to easily locate, monitor, and modify processors as needed within Napita. By providing a clear overview of the processors involved in specific functions, users can quickly identify areas for optimization or troubleshooting.
Understanding the hierarchy of processors within process groups and parent process groups is essential for users to grasp the overall dataflow architecture within NiFi. It helps them comprehend how data moves through different stages of processing and where specific actions are performed. This visibility is invaluable for maintaining data integrity and ensuring the reliability of the overall system.
Access Napita Interface: Begin by accessing the Napita interface, where processors are managed. This is typically done through a web browser by entering the URL of your Napita instance.
Locate Parent Processor Group: Within Napita, navigate to the parent processor group associated with the instance you're managing. These groups are often named according to their function or the systems they interact with. For example, you may find a parent processor group named demo-oms for managing data flows related to the demo-oms instance.
Navigate Through Hierarchy: Double-click on the parent processor group to explore its contents. Inside, you'll find process groups organized based on specific functions or tasks, such as data ingestion, transformation, or routing.
Identify Relevant Process Group: Locate the process group that corresponds to the specific function or task you want to manage. For instance, if you're interested in monitoring flow for approved orders, look for the process group labeled Approved Orders Flow.
View Processors: Double-click on the identified process group to view all the processors contained within it. Processors are represented as individual components responsible for performing various tasks on data as it flows through the NiFi system.
The feature to manage Process Groups within Napita is integral for users orchestrating complex data workflows. By offering a multitude of options through the context menu, users gain control over the configuration, monitoring, and optimization of their data pipelines.
Access Process Group Options:
Right-click on the desired Process Group within Napita to open the context menu.
Configure:
Choose this option to establish or modify the configuration of the Process Group, enabling customization according to specific business requirements.
Variables:
Select this option to create or configure variables within Napita, providing flexibility in managing dynamic data processing scenarios.
Enter Group:
Use this option to enter the Process Group and access its contents for configuration or monitoring purposes.
Start/Stop:
Start or stop the Process Group based on operational requirements, ensuring efficient resource utilization and workflow execution.
Run Once:
Execute a selected Processor exactly once, based on configured execution settings. However, this only works with Timer-driven and CRON-driven scheduling strategies.
Enable/Disable:
Enable or disable all processors within the Process Group to control data processing activities and optimize system performance.
View Status History:
Open a graphical representation of the Process Group's statistical information over time, aiding in performance monitoring and troubleshooting.
View Connections:
Navigate to upstream or downstream connections to visualize and analyze data flow within the Process Group, facilitating troubleshooting and optimization efforts.
Center in View:
Center the view of the canvas on the selected Process Group for improved visibility and navigation within the interface.
Group:
Create a new Process Group containing the selected Process Group and any other components selected on the canvas, facilitating organizational management of data workflows.
Download Flow Definition:
Download the flow definition of the Process Group as a JSON file, enabling backup, restoration, and version control of configurations.
Create Template:
Generate a template from the selected Process Group, allowing for reuse and standardization of data processing workflows.
Copy:
Copy the selected Process Group to the clipboard for duplication or relocation within the canvas, providing flexibility in designing data workflows.
Empty All Queues:
Remove all FlowFiles from all queues within the selected Process Group, facilitating maintenance and resource optimization.
Delete:
Permanently delete the selected Process Group, enabling users to clean up outdated or unnecessary components from the system.
Learn how to configure and verify crucial properties like Database Connection Pooling (DBCP) and Secure File Transfer Protocol (SFTP) for efficient data handling in Napita.
Processors are components designed to execute tasks on data within a system's dataflows. They handle tasks like data ingestion, transformation, routing, and interaction. Properties within processors are settings dictating how a processor operates and handles data. These settings allow users to customize processor behavior, including parameters like database connections (DBCP), SFTP details, etc.
Users configure these properties through Napita during processor setup. Verifying processor properties during creation ensures that entered values are acceptable. While additional properties may need configuration based on specific requirements, database connection (DBCP) and SFTP properties are mandatory for processor execution. If a property's value is invalid, the processor cannot be executed or utilized until the value is verified.
Database Connection Pooling (DBCP) within Napita is crucial for efficient management and reuse of database connections. By implementing DBCP, users can enhance workflow efficiency, particularly when using processors like ExecuteSQLRecord and QueryDatabaseTableRecord.
This feature reduces the overhead of creating new database connections for each operation, optimizing resource utilization and improving performance. DBCP streamlines database operations by managing and sharing connections among different processors, reducing the time and resources needed for connection establishment.
Verifying the DBCP service at the parent level ensures consistency and validity of connection properties across different processors, minimizing errors or inconsistencies in the database configuration.
Access Processor Configuration: Right-click on the desired processor (e.g., ExecuteSQLRecord or QueryDatabaseTableRecord) and select Configure to open the Configure Processor window.
Select Database Pooling Service: Within the configuration window, specify the Database Pooling Service in settings related to database connections.
Choose Service from Dropdown: Select the appropriate Database Connection Pooling service from the dropdown menu to manage and reuse database connections efficiently.
Verify Properties: After selecting the Database Pooling Service, verify the associated properties by clicking the Verify Properties button to ensure the correctness of specified values, identifying potential issues or inconsistencies in the database configuration.
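The value of pooling can be illustrated outside Napita as well. The sketch below uses SQLAlchemy's built-in connection pool in Python purely as an analogy for what a DBCP controller service does for processors; the connection URL and pool settings are placeholders, not HotWax Commerce configuration.

```python
# A conceptual analogy for DBCP, using SQLAlchemy's connection pool:
# connections are created once, kept in a pool, and reused by each query
# instead of being re-established for every operation. The database URL is a
# placeholder, not a HotWax Commerce endpoint.
from sqlalchemy import create_engine, text

engine = create_engine(
    "mysql+pymysql://user:password@db-host/oms",  # hypothetical connection URL
    pool_size=5,        # keep up to 5 connections open and ready for reuse
    max_overflow=2,     # allow 2 extra connections under burst load
    pool_recycle=1800,  # refresh connections every 30 minutes
)

with engine.connect() as conn:  # borrows a pooled connection, returns it on exit
    row = conn.execute(text("SELECT 1")).scalar()
    print(row)
```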
The SFTP (Secure File Transfer Protocol) service in Napita facilitates secure file transfer between the platform and remote servers. By using SFTP, users can exchange files securely with external systems, ensuring data integrity and confidentiality. SFTP enables seamless and secure file transfer operations within HotWax Commerce. Whether retrieving files from remote servers or uploading files securely, SFTP provides a reliable method for data exchange with external systems. This is relevant for integrating HotWax Commerce with other systems or performing data exchange operations with external partners. Verifying SFTP properties confirms that connection details are correctly configured, preventing data corruption or loss during file transfers.
Access Processor Configuration: Right-click on the processor associated with SFTP operations (e.g., GetSFTP or PutSFTP) and select Configure to open the Configure Processor window.
Enter SFTP Properties: Locate the fields for SFTP properties in the configuration window, including Hostname, Port, Username, Password, and Remote Path. Input relevant values for each property based on file transfer requirements.
Verify Properties: Once necessary SFTP credentials are entered, verify properties by clicking on the Verify Properties button to ensure correct and valid configuration.
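If you want to sanity-check SFTP connection details before entering them in the processor, a small script can confirm that the host, credentials, and remote path are reachable. The sketch below uses the third-party Python paramiko library; the hostname, credentials, and path are placeholders, not real HotWax Commerce values.

```python
# A minimal sketch for sanity-checking SFTP connection details outside Napita,
# using the third-party paramiko library (pip install paramiko). All values
# below are placeholders.
import paramiko

HOSTNAME = "sftp.example.com"
PORT = 22
USERNAME = "integration-user"
PASSWORD = "change-me"
REMOTE_PATH = "/exports/orders"

transport = paramiko.Transport((HOSTNAME, PORT))
try:
    transport.connect(username=USERNAME, password=PASSWORD)
    sftp = paramiko.SFTPClient.from_transport(transport)
    # Listing the remote path confirms both the credentials and the directory.
    print(sftp.listdir(REMOTE_PATH))
    sftp.close()
finally:
    transport.close()
```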
The Bulletin feature in Napita provides users with real-time notifications about the status and events occurring within the data flow. This feature significantly enhances users' ability to track the health and performance of their data pipelines, enabling them to promptly address any issues or concerns.
Significance and Benefits:
Real-time Monitoring: The Bulletin feature offers users immediate visibility into events and issues happening within their data flow, allowing for proactive monitoring and management.
Enhanced Visibility: By displaying bulletins at both the component and system levels, users gain comprehensive insights into the status and health of their data flow, empowering them to make informed decisions.
Troubleshooting Assistance: Bulletins provide valuable context and information about warnings, errors, and other noteworthy events, facilitating efficient troubleshooting and problem resolution.
Customizable Alert Levels: Users can configure the bulletin level to suit their monitoring needs, ensuring they receive notifications for events of specific severity levels, such as warnings and errors.
Step-by-Step Usage Instructions:
Accessing Bulletin Settings:
Navigate to the Processor Configuration dialog by selecting the desired processor.
Click on the Settings tab within the Processor Configuration dialog.
Configuring Bulletin Level:
Scroll down to locate the Bulletin level option.
Choose the desired bulletin level (e.g., DEBUG, INFO, WARN, ERROR) based on your monitoring requirements.
This setting determines the minimum severity level of bulletins that will be displayed in the User Interface.
Monitoring Bulletins:
Observe the bulletin icons displayed on components in Napita.
Hover over the icon with your mouse to view a tooltip providing details such as the time, severity, message, and node (if clustered) associated with the bulletin.
Viewing System-Level Bulletins:
Check the Status bar near the top of the page for system-level bulletins.
Hover over the system-level bulletin icon to view relevant information.
Accessing the Bulletin Board Page:
Open the Global Menu.
Select the Bulletin Board Page to view and filter bulletins from all components.
By following these steps, users can effectively utilize the Bulletin feature within Napita to monitor their data flow and ensure smooth operation.
With the Bulletin feature, Napita users can maintain the reliability, performance, and efficiency of their data pipelines by staying informed about critical events and taking proactive measures to address them.
In Napita, a queue acts as a temporary storage buffer that facilitates the seamless transfer of data between processors within a data flow. As data moves from one processor to another, it is temporarily stored in these queues, allowing for efficient management of data flow and ensuring smooth processing. While queues offer several advantages, they may occasionally require maintenance to ensure the integrity and efficiency of the data flow.
One common scenario that necessitates attention is when a processor fails due to a corrupted file present in the queue. For example, consider a data flow where one processor retrieves files from an SFTP location and subsequent processors process them further. If a file retrieved from the SFTP location has an invalid file format, it can prevent the subsequent processor from executing successfully. In such instances, simply replacing the invalid file on the SFTP location may not suffice, as the processor will still attempt to process the previous file, resulting in repeated failures. To address this issue effectively, it becomes essential to empty the queue containing the invalid file and replace it with a valid file from the data source. By doing so, the data flow can resume its operation with the latest and valid data, ensuring accurate processing and preventing further disruptions.
Identify the Queue: Determine which queue in your data flow is holding the invalid file. This queue is typically located between the processor that retrieved the file and the subsequent processor that failed to execute.
Empty the Queue:
Right-click on the queue located before the failing processor.
Select the "Empty Queue" option to remove the corrupted file from the queue.
Re-run the Processor:
Right-click on the processor located prior to the emptied queue.
Choose the "Run Once" option to execute the processor again and list a new file from the source (e.g., SFTP location).
This action ensures that the latest file is listed in the queue for processing.
Verify Queue Contents:
To verify that the queue is now holding the correct file, right-click on the queue.
Select the "List Queues" option. This will display all files currently listed in the queue.
Click on the eye icon next to the file name to view and verify the data of the latest file.
Process the File:
After confirming that the correct file is in the queue, right-click on the subsequent processor that processes the file and click on the Run Once button.
Ensure that the file is successfully processed by monitoring the processor's status and logs.
Schedule Processors:
Once the file is successfully processed, you can schedule the processors in your Napita flow as needed.
Verify that the data flow is functioning as expected by monitoring subsequent data processing steps.
By following these troubleshooting steps, you can effectively manage queues in Napita and ensure smooth data processing within your workflows, addressing issues such as invalid files promptly and efficiently.
Hotwax Commerce uses two platforms for bug reporting: GitHub (primarily for front-end issues) and ClickUp (for all other issues). Here’s a detailed guide on how to use each platform:
Log in to your GitHub account.
Navigate to your repository.
Go to the "Issues" section.
Click on the "New issue" button in the top right corner.
Title: Short and accurate.
Description: Include various elements based on the tag.
For "Bug" label:
Labels: As per the requirements.
bug: Indicates an unexpected problem or unintended behavior.
documentation: Indicates a need for improvements or additions to documentation.
duplicate: Indicates similar issues, pull requests, or discussions.
enhancement: Indicates new feature requests.
good first issue: Indicates a good issue for first-time contributors.
help wanted: Indicates that a maintainer wants help on an issue or pull request.
invalid: Indicates that an issue, pull request, or discussion is no longer relevant.
question: Indicates that an issue, pull request, or discussion needs more information.
wontfix: Indicates that work won't continue on an issue, pull request, or discussion.
For "Enhancement" label:
Current Behavior: Describe the current state of the feature.
Objective of Proposal or Motivation for Adding/Enhancing the Feature: Explain the purpose or need for the enhancement.
Acceptance Criteria: Define the criteria for considering the enhancement complete.
Impact: Describe the impact of the enhancement.
Additional Information: Provide any additional information relevant to the enhancement.
Choose Priority, Size, Milestones, Expected Go Live Date, Severity.
Use milestones to track progress on groups of issues or pull requests in a repository.
Link a branch or pull request by selecting a repository.
ClickUp is generally used for issues other than front-end issues like backend, documentation, etc.
Log in to ClickUp.
Navigate to Your Workspace:
Select the workspace (e.g., Hotwax Commerce).
Go to Your Space, Folder, and List:
Select the relevant space, folder, and list where the task will be added.
Create a Task:
Click on the “Add Task” button.
Fill in Task Details:
Title: Clear and concise title.
Description: Detailed description of the task.
Choose the Task Location:
Specify space, folder, and list (e.g., "Product Management" space, "Documentation" folder, and "Backlog" list).
Select Task Type:
Define the task type (e.g., task, milestone, bug, report).
Additional Options:
Status: Set the task status as open.
Assignee: Assign the task to the relevant person.
Due Date: Set a deadline.
Priority: Indicate priority (e.g., Low, Medium, High).
Tags: Add relevant tags.
Create Sub-tasks:
If necessary, create sub-tasks.
Create the Task:
Click on the “Create Task” button to finalize.
Jam can be used for reporting issues directly in various applications like GitHub, ClickUp, Jira, GitLab, etc. For more details on using Jam, refer to Jam's documentation.
Using clear and structured issue templates significantly improves the quality of bug reports and feature requests, making it easier for the team to address and prioritize them effectively.
Steps to create issues using Jam:
Open plugins/extensions in your browser.
Select the option (e.g., capture Screenshot, Record tab, Record Desktop, Instant replay).
Reproduce the issue or bug with a screen recording.
Authorize the GitHub integration from Jam.
Add details like Issue Title, description, issue info (repository, assignees, labels, milestone).
Click on the "Create issue" button to be redirected to the issue tab.
Jam can be integrated with ClickUp for direct issue creation.
Steps to integrate Jam with ClickUp:
Authorize the ClickUp integration from Jam.
Follow the similar steps as described for GitHub issues using Jam.
This document outlines the Standard Operating Procedure (SOP) for diagnosing and resolving issues where the data exported from HotWax Commerce does not match the required data specifications.
HotWax Commerce uses Napita to transform and export data. If the SQL query in NiFi (Napita) is incorrect, it can result in exporting data that does not meet the client's requirements. This SOP will guide you through the steps to identify and rectify such issues.
Access the Exported Data:
Navigate to the location where the exported data is stored (e.g., SFTP location).
Download and review the exported data file.
Compare with Required Data:
Obtain the data requirements from the client.
Compare the exported data against the required data specifications to identify discrepancies.
Check the Last Sync:
Verify the last sync time to ensure that the latest data has been exported.
Review Recent Changes:
Check for any recent changes in the data requirements or the Napita setup.
Access NiFi:
Log in to the Napita instance.
Locate the Relevant Process Groups:
Identify the parent process groups related to the data export.
Drill down to the relevant root process groups where the data transformation occurs.
Stop the Processors:
Right-click on the Napita canvas.
Stop the processors to prevent further data export during troubleshooting.
Access Parameters:
Select the parameters option to open a new module with all existing parameters of the group.
Search for the SQL Query:
Look for the parameter named source.sql.query.
Review and Modify the SQL Query:
Study the current SQL query to understand its logic.
Modify the SQL query as per the client’s data requirements (a sample query is shown after these steps).
Run the Processors:
Run the processors once to generate a new data export.
Check the results in the SFTP location.
Verify the Data:
Compare the newly exported data with the required data specifications.
Ensure that the data now matches the client's requirements.
Resume Processors:
Restart the processors to resume regular operation, and monitor the first few exports to ensure continued accuracy.
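For reference, the source.sql.query parameter holds a plain SQL statement. The query below is only a hypothetical sketch (table and column names are illustrative); the real query must match the client's data requirements:

```sql
-- Hypothetical export query; tables and columns are placeholders only
SELECT oh.ORDER_ID,
       oh.ORDER_DATE,
       oi.PRODUCT_ID,
       oi.QUANTITY
FROM ORDER_HEADER oh
JOIN ORDER_ITEM oi ON oi.ORDER_ID = oh.ORDER_ID
WHERE oh.STATUS_ID = 'ORDER_COMPLETED';
```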
Discover how Napita's Data Provenance feature enables users to monitor, troubleshoot, and optimize dataflows by tracking the journey of data objects in real-time.
Napita's Data Provenance feature is a critical tool for users involved in monitoring and troubleshooting dataflows. It provides detailed information about the journey of data objects (FlowFiles) as they move through the system, enabling users to track, analyze, and understand data transformations, routing decisions, and processing events in real-time. By offering insights into data lineage, event details, and attribute modifications, Data Provenance empowers users to ensure dataflow compliance, optimize performance, and swiftly identify and resolve issues.
Step-by-Step Usage Instructions:
Access Data Provenance Page:
Right-click on the desired dataflow within Napita.
Select the View Data Provenance
option from the menu.
Explore Data Provenance Information:
In the Data Provenance dialog window, review the most recent Data Provenance information available.
Utilize search and filter options to locate specific items or events within the dataflow.
View Event Details:
Click the View Details
icon (i) for each event to open a dialog window with three tabs: Details, Attributes, and Content.
Review event details on the Details tab, including event type, timestamp, component, and associated FlowFile UUIDs.
Analyze Attributes:
Navigate to the Attributes tab to view the attributes present on the FlowFile at the time of the event.
Optionally, select the Only show modified
checkbox to display only the attributes that were modified as a result of the processing event.
Replaying FlowFiles in Napita empowers users to inspect, troubleshoot, and validate data processing within their workflows. Whether it's verifying the correctness of data transformations or testing configuration changes, the ability to replay FlowFiles provides users with a powerful tool for ensuring the reliability and efficiency of their dataflow.
Step-by-Step Usage Instructions:
Access FlowFile Details:
Right-click on the desired processor within Napita
Select the View Details
option from the context menu.
Navigate to Content Tab:
In the View Details dialog window, navigate to the Content
tab.
Replay FlowFile:
Review information about the FlowFile's content, such as its location and size.
Click the Submit
button to replay the FlowFile at its current point in the flow.
Optionally, click the Download
button to download a copy of the FlowFile's content.
Replay Last Event from Processor:
Right-click on the desired Processor within Napita
Select the Replay last event
option from the context menu.
Choose whether to replay the last event from just the Primary Node or from all nodes.
Learn how to efficiently manage and reuse data flow designs in Napita using Flow Definitions.
Flow Definitions are akin to Templates, referring to reusable data flow components and configurations for saving and reusing in various instances. They enable users to create reusable flow templates, which can be shared, imported, and customized across different Napita instances, fostering consistency, standardization, and reuse of data flow designs.
Users can customize imported Flow Definitions by adjusting parameter values, modifying connections, or adding components to meet specific requirements. Existing flows can be reused either in the same instance or in an altogether new instance.
In Napita, you can download the flow definition by right-clicking on the processor desired for other instances and selecting 'Download Flow Definition'. A JSON file containing the flow definition will be downloaded.
Ensure you download without external services, as their defined schemas for controller services may not suit all flow definitions. For instance, DB connection details are external services that differ for each client, so such details should not be saved in flow templates.
Once the flow definition is downloaded, import it into the processor for reuse in other instances.
Navigate to the parent process group where you want to create a new processor.
Drag Process Group
from the menu to the canvas.
Click the Browse
icon, upload the file, and provide appropriate naming as per the required process flow.
When creating a processor using the flow definition within the same file, it is important to create a new parameter context for the new flow. If the parameter context remains the same, any changes in the parameter will be reflected in the source flow file. To avoid this, follow these steps:
Right-click on the canvas and select Configure to edit the processor configuration. Locate 'Process Group Parameter Context' in the General tab:
Switch to the General
tab. Here, you'll find the option labeled as Process Group Parameter Context.
Click on the dropdown menu next to "Process Group Parameter Context." Scroll down to the bottom of the list where you'll find the option to "Create New Parameter Context." Choose this option to create a new parameter context for the flow.
Add the Name of the Parameter Context. Choose a descriptive name that reflects the purpose or function of the flow to maintain clarity and organization.
Click on Apply
to Save the Parameter Context:
This ensures that any modifications made to parameters within this flow will be isolated to its specific context, preventing unintended effects on other parts of the system.
Parameter contexts manage dynamic values shared across processors or components within a data flow, containing details from the original flow definition. When transferring the flow definition between instances, replace the parent parameter context with the correct parent processor's parameter context for inheritance. Follow these steps:
Right-click on the processor's canvas.
Select parameter
from the options.
Navigate to the Inheritance
tab and remove the parameter contexts of the source processor.
Select the desired parameter context from the left.
Click Apply
to save the inheritance of the parameter context.
Once the parameter context is inherited, you can verify through the following steps:
Right-click on the canvas and select Parameter
from the options.
Navigate to the Parameters tab, where all the parameters of that processor are listed. Parameters with the edit icon belong to that processor, while parameters with an arrow icon belong to the parent processor.
Click on the arrow icon for any parameter. This action will lead you to the parameter context of the parent parameter.
Navigate to the settings page to verify that the correct parameter context is inherited.
While most parameters are inherited from the parent processor groups, some parameters are specific to process groups. The following parameters need to be added to the processors:
Destination Path: This specifies the path for the flowfile where SFTP files will be placed. The destination path property needs to be further added in the remote path of the flow that puts the file in SFTP. Click on configure and add the remote path name.
Feed File Name with Prefix: Here, a meaningful file name with a prefix such as time needs to be added for easy identification.
Source SQL Query: This parameter contains the SQL query required for the processor to perform its action.
Date Time Format: Specifies the date time format for the files. It's crucial for accurate representation.
File Name Extension: Select whether the file is .csv or .json to ensure compatibility with other systems and accurate file reading.
To configure the file name, locate the processor named Update file name.
Right-click and select the configure option. Go to properties and enter the query in the filename field.
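For illustration, the file name is typically built with NiFi Expression Language. The value below is a hypothetical sketch; the parameter names feed.file.name.prefix and file.name.extension are placeholders for whatever your flow defines:

```
#{feed.file.name.prefix}_${now():format('yyyyMMddHHmmss')}.#{file.name.extension}
```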
The Database Connection Pooling (DBCP) service enables efficient and reliable connections to relational databases. It essentially acts as a pool manager for database connections, allowing processors to reuse existing connections instead of creating new ones for each operation. DBCP services are not part of the parameters; therefore, configure the properties and select the DBCP service from the dropdown menu.
The Record Writers service facilitates writing data records to various data storage systems or destinations in a structured format. Since default controller services are removed when downloading flow definitions, configure the record writer property through the following steps:
Right-click on the processor and select Configure.
Navigate to the property Record writer.
Select the appropriate record writer from the dropdown menu.
Configure the record writer service by clicking the arrow against the service. Click on the settings
icon to configure the service.
Navigate to the properties tab and update the service as per your requirements.
Services can only be updated once disabled. Be cautious as disabling the service affects all associated processors. Once the services are updated, right-click on the canvas and select enable all controller services to enable the services.
Once all settings are set up and verified, run the processor to verify:
Right-click on the canvas of the processor group, then click on Start.
All flows will start processing.
If you want to run each flow manually, right-click on the flow file.
Click on Run Once,
and repeat the same for each flow file.
Proper documentation is essential for any GitHub repository as it provides clear instructions, context, and information about the project.
This guide outlines three methods for adding documentation to a GitHub repository: using the GitHub UI, GitHub Desktop, and Git Bash/Terminal. Each method caters to different levels of familiarity with Git and GitHub, ensuring that users can choose the approach that best suits their needs.
The GitHub UI is a highly accessible and straightforward method for adding documentation. It requires no additional software and allows users to perform tasks from any web browser. This method is ideal for users unfamiliar with Git commands, offering a simple, streamlined process for managing repositories.
Fork the Repository (if needed):
Go to the repository page on GitHub.
Click Fork
at the top right to create a copy in your account.
Create a Branch:
Go to your repository.
Click on the Branch: main dropdown.
Type a new branch name and click Create branch.
Add Your Documentation:
Navigate to the directory where you want to add the file.
Click Add file > Create a new file.
Name your file with the .md extension and add your content in Markdown format.
Commit Changes:
On the main page of the repository, above the file list, click Commits.
Choose the option to commit directly to the new branch.
Click Commit new file.
Open a Pull Request:
Go to the Pull Requests tab.
Click New pull request.
Select your new branch as the source branch and your main branch as the destination branch.
Click Create pull request, enter a title and description, and submit the pull request.
GitHub Desktop offers a user-friendly graphical interface that simplifies Git operations, making it particularly accessible for beginners. Its visual representation of changes (diffs) and built-in tools for merging conflicts provide a clear and intuitive workflow, reducing the learning curve associated with Git commands.
Install GitHub Desktop:
Download and install GitHub Desktop from the GitHub Desktop website.
Follow the installation instructions and sign in with your GitHub credentials.
Clone the Repository:
Open GitHub Desktop.
Go to File > Clone Repository.
Select the repository you want to clone and click Clone. You can also open GitHub Desktop directly from GitHub.
Create a Branch:
Go to the Current Branch dropdown at the top.
Click on New Branch.
Enter a name for your new branch and click Create Branch.
Add Your Documentation:
Navigate to the repository directory on your local machine.
Create a new Markdown file (.md) and add your content.
Stage and Commit Changes:
Go back to GitHub Desktop.
You should see your new file listed under Changes.
Write a commit message in the Summary field.
Click Commit to <branch_name>.
Push Changes:
Click Push Origin
to push your committed changes to the remote repository.
Create a pull request through the GitHub web interface.
Using Git Bash/Terminal provides precise control over all Git operations, making it ideal for advanced users who need flexibility and efficiency in their workflows. This method is also highly efficient, as it allows experienced users to perform actions quickly without the need to switch contexts or use additional tools.
Clone the Repository:
You have two methods to get your repository on your local system:
Direct Cloning:
HTTPS: git clone https://github.com/username/repository.git
Replace username
and repository
with your GitHub username and the repository name you want to clone.
Forking the Repository:
Fork the repository to your personal GitHub account.
Clone your fork using the method above.
Create a Branch:
Open your Integrated Development Environment (IDE) or terminal.
Navigate to your project directory.
Create a new branch: git branch <branch_name>
Switch to the new branch: git checkout <branch_name> (or create and switch in one step with git checkout -b <branch_name>)
Add Your Documentation:
Create a new file with a .md extension, for example, README.md or documentation.md.
Add your content to the file.
Stage and Commit Changes:
Add all changes to the staging area: git add .
Commit your changes with a message: git commit -m "Add documentation"
Push Changes:
Push your committed changes to the branch on the remote repository: git push origin <branch_name>
Create a pull request through the GitHub web interface.
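Putting these commands together, a typical end-to-end flow looks like the sketch below; the username, repository, branch name, and file name are placeholders:

```bash
# Clone the repository and move into it
git clone https://github.com/username/repository.git
cd repository

# Create and switch to a documentation branch
git checkout -b add-documentation

# Add the new Markdown file, stage, and commit
git add documentation.md
git commit -m "Add documentation"

# Push the branch to the remote repository
git push origin add-documentation
```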
For more detailed instructions on using Git Bash/Terminal, refer to GitHub's official guides.
The HotWax Commerce Documentation Guidelines provide a comprehensive framework for creating clear, consistent, and accessible documentation.
All Titles and Headers should be written in “Title Case”
Title: Use a clear, concise title that accurately reflects the content.
Headers: Use hierarchical headers (H1, H2, H3) to organize content. Make headers descriptive so that they help readers navigate the document.
Learn more about Gitbook Headings
`#` for Heading 1, `##` for Heading 2, `###` for Heading 3
The TOC should list all major sections and subsections.
Use headings to create a clear structure. This helps users with screen readers navigate the document.
When documenting on GitBook, TOC is indexed on the right side. Make sure to use the correct heading levels. Use H1 for title, H2 for main sections, and H3 for subsections.
For example: GitBook headings will appear on the right side of the documentation as the TOC.
Meta Description: Add meta description with the following format on the top of the page:
---
description: >-
  HotWax Commerce's BOPIS fulfillment app enables retailers to efficiently manage Buy Online Pick-Up In Store (BOPIS) functionality and handover store pick-up orders to customers.
---
Introduction: Brief overview of the document's purpose.
Main Content: Detailed information, instructions, and guidelines.
Conclusion: Summary of key points and any additional resources.
Bold: Use bold to highlight important terms or phrases.
Code: Use monospace font for code snippets, commands, and file names.
Numbered Lists: Use for ordered lists or steps in a process.
Bulleted Lists: Use for unordered lists.
Internal Links: Use relative URLs for links within the same documentation.
External Links: Open external links in a new tab.
Use tables to present data clearly.
Images: Add images directly from the bottom of the GitHub markdown. Use high-quality images. Provide alternative text for accessibility.
Image Names: All images should be named in a specific format: lowercase, with words separated by a hyphen ( - ), such as image-name.
Videos: Embed drive videos where necessary and properly caption them. Copy the public-view Google Drive video link and use the following format to embed drive videos:
{% embed url="(Google Drive URL)" %} caption {% endembed %}
Double Quotes: Use “double quotations” when using any title, such as when documenting order status: “Created”, or “Approved”.
Backticks: Use backticks ( ` ) for highlighting actions, such as when documenting a `job name`, `application`, or `button`, which will render as highlighted text.
Use clear and concise language. Avoid jargon and technical terms where possible. Provide definitions and examples where necessary.
Content should flow naturally and shouldn’t feel heavy or fluffy.
Abbreviations
Write abbreviations in full the first time they appear, followed by the abbreviation in parentheses. Use only the abbreviation thereafter.
Example: Buy Online Return In Store (BORIS).
Words that look like they were AI-generated should not be used repeatedly. Minimize the use of overly dramatic adjectives such as seamless, ensure, ensuring, crucial, essential, critical, game-changer, streamlined, comprehensive.
Use active voice instead of passive voice.
Example: “Install the app” instead of “The app should be installed.”
Use Markdown formatting. Learn more about Markdown here.
Maintain consistency in terminology, tone, and style throughout the document. You can refer to ChatGPT Prompts for consistency in the documents when using ChatGPT.
Use different types of hints to draw your reader’s attention to specific pieces of important information. Here’s markdown for different types of hints:
{% hint style="info" %} Add your content here { % endhint %}
{% hint style="success" %} Add your content here { % endhint %}
{% hint style="warning" %} Add your content here { % endhint %}
{%hint style="danger" %} Add your content here { % endhint %}
When rendered, these hints appear as styled callout boxes containing your content.
Use Annotations: With annotations, you can add extra context to your words without breaking the reader’s train of thought. You can use them to explain the meaning of a word, insert extra information, and more. Readers can hover over the annotated text to show the annotation above the text.
Create an Annotation: To create an annotation, select the text you would like to annotate and click the Annotate option in the context menu. Once you’ve written your annotation, click outside of it to continue writing in the text block.
Provide descriptive alt text for images.
All media should be accessible to readers.
Review and update documentation regularly to maintain accuracy.
Encourage feedback from users to improve the documentation.
BOPIS App
Fulfillment App
Pre-Orders App
Available to Promise App
Job Manager App
Order Routing App
Receiving App
Cycle Count App
Picking App
Import App
Users App
Facilities App
Company App
When writing the full name of an app, such as the Fulfillment App, "A" in "App" should be capitalized.
HotWax Commerce
Shopify (We write e-commerce as “eCommerce”)
Shopify POS
NetSuite
RetailPro
EasyPost
This document aims to resolve the time zone mismatch issue between Shopify and Hotwax Commerce OMS order dates.
If the time zone settings for the instance’s server and scheduled jobs do not match, it can lead to data time discrepancies between HotWax Commerce and Shopify. For example, let’s say a customer placed an order on Shopify today, but after importing, when you look at the HotWax Commerce Sales Orders page, the order date appears as yesterday. The reason behind this discrepancy is a mismatch between the HotWax Commerce instance’s time zone and the scheduling time zone of the Import Orders job.
Check the Time Zones in the Job Sandbox Entity:
Visit: https://{instance}.hotwax.io/webtools/control/FindGeneric?entityName=JobSandbox
Ensure the Recurrence time zone in the Job Sandbox entity matches the instance’s Server time zone.
Check the Instance’s Server Time Zone:
Navigate to: Hotwax Commerce OMS > Dashboard > About
Verify the instance’s Server time zone.
Open the Job Manager Application:
From the launchpad, open the Job Manager
application in Hotwax Commerce.
Navigate to Settings:
Click on Settings within the Job Manager.
Change the Selected Time Zone (if required):
Navigate to the App section within the settings.
Identify the two time zones:
Browser Time Zone
Selected Time Zone
Adjust the Selected Time Zone
to match the instance's Server time zone (if necessary).
Save Changes to the Job (Using JOB_IMP_ORD as an Example):
Navigate to Orders
in the left menu.
Go to New Order
under the import section inside Orders.
Inside the Import Order
box, find and click on SAVE CHANGES
at the bottom right of the same job.
By following these steps, you can correct the time zone settings and resolve the time mismatch issue between Shopify and HotWax Commerce. This will ensure that data is consistent and accurate across both platforms, enhancing the reliability of the order management system.
Guideline on how to handle feature requests by the client.
In Hotwax Commerce, client feature requests are logged and tracked in Jira within the HotWax OMS/Shopify POS Migration
project, ensuring a structured workflow from request to development for continuous platform improvement and client satisfaction.
Login to Jira using your credentials.
Navigate to Projects
on the navbar.
Select the project where clients demand features or enhancements (e.g., HotWax OMS/Shopify POS Migration
project).
Use the Kanban board such as the POSSOMS Board
.
Utilize options like search, users, and filters (e.g., epic, type).
Click on the epic
filter and select the HotWax Enhancements/Feature Requests
epic.
Pick up the issue from the Backlog
and move it to Select for Development
.
Add the link of the internally created ticket to the corresponding Jira issue.
Move the issue from Select for Development
to In Progress
.
This streamlined process ensures that feature requests are tracked, managed, and developed efficiently, integrating both Jira and internal tools like ClickUp and GitHub.
If the data is accurate, resume the processors to restore regular operation.
Before executing any flow, check the connections to ensure credentials are accurate.
Create internal tickets in HotWax Commerce using ClickUp and GitHub. For more .
Duplicate products may appear on the View Sales Order page when the same variant product ID is associated with two different parent product IDs. This issue causes confusion in the sales order management process: an item may be shipped twice because it appears multiple times on the View Sales Order page, and the support team has encountered difficulties canceling the item, as it does not exist in the database.
The merchandising team and support team have identified the issue.
Product Identifier Change: Recently, the product identifier was transitioned from using the original Product ID to a Barcode-based identifier. This change caused issues when the parent product handles or URL endpoints were updated. Specifically, a new parent product was created, and the existing variants were incorrectly linked to both the old and new parent products.
For barcode and SKU identifiers, the system prepends the handle name with a "V_" prefix and uses this value as the internal name for the parent product.
When the parent product's URL endpoint is updated, it no longer matches the internal name stored in the system.
Consequently, when the updated parent product JSON is processed, the system checks for an existing handle of the same name. If no match is found, a new parent product is created, leading to the duplication issue.
This mismatch between the updated handle and the stored internal name causes the system to treat the updated product as a new entity, resulting in multiple associations for the same variants. As a result, products appear multiple times on the View Sales Order page.
Navigate to the View Sales Order
page and identify the duplicate products.
Note down the hcProductId
associated with the variant product showing duplicates.
Run the following SQL query to return the multiple parent product associations linked to the same variant product ID:
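The exact query is not reproduced here; a representative sketch against the ProductAssoc table (assuming the standard OFBiz schema and the PRODUCT_VARIANT association type) is:

```sql
-- List every parent product associated with the duplicated variant
SELECT PRODUCT_ID, PRODUCT_ID_TO, PRODUCT_ASSOC_TYPE_ID, FROM_DATE
FROM PRODUCT_ASSOC
WHERE PRODUCT_ID_TO = '<hcProductId>'
  AND PRODUCT_ASSOC_TYPE_ID = 'PRODUCT_VARIANT';
```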
From this query, identify the duplicate parent product associations for the same variant product ID.
It is recommended to delete the association with the parent product that was created first, as the latest parent product will have the revised name and updated details. Run a MySQL delete query using the PRODUCT_ID and PRODUCT_ID_TO values retrieved from the previous step. This ensures that only one parent product association is removed.
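A hedged sketch of such a delete, using placeholder values, is shown below; run a Select first to confirm the exact row, including its FROM_DATE:

```sql
-- Remove only the association to the older (first-created) parent product
DELETE FROM PRODUCT_ASSOC
WHERE PRODUCT_ID = '<old_parent_product_id>'
  AND PRODUCT_ID_TO = '<hcProductId>'
  AND PRODUCT_ASSOC_TYPE_ID = 'PRODUCT_VARIANT'
  AND FROM_DATE = '<from_date_of_old_association>';
```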
After performing the deletions, check the View Sales Order
page to ensure that the duplicate products no longer appear.
The order status discrepancy between the Find Order page and View Order page is likely due to an issue with Solr indexing, which can occur if the Solr instance is down when the order status is updated.
Go to the WebTools for your OMS instance using the provided sample link: https://{instanceName}.hotwax.io/webtools/control/ServiceList
To index the updated status of an order when Solr is down, or when indexing has not happened on the Find Order page for any other reason, run the createOrdersIndexFromStatus service.
Locate the service: createOrdersIndexFromStatus.
In the service input parameters, set the following:
statusFromDate: Specify the start date/time for the range.
statusToDate: Specify the end date/time for the range.
persist: false
Execute the service.
Time Format: yyyy-MM-dd HH:mm:ss
Example: 2024-01-01 14:00:00
If you want to create a reservation in the order item ship group inventory reservation entity, you need to run the createOISGIRIndexes service
Locate the service: createOISGIRIndexes.
In the service input parameters, set the following:
reservedDateFrom: Specify the start date/time for the range.
reservedDateTo: Specify the end date/time for the range.
persist: false
Execute the service.
In the OMS application, go to the Find Order
section.
Use the previously noted order ID to search for the order.
Confirm the order appears with the updated status.
Check if the status changes are indexed correctly.
Verify that no errors occurred during the process.
If the order is not indexed:
Confirm that the statusFromDate and statusToDate parameters cover the time range of the status update.
Check the Service Engine logs for errors or warnings during the execution.
Verify that the Webtools status update was saved correctly.
If errors persist, escalate the issue to the technical team with the following details:
Order ID.
Service Engine logs.
Time range used for indexing.
Retailers operating globally often receive payments in the customer's local currency, which differs from the retailer's base currency (Shopify shop currency). The Multi-Currency Order and Return Management feature by HotWax Commerce significantly enhances operational efficiency and financial clarity for retailers dealing with international transactions. By automating currency conversion and reconciliation, retailers can manage global sales more effectively, ensuring accurate financial reporting and simplified accounting processes. This feature not only improves the day-to-day operations of retailers but also enhances the overall customer experience, making it a valuable addition to the HotWax Commerce platform.
Accurate Currency Representation:
In HotWax Commerce, the createUpdateOrder
is used to import orders from Shopify. When an order is imported, the presentmentCurrencyUom
field in the OrderHeader
entity is populated with the currency in which the customer made the payment (e.g., MXN for Mexico). This ensures that all transaction details accurately reflect the customer's local currency, enhancing transparency.
Detailed Payment Preferences:
During the order creation process, OrderPaymentPreference
saves the payment preference currency and conversion rate. For example, if the payment preference currency is USD, the converted amount is stored in the currentPaymentAmount
field, while the original amount paid in the local currency (e.g., MXN) is saved in the presentmentAmount
field. This detailed record helps in maintaining clear financial records and simplifies financial reconciliation.
Fallback Mechanism for Exchange Rates:
If the exchange rate is missing during order import, the system uses the UomConversion
to check for conversion details in the Order Management System (OMS). This ensures that even in the absence of an immediate exchange rate, the order can still be processed correctly, minimizing disruptions in workflow.
ERP Sync:
During ERP synchronization, the system retrieves the converted order amount from the OrderPaymentPreference
entity, where it was stored during the order creation process. This converted amount, already in the retailer’s base currency, is synchronized with the ERP system. Additionally, the exchange rate used for the conversion is stored in the OrderPaymentPreference
entity, ensuring accurate financial records.
Order Header Currency Maintenance:
The order header is updated and uses the customer's currency (e.g., MXN) instead of the product store currency. This maintains the integrity of the original transaction details and provides accurate data for both the customer and the retailer.
Enhanced Financial Reporting: For financial reporting and accounting, the feature provides a dual view capability, showing both the original payment currency and the base currency equivalent. The order detail page in HotWax Commerce displays the converted amount, local amount, and conversion rate in the order payment preference section. This dual reporting capability aids in better financial analysis and reporting, crucial for businesses with international transactions.
The automation of currency conversion and detailed recording of payment preferences simplify accounting processes. Retailers can easily reconcile their accounts and generate accurate financial reports.
This Standard Operating Procedure (SOP) outlines the steps required to create a new variance reason in HotWax Commerce, for retailers to record variances in the cycle count app.
Open your web browser.
Navigate to the HotWax Commerce Webtools
login page.
Enter your credentials (username and password).
Click the Login
button to access the Webtools dashboard.
Once logged in, locate the Webtools
section on the dashboard.
Click on the Import/Export
button.
In the Import/Export section, find and click on the XML Data Import
button.
Scroll down to the bottom of the page to locate the input box under the Import File
button.
Enter the XML data in the input box. Use the sample data provided as a template.
Make sure to enter the data between the <entity-engine-xml> tags. The complete structure should look like this:
Replace the enumId, enumName, and description values with the new variance reason details. For example:
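A hypothetical filled-in record (the enumTypeId value remains a placeholder to confirm in your instance) might look like:

```xml
<entity-engine-xml>
    <Enumeration enumId="VAR_DAMAGED" enumTypeId="VARIANCE_REASON_TYPE_HERE" enumName="Damaged" description="Item damaged at the store"/>
</entity-engine-xml>
```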
Ensure the new details accurately reflect the specific variance reason you intend to create.
After entering the modified data in the input box, double-check for accuracy.
Click the Import Text
button to import the data into the system.
After importing the data, verify that the new variance reason has been successfully created.
Go to the Enumeration entity in Webtools to ensure the new variance reason appears as expected.
Following these steps, you can create a new variance reason in HotWax Commerce for recording variances through the cycle count app. Ensure all data entered is accurate and double-check each step to prevent errors during the import process. For any issues or further assistance, please contact the support team.
This SOP outlines the steps required to configure and manage SFTP Retry for Fetch SFTP and Put SFTP processors in Apache NiFi, ensuring adherence to best practices. URL: https://napita.hotwax.io/nifi/
Navigate to Apache NiFi > Processor Group > Fetch/Put SFTP Processor.
Set the comms.failure
relationship to Retry. Configure the following values:
Number of Retry Attempts: 2
Retry Back Off Policy: Penalize
Retry Back Off Duration: 10 min (default)
Penalty Duration: 30 sec (default)
Add a funnel to the Fetch SFTP Processor.
Redirect the following relationships to the funnel:
comms.failure
permission.denied
not.found
Name the connected relationship: SFTP Fetch Fail
.
The relationship name must match exactly
Set the [failure, reject] relationship to Retry. Configure the following values:
Number of Retry Attempts: 2
Retry Back Off Policy: Penalize
Retry Back Off Duration: 10 min (default)
Penalty Duration: 30 sec (default)
Add a funnel to the Put SFTP Processor.
Redirect the following relationships to the funnel:
failure
reject
Name the connected relationship: SFTP Put Fail
.
The relationship name must match exactly
---
Access the SFTP processor where the files are queued.
Redirect the funnel relationships (SFTP Fetch Fail
or SFTP Put Fail
) back to the original processor by connecting the funnel to the respective processor.
This will create a loop to re-run the failures.
Process all the queued files.
Perform this action for both the Fetch and Put SFTP processors as applicable
Once the queue has been processed and cleared, remove the connection between the funnel and the original processor to prevent an infinite loop in case of future failures.
Ensure the queued files are correctly processed after redirection.
Click on the hamburger icon in NiFi's main navigation bar.
Select Summary.
A new pop-up window titled "NiFi Summary" will appear.
Go to the Connections tab.
Search for the relationships "SFTP Fetch Fail" or "SFTP Put Fail" in the list.
Select By Name.
Sort the Queue (Size) column in descending order by clicking the column header.
Click on the Arrow Icon corresponding to the desired relationship to directly navigate to the associated processor.
Review the queued files for the processor and follow the resolution steps mentioned above to ensure proper processing.
Ensure all relationship names and funnel configurations strictly adhere to the specified formats:
SFTP Fetch Fail
SFTP Put Fail
Check for queued files periodically to prevent bottlenecks in data flow.
HotWax Commerce automates pre-order and back-order management by leveraging purchase orders (POs), aiding in the automatic listing and de-listing of products, which is beneficial for retailers with extensive catalogs. POs provide crucial details about item specifics and upcoming inventory arrivals. Retailers can integrate their ERP systems with HotWax Commerce for automatic PO synchronization or use the Import App to manually import PO CSV files.
To know more about how the pre-order process is automated in HotWax refer to this .
In HotWax, we have two relevant entities:
Product Category: Defines different types of product categories (e.g., shop-tops, shop-outerwear, shop-bottoms). The PREORDER_CAT
category is used to manage pre-order products.
Product Category Member: Tracks the members or products in a category. When a new product is added to the PREORDER_CAT
category, a record is created in this entity.
The presell catalog synchronization job synchronizes pre-order products to Shopify whenever a product is added to the PREORDER_CAT category. Here's an overview of the process and some key points to understand:
Hotwax Commerce downloads products from the parent Shopify store.
These downloaded products are initially not associated with any child shops within Hotwax Commerce.
A separate job, known as the Associate products with sub-catalog
job, is responsible for associating these downloaded products with various child shops. This step is crucial for ensuring that the products are associated with the child shops.
There are instances where purchase orders for these downloaded products are imported into Hotwax Commerce before the products get associated with the child shops.
This means that the products exist in Hotwax Commerce, but their association with the child shops is pending.
Auto Refresh Pre-sell Catalog job automatically manages adding or removing pre-sell products from the HotWax Pre-order/Backorder category.
The presell synchronization job is triggered to sync these pre-order products back to Shopify. When a new product is added to the pre-order category, a record is created in the Product Category Member entity.
This job successfully synchronizes the products to the parent Shopify store. However, it fails to synchronize these products to the child shops because they have not yet been associated with the child shops in Hotwax Commerce.
If a product record is created in the Product Category Member entity, the presell synchronization job will not reconsider the product unless there is an update to the record.
This means that even if the product is later associated with the child shops, the synchronization job won't attempt to sync it again unless a change is made to the Product Category Member record.
Login to Webtools - Access the HotWax Commerce webtools.
Search for the Product Category Member Entity - Go to the Entity Engine and search for the Product Category Member
entity.
Enter the Product ID - Input the product ID of the pre-order product that failed to sync to the Shopify child shop.
Update the Record - Click on the view icon next to the record and edit the entry by updating any small detail in the comments field (e.g., adding a full stop).
Updating the record will make the product eligible for the sync job again, ensuring it gets picked up and synchronized with Shopify. By following these steps, we can troubleshoot and resolve issues where pre-order products fail to sync to Shopify due to missing child shop associations.
This document provides steps to resolve the "POS Order Refresh Failure" issue in Hotwax Commerce OMS.
In HotWax Commerce, "POS orders" are downloaded as fulfilled from Shopify. Occasionally, an order is downloaded into the OMS before it is marked fulfilled in Shopify. Retailers must refresh these orders to update their status, ensuring the fulfilled version of the order is downloaded while cancelling the older version.
Orders may fail to refresh due to missing shipping addresses, triggering this error: Could not complete the createOrderContactMech process: The following required parameter is missing: [createOrderContactMech.contactMechId].
Follow these steps to resolve these errors:
Log in to OMS: Use your username and password to log in to HotWax Commerce OMS.
Navigate to Sales Orders: Click the hamburger navbar icon if the left slider is not visible. Go to Sales Orders
under Order Management
in the left slider.
Identify the Order: Find the order exhibiting the error. Open the order by clicking on its ID.
Cancel the Order: On the order page, click the Cancel
button at the top.
Access Webtools’ Entity Engine: Use the link: https://{instance}.hotwax.io/webtools/control/entitymaint.
Remove the Order from Entities:
Open the OrderItem entity to remove the order; a form will open.
Search for the order using the orderID
(enter the orderID in the orderID field of the form).
Hit Enter or click on the search button below the form.
Click on the view option at the start of the orderID under the Search result.
Click on the Delete this value
button at the top of the view under the view value.
Repeat the above steps for each order item.
Remove from OrderHeader and OrderIdentification:
Open the OrderHeader
entity and remove the same order.
Open OrderIdentification
and remove the externalId value.
Log in to OMS: Use your username and password to log in to HotWax Commerce OMS.
Navigate to Import Section:
Go to MDM
> EXIM
in the left slider.
Navigate to Shopify Jobs
and select Import Shopify Order
under the Order Management tab.
Re-import the Order:
Enter the details of the order, including the Shopify Order ID.
Run the job by clicking the Run
button.
Verify Successful Import:
To verify the order's successful import, go to MDM > EXIM.
Navigate to Shopify Jobs and click on Shopify Order MDM
under the MDM tab.
Following these steps will resolve the "POS Order Refresh Failure" issue by addressing the root cause—missing shipping details from unfulfilled Shopify orders. This guide provides a structured approach to diagnosing and rectifying the issue, ensuring minimal disruption to your order management process.
Web Tools are a resource for backend and development teams, providing functionalities for data management, log viewing, data import/export, job execution, and more within an Order Management System (OMS) instance.
Accessing Web Tools is straightforward. Users can conveniently navigate to the following URL in their web browsers: https://user-instance.hotwax.io/webtools. For example, for demo-oms, the corresponding URL would be https://demo-oms.hotwax.io/webtools.
Upon reaching the Web Tools portal, users are prompted to log in using their credentials.
The Entity Engine in HotWax Commerce Web Tools is essential for viewing and managing data. It offers a platform for handling entities within the system, particularly benefiting backend and development teams by enabling efficient data management.
Entity Engine is accessible through the Entity Engine
button on the second row of tabs on the main web tools page. Alternatively, users can also find a list of Entity Engine Tools
directly on the web tools main page.
Click on the Entity Engine
button to be redirected to the Entity Data Maintenance
page.
Use filters such as Group Name
and Entity Name
to find specific records.
An alphabetical list of all entities is also displayed on this page.
Click on the name of a specific entity to view its dataset.
Explore functions and filters to search data within the entity.
For a comprehensive view of all records of an entity, click the Search
button.
Utilize the View Relations
option to explore relationships with other entities. For example, if we consider the entity ‘Facility’, we can see that it is related to entities ‘FacilityGroup’, ‘ProductStore’, ‘Party’ and so on.
The Entity SQL Processor in Web Tools interprets and executes SQL commands, improving viewing and management efficiency by providing users with the capability to execute SQL queries in the system.
Users must always use the `Select` query first, and then use subsequent queries to perform relevant actions. Furthermore, it is recommended to refrain from using `Delete` queries.
Navigate to the Entity Engine
page within Web Tools. Click on Entity SQL Processor
within the Entity Engine
page. Alternatively, users can find the Entity SQL Processor
option under Entity Engine Tools
on the main Web Tools page and click on it.
This opens the Entity SQL Processor
page.
Change the group to 'org.apache.ofbiz’.
Input the required SQL query in the SQL command
field.
If required, use the Limit Rows
function to limit the number of results displayed.
Click on Send
to initiate the execution of the SQL query.
The search results are presented in chronologically descending order, providing users with the output of the executed SQL command.
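For example, a safe, read-only lookup (the table and order ID below are illustrative only) could be:

```sql
-- Look up a single order header without modifying any data
SELECT ORDER_ID, ORDER_DATE, STATUS_ID
FROM ORDER_HEADER
WHERE ORDER_ID = 'SO12345';
```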
The Service Engine
is a useful component for running and managing services within the OMS. This functionality provides users with a platform for searching, running, and scheduling various jobs and services.
Service Engine is accessible through the Service Engine
button on the second row of tabs on the main web tools page. Alternatively, users can also find a list of Service Engine Tools
on the web tools main page.
Clicking the Service Engine
button directs users to the Service Reference
page.
An alphabetical list of all the services is available on this page. Use the alphabets displayed at the top of the page to quickly locate a service by its name.
Click on a specific service to open a dedicated page containing details, in parameters, and out parameters of the selected service.
Depending on the requirements, users can run or schedule a service or job.
The Job List
tab assists users to view and manage jobs associated with the OMS instance. This functionality provides users with a comprehensive view of the job details and their current status, allowing for efficient tracking and management of various tasks within the OMS environment.
The search functionality allows users to find specific jobs by selecting a function from the drop-down menu and entering relevant data. This feature streamlines the process of locating specific jobs within the OMS instance. The search results obtained with this action are displayed below, and the users can click on any particular job to view its details.
Users can also click on the Find
button to view a complete list of all jobs within the OMS instance.
The Schedule Job
tab is a feature that enables users to manage and automate the execution of specific jobs or services within the system.
Clicking on the Schedule Job
tab, directs the users to a page where they can schedule a specific job or service.
Here, users can input details such as date, time, frequency, and more to schedule a job at a specific time and set intervals for repetition.
Additionally, users have the option to check the Run As System
checkbox to execute the job as a system.
A Reader is a type of plugin that allows users to import data from various integrations that clients use for automation, and more.
Clicking on the XML Data Import Readers
button within the Import/Export
tab redirects the users to the XML Import to DataSource(s)
page.
Users should input 'ext-name', where 'name' represents the name of the integration from which data needs to be imported in the Enter Readers
field.
Clicking on the Import
button initiates the data import process into the designated data sources. The Results
section displays file names and a summary of the import process, allowing users to quickly verify the success of the operation.
Logs, accessible through the Logging
button on the second row of tabs in Web Tools, provides users with a location for viewing logs of various actions being performed within the OMS instance.
Users can use 'Command+F' or 'Ctrl+F' within the logs to locate specific logs.
Error logs are highlighted in red and enclosed within a red box, making them easily identifiable.
The Facilities
page enables users to view, search, and manage both physical and virtual facilities associated with the OMS instance.
Click on the Facility
button located on the first row of tabs. This action opens the Facilities
page. Once on the Facilities
page, options are available for searching specific facilities.
In the search section, use the drop-down menu to select a specific function and enter relevant data to filter the facilities.
Click the Find
button to initiate the search based on the selected function and entered data. The search results will be displayed below.
If you click the Find
button without selecting any function or entering data, an alphabetical list of all facilities is displayed as a result.
Click on any facility to view the Edit Facility
page.
On the Edit Facility
page, users can update the details about the selected facility as needed.
There are two methods for adding data into an entity. One approach involves navigating to the entity using the Entity Engine
and utilizing the Create New
button to input data directly into the entity. Alternatively, data can be imported using XML.
Create New
button in Entity Engine
Go to the Entity Engine
and select the specific entity where you want to enter data.
Click on the Create New
button.
Fill in the relevant fields with the data you want to input.
After entering the data, click on the Create
button to create the dataset for the selected entity.
The system will create the relevant dataset and redirect you to the View Value
page for the newly created data.
At the bottom of the View Value
page, find the Entity XML Representation
section. Here, you can view the XML format for the dataset.
After obtaining the XML data format, users can import the data directly from the XML Import to DataSource(s)
page using these steps:
Click on Import/Export
and select the XML Data Import
button to access the XML Import to DataSource(s)
page.
In the Complete XML document
field on the page, insert the data in the correct XML format. The data has to be placed between the <entity-engine-xml>
and </entity-engine-xml>
tags.
Once the XML data is inserted, click on the Import Text
button to initiate the data import process.
Ensure that the XML data adheres to the required format for successful import.
Learn how to configure your database for Tathya, a data exploration and visualization platform that connects to your existing SQL-speaking database or data store.
Tathya is a data exploration and visualization platform that lets users connect to various data sources, explore data, and create interactive dashboards.
Tathya itself doesn't have a storage layer to store your data but instead pairs with your existing SQL-speaking database or data store. So, to be able to query and visualize data from Tathya, you first need to add the connection credentials of your database.
Skip this step if you want to create charts for a project that has a pre-configured database.
Log in to Tathya with your credentials. On the homepage, go to the top-right corner, open Settings, and click on the "Data" menu to access the data source configuration.
In the "Data" menu, select "Databases" to manage your database connections. Click on the "+" button to initiate the setup process.
From the resulting modal “connect a database”, select the type of database you are connecting to (e.g., MySQL, PostgreSQL, SQLite).
We usually use the MySQL database type. Once it is selected, the next step is to enter the essential connection details and credentials to establish a connection with the MySQL server.
The required MySQL credentials include the following fields:
The "host" refers to the network address or hostname of the MySQL server where your database or data source is located. It is the address that Tathya will use to reach the database.
For example, if the database is hosted on a MySQL server with the IP address 172.20.20.40, that would be the host.
The "port" is a specific endpoint on the host machine. The port number is essential for Tathya to know where to communicate with the database service.
Different types of services use different default ports, the default port for MySQL is 3306.
The “database name” is the name of the specific MySQL database you want to connect to.
The “username and password” are associated with the account you want to use for the connection.
The “display name” is how the database will display in Tathya.
Choose a descriptive name for the connection. For example, if a project were named Wasatch Ski, the display name should be “Wasatch Ski OMS”
Database configuration details are usually maintained by the DevOps team so connect with them for any additional information.
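For illustration only, a filled-in connection for the hypothetical Wasatch Ski project might look like the sketch below; all values are placeholders to be confirmed with DevOps:

```
Host: 172.20.20.40
Port: 3306
Database name: wasatchski_oms
Username: tathya_readonly
Password: ********
Display name: Wasatch Ski OMS
```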
You're now ready to connect your database. Click “Connect” in the modal to proceed.
Discover how to create import jobs and manage data flow with HotWax Commerce's Data Manager Configurations.
This guide provides step-by-step instructions for creating a job through HotWax Commerce's Data Manager Configurations. It includes a guide on accessing and adding configurations, connecting with the SFTP server, and creating jobs using the webtools.
Access SFTP
Log in to your HotWax Commerce instance with your user credentials.
Navigate to the hamburger menu.
Select Settings.
Click on the General page and navigate to the FTP Connection Settings section.
Copy the SFTP credentials from there, i.e., host, username, port, and password.
To connect to the SFTP server and manage data transfer, you can utilize FTP software such as FileZilla. Here's how to proceed:
Download and Install FileZilla: If you haven't already, download and install FileZilla from the official website.
Accessing SFTP with FileZilla:
Open FileZilla.
Click on "File" in the top menu, then select "Site Manager."
Click on "New Site" and enter a name for the connection.
Input the Host, Username, Port, and Password obtained from the HotWax Commerce settings into the corresponding fields in FileZilla.
Click "Connect" to establish the connection.
Navigating Remote Site:
After connecting, you'll see the Remote Site section in FileZilla.
Navigate through the directory to locate or create the path where your data will be transferred.
Copying the Path:
Once you've found or created the desired path, right-click on it and select "Copy."
This copied path will be needed for configuring data transfers in subsequent steps.
This step is only essential if you are unsure about the service that will be used to import data
Navigate to the webtools of the HotWax Commerce instance (https://<instance-name>.hotwax.io/webtools)
in your browser (Replace the <instance-name> with the required instance name)
Login to Webtools via your credentials for that instance.
Click on Service Engine and search to find the relevant service. For Example:
Let's say we want to use a service involved in importing the features of the product
Then you can search for the relevant keywords like import or product to find the relevant service
Copy the relevant service name
The OMS Data Manager Configurations page provides a way to effectively manage the flow of data in and out of OMS. In this guide, you'll find step-by-step instructions for adding and editing configurations, as well as integrating SFTP details into the configurations.
Adding a new data configuration in OMS enables users to specify how data is imported and exported.
Navigate to the settings section in the hamburger menu of HotWax Commerce
Click on Data Manager Configurations
to open the configurations page
Click the Add
button on the Configurations page.
In the modal that appears, provide information for fields such as Config ID*, Description*, Import Service* (enter the service here that was copied above), Import Path* (enter the SFTP path that was created above), Export Content ID, Export Service, Export Path, File Name Pattern, and Multi-threading. (To learn more about these fields click here)
Required fields marked with (*) : Config ID* (enter the name according to requirement), Description* (enter a small description about this), Import Service* (enter the service here that was copied above), Import Path* (enter the SFTP path that was created above)
Example:
| Config Id* | Description* | Import Service* | Import Path* |
| --- | --- | --- | --- |
| IMP_PROD_FETR | Import Product Features | importProductFeatures | `<replace-path>` |
Click Add
again to save the new configuration.
HotWax Commerce’s Job Manager App lets you view, schedule, and update job workflows running in HotWax Commerce's Order Management System for orders, products, inventory, and more operations.
To create and view a new job within HotWax Commerce's Job Manager Application, you'll need to access the webtools of your instance and set up the job details as follows:
Log in to the webtools of your HotWax Commerce instance using your credentials.
Click on Entity Engine and search to find the EnumType entity. Filter the result for the Parent Type Id field as ‘SYSTEM_JOB’.
Look for the corresponding Enum Type ID field that resembles your service. From the example above we will take the Enum Type Id as ‘PRODUCT_SYS_JOB’. Since we are using the service for importing product features.
Search for the Enumeration entity in the entity engine and click on the Create New button to create a new record with the required fields Enum Id, Enum Type Id, Description, and Enum Name. Enum Id (enter the enumeration id), Enum Type Id (enter the id that we searched in the above point), Description (enter relevant description), and Enum Name (enter relevant enum name).
For example:
| Enum Id | Enum Type Id | Enum Code | Description |
| --- | --- | --- | --- |
| JOB_IMP_PROD_FETR | PRODUCT_SYS_JOB | `<optional>` | Import product feature |
Search for the Runtime Data entity in the entity engine and click on the Create New button to create a new record with the fields Runtime Data ID (create a new ID here) and Runtime Info (the XML structure described below).
The XML Structure for Runtime Info field (Replace the ENTER_HERE with the configId that was created in the data manager config)
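The runtime info is stored as a serialized map. A minimal sketch, assuming the standard serialized-map XML format used by the OMS, is shown below; replace ENTER_HERE with the configId created in the Data Manager configuration:

```xml
<ofbiz-ser>
    <map-HashMap>
        <map-Entry>
            <map-Key>
                <std-String value="configId"/>
            </map-Key>
            <map-Value>
                <std-String value="ENTER_HERE"/>
            </map-Value>
        </map-Entry>
    </map-HashMap>
</ofbiz-ser>
```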
Search for the Job Sandbox entity in the entity engine and click on the Create New button to create a new record with the fields
Job ID (create a relevant ID)
Job Name (create a relevant job name for the above id)
Pool Id (enter “pool”)
Status Id (enter “SERVICE_DRAFT”)
Parent Job ID (optional)
Service Name (enter “ftpImportFile”)
Run As User (enter “system”)
Runtime Data ID (enter the ID from the runtime entity that was created above)
Max Recurrence Count (enter “-1”)
System Job Enum Id (enter the Enum Id from the Enumeration entity that we created above)
Go to the Job Manager application via Launchpad. Select the category of the job on the left side panel to find that particular job. The job will be visible in the ‘miscellaneous’ section of the job category.
Explore this step-by-step guide introducing Tathya, covering database configuration, chart creation, dashboard setup, automated alerts, reports, user listing, and role assignment.
This document contains a step by step guide that will walk you through the process of configuring a new database, creating charts, dashboards, configuring automated alerts & reports, listing users, and assigning them roles in Tathya.
The Shopify POS (Point of Sale) app is a mobile application developed by Shopify that allows retailers to sell products in-store. It integrates seamlessly with the Shopify online store, providing a unified platform to manage both online and offline sales.
We do not have access to Shopify POS initially; we need to obtain this access from the client for testing purposes. When an order is placed in the Shopify POS app, there are multiple scenarios to consider and various ways the order can reflect in the Order Management System (OMS).
Open the App Store:
Find and open the App Store on your iPhone or iPad.
Search for Shopify POS:
Use the search bar to find the "Shopify POS" app.
Download and Install:
Tap the download icon or "Get" button to download and install the app on your device.
Open the App:
Once installed, open the Shopify POS app from your home screen.
Open Google Play Store:
Find and open the Google Play Store on your Android device.
Search for Shopify POS:
Use the search bar to find the "Shopify POS" app.
Download and Install:
Tap "Install" to download and install the app on your device.
Open the App:
Once installed, open the Shopify POS app from your app drawer or home screen.
Enter Your Shopify Account Credentials to Log In:
Begin by entering your email address.
Then, enter your password.
Receive the Authorization Code:
After entering your email and password, you will receive an authorization code for verification.
Google Authenticator App Required:
To receive the authorization code, you must have the Google Authenticator app installed.
Ensure that the app is logged in with the same email address you entered in step 1.
Select Store:
After successfully entering the authorization code, you will be prompted to select a store.
Choose your Retail Store from the list.
Select a Location:
Choose a location from the available options.
Allow App Permissions:
Grant permissions for location services, cameras, notifications, and Bluetooth.
Stay Updated: Ensure your app is always updated to the latest version for the best performance and features.
By following these steps, you will successfully log in.
After the initial login, you must enter a PIN for subsequent logins. The client provides this PIN, and each time you want to log in, you need to enter it to access your Shopify POS.
Additionally, here is the link for how to place an order on Shopify POS: Shopify POS Help Center
For detailed guidance and troubleshooting, you can refer to the Shopify POS Help Center or contact Shopify support.
A draft order in Shopify is an order created manually by CSRs or store admins. This is useful in various scenarios, such as when a customer wants to place an order over the phone or in person.
This usually occurs when customers directly contact a CSR to place a new eCommerce order on their behalf or request to cancel a previous order and get a new one placed as a replacement.
Draft orders are created to test order flow from Shopify to HotWax OMS.
Make sure that draft orders are not created on production, test, or dev instances; if a draft order is placed on a test or dev OMS, the associated Shopify shop should be shut down.
If specific testing needs to be conducted on UAT, draft orders can also be created there.
Open your web browser and navigate to Shopify Admin.
Enter your credentials to log in to the Shopify admin interface.
Once logged in, you'll land on the homepage.
Navigate to the Orders page to view the complete list of placed orders.
Click on the Create Order button located at the top-right corner.
This action will open a new window for order creation.
Use the product search bar to find products quickly.
Alternatively, browse products by clicking the Browse button and select based on:
Popular Products
Collections
Product Type
Tags
Vendors
If specific product details are not provided to you, you can select a popular product by default; a pop-up window will display parent products and variants with pricing and inventory availability.
Click the Add button to include the selected product.
In the right section under the Customer bar, use the search bar to find an existing customer.
Alternatively, you can create a new customer by clicking "Create a new customer" for new orders.
For new customers, a pop-up window will appear to input customer details such as first name, last name, email, shipping address, etc.
(This step is optional.)
Include notes in the designated section at the top right.
CSRs can use this section to provide insights into the reasons behind the draft order creation.
Shopify offers multiple payment options:
Mark as Paid
Click the Collect Payment dropdown button (bottom-right) to proceed.
Click on Mark as paid
Cash on Delivery (COD)
Click Payment due later to set payment terms, such as:
Due on receipt
Due on fulfillment
Within 7, 15, 30, 45, 60, or 90 days
On a fixed date
Credit Card Payments
Only accept payments via the Bogus Test Payment Gateway for hc-demo (DEMO OMS). Using any other gateway may result in Shopify suspending the store.
When entering credit card details, a pop-up will prompt you to enter the customer’s card information.
Use the following test card details:
Click the Create Order button at the bottom right to confirm and place the order.
Explore the functionality of dashboards in Tathya, where you can organize and interpret data, creating a narrative by combining different types of charts.
Dashboards in Tathya are practical spaces where you can organize and interpret your data. They have the unique capability to tell a story by combining different types of charts to form a narrative.
If you're starting from scratch, you can craft a new dashboard with a set of charts to visualize your information. Alternatively, if you already have charts that tell a story, you can assemble them into a dashboard for a consolidated view.
Prerequisites:
Ensure that you have access to the necessary data sources and have individual slices (charts or visualizations) already created in Tathya.
Your created dashboards will be automatically set to "Draft" status. In this phase, your dashboard is saved but not yet visible to other users. Progress from the draft stage to a published state by clicking on the "Drafts" in the Tathya interface. By converting the dashboard to "Published" status, you enable other users to view and interact with it.
Discover how to grant access to charts in Tathya for seamless collaboration and editing, as well as adding charts to dashboards without visibility issues.
In Tathya, it's essential to grant access to charts to other users to facilitate seamless collaboration and editing processes. Moreover, granting access allows users to add the chart to dashboards without encountering visibility issues. Here's how to grant access:
Navigate to the charts section within Tathya.
Find the specific chart you wish to grant access to.
Go to the chart and locate the "Actions" section, located on the right-hand side.
Click on the "Edit" option within the actions menu.
A popup window will appear, providing various options for chart management. Navigate to the "Access" tab within this popup.
In the "Access" tab, you'll find a section where you can add relevant users who are allowed to alter the chart.
Enter the names or usernames of the users you wish to grant access to. This list should include individuals who may need to edit the chart or add it to dashboards.
Once you've added the relevant users, save your changes. This ensures that the specified users have the necessary permissions to edit the chart and include it in dashboards.
Editing a chart without access rights creates a new chart instead of modifying the original, leading to duplication. Access is crucial to maintain chart integrity and avoid unnecessary duplication. Additionally, lacking access to a chart prevents it from appearing in the list of options when adding charts to an existing dashboard.
Explore additional settings in Tathya's database configuration, including the 'Advance' tab for fine-tuning aspects like SQL Lab and Performance.
Once you have established the connection, you can check the “Advance” tab to access additional settings. These allow you to fine-tune various aspects of the database connection.
The two important configurations here are SQL Lab and Performance:
Enable “Expose database in SQL Lab” to make the database available in SQL Lab and Enable “Allow this database to be explored” to allow database exploration and interaction.
Set the CHART CACHE TIMEOUT. This is a configuration parameter related to how Tathya caches (stores temporarily) the results of chart queries. Caching is a technique used to improve performance by storing and reusing previously fetched data.
The recommendation is to configure the CHART CACHE TIMEOUT value to be at least 500 seconds (about 8 minutes) to prevent the display of outdated or repetitive data in your charts. This means that once a chart is generated and cached, it will be considered valid and displayed without re-querying the data for at least 500 seconds.
Save the configured settings to establish the connection between Tathya and your MySQL database.
Now, proceed to click on the "Test Connection" button to ensure that Tathya can successfully connect to your MySQL database. Address any issues that arise.
Once the test is successful, click "Save" to confirm the configuration and finalize the setup.
Now, you can use the configured MySQL database for data exploration in Tathya.
Databases are configured once for a project.
Learn how to navigate the SQL Lab interface in Tathya and execute SQL queries by selecting the appropriate database and schema.
To effectively navigate through the SQL Lab interface in Tathya and execute SQL queries, follow these steps for selecting the database and schema.
In the top navigation menu, locate and click on the SQL Lab
option. This action will direct you to the SQL Lab interface designed for crafting and executing SQL queries.
In SQL Lab, Tathya provides dropdown menus where you can choose the desired database and schema before writing and executing your SQL queries. This step is crucial for accurately pinpointing the location of your data and ensuring that your queries fetch information from the correct database and schema.
We have access to multiple databases as we are dealing with data from different projects and sources. All the configured databases will be visible here.
Additional information: In MySQL, the term schema is synonymous with database, while in PostgreSQL, schemas represent different categorizations within a project. If your data source supports schemas, selecting the correct schema ensures that your SQL queries target the specific subset of data you intend to analyze.
Utilize various clauses such as SELECT, FROM, WHERE, GROUP BY, and others to shape the logic of your SQL query.
Tathya offers a multi-tab environment, enabling you to work on multiple queries simultaneously.
Based on the output you desire, insert the query here and click the Run Query
button to execute the query.
Upon executing the SQL query, Tathya sends the query to the connected database. The database parses the query, breaking it down into its structural components, checking for syntax errors, and understanding the logical flow.
Result Set Interpretation
Post successful parsing, the database retrieves a result set—a table of data that matches the specified criteria. Tathya then automatically interprets the structure of this result set, analyzing the columns and their data types.
Dynamic Dataset Creation
Leveraging the information obtained from the parsed result set, Tathya dynamically generates a dataset. This dataset mirrors the structure of the result set, capturing the columns and their data types.
Column Mapping
Each column in the result set is mapped to a corresponding field in the dataset. This mapping ensures that the dataset accurately represents the data retrieved by the SQL query.
Ensure the dataset returned aligns with your expectations. This is the dataset that will be used to create the chart.
Now, navigate to Save and then select “Save dataset” from the dropdown menu, so that you can use the same dataset to create multiple charts in future.
There are a number of query errors that can occur due to a misalignment between your query and the database. Some examples include:
Bad Reference: A query can fail because it is referencing a column and/or table that no longer exists in the datasource. You can either modify the query accordingly or remove the column from the query.
Unsubmitted Query: A query will not even be submitted to the database if it is missing required parameters. You should define all the parameters referenced in the query in a valid JSON document.
Once satisfied with the derived output, click on Create Chart.
The ability to create charts based on specific queries within Tathya empowers users to derive actionable insights from their data. By allowing users to visualize specific subsets of data, this feature enhances decision-making processes and enables users to identify trends, patterns, and anomalies efficiently.
Access SQL Lab: Log in to Tathya and navigate to the SQL Lab feature.
Craft SQL Query: Select the database for which you want to create the Query. Write an SQL query with a WHERE clause specifying the desired criteria. For example, to create a chart showing orders created in the past hour:
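A minimal sketch of such a query, assuming a hypothetical order_header table with order_id, order_date, and grand_total columns (the actual table and column names depend on your OMS schema):

SELECT order_id, order_date, grand_total
FROM order_header
WHERE order_date >= NOW() - INTERVAL 1 HOUR;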
Execute Query: Click the Run Query
button to execute the SQL query and generate the dataset.
Save Dataset: Once the dataset is generated, save it by following the prompted steps. This will open a new page with options to customize the dataset.
Add Chart Details: Enter a descriptive name for the chart, select the desired chart type, and configure additional settings as needed.
Create Chart: Click on the Create Chart
button to generate the chart based on the dataset created from the specific query.
Creating charts in Tathya without specific queries is essential for flexible data analysis. This feature allows users to prepare for future data scenarios by initially including all available data, even when subsets may not exist at the time of dataset creation.
Access SQL Lab: Navigate to the SQL Lab within Tathya.
Craft General SQL Query: Select the database and write a general SQL query to retrieve all available data without specific conditions. For example:
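A minimal sketch of such a general query, using the same hypothetical order_header table (adjust the names to your schema):

SELECT order_id, order_date, status_id
FROM order_header;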
Execute Query: Click the Run Query
button to execute the SQL query and generate the dataset containing all available data.
Save Dataset: Once the dataset is generated, save it by clicking on the save button. This will open a new page with options to customize the dataset.
Create Chart: Proceed to create a chart from the dataset to visualize the overall data trends.
Unlock Dataset: Click on the options icon against the dataset's name to open a new form. Unlock the dataset by clicking on the lock icon to make changes.
Add WHERE Clause: If necessary, add a WHERE clause to the SQL query to filter the dataset based on specific criteria once data matching that criteria exists. For example:
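A hedged sketch of the same hypothetical query with a WHERE clause added (the status value is only an illustration):

SELECT order_id, order_date, status_id
FROM order_header
WHERE status_id = 'ORDER_CANCELLED';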
Save Dataset: Save the dataset by clicking on the save button to retain the modifications made.
Update Chart: After saving the dataset, update the chart to reflect the changes made to the dataset and visualize the updated insights.
Creating datasets with empty data is crucial in Tathya. It prepares charts for future data arrivals, enabling users to proactively set up their analytics. By allowing users to create charts with empty datasets, this feature fosters adaptability to changing data conditions.
Create Empty Dataset: Open SQL Lab and create an SQL query that returns no data.
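One simple way to build a query that returns no rows, again using the hypothetical order_header table, is a condition that can never be true:

SELECT order_id, order_date, status_id
FROM order_header
WHERE 1 = 0;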
Save Dataset: Once the query is selected or created, click on the Save Dataset
button to save the empty dataset.
Redirect to Chart: After saving the dataset, you will be redirected to the chart creation page automatically.
Name the Chart: Provide a descriptive name for the chart to identify its purpose or intended data source.
Custom SQL: Under Custom SQL,
paste the names of columns one by one to define the dataset structure accurately.
Save Chart: Click on the save chart button to save the chart configuration for future use.
Learn how to view and save charts in Tathya for insightful data visualization.
You will now be redirected to the “Main Chart Panel” where your chart is ready to view.
When creating charts using SQL queries, Tathya by default provides visualization in a “table” format, but you can customize it based on your requirement.
Navigate to the "data" tab. This is where you can transform the query results into a visual representation.
Here you will encounter two options under the "Data" tab, specifically under the "QUERY MODE" section: "Aggregate" and "Raw Records." These options determine how the data is processed and presented in the resulting chart. Let's delve into each option:
Aggregate Mode Aggregate mode is used when you want to perform aggregate functions (e.g., COUNT, SUM, AVG) on your dataset. It is suitable for summarizing and visualizing data at a higher level.
Aggregation Functions Allows you to apply aggregation functions to your selected columns. For example, you can count the number of records, calculate the sum of a numeric column, or find the average.
Aggregate mode is commonly used when creating charts like bar charts, pie charts, or line charts where you want to visualize summarized information.
Raw Records Mode Raw Records mode is used when you want to retrieve individual, unaggregated records from your dataset. It provides a detailed view of each record.
No Aggregation Functions Does not require the use of aggregation functions. The query retrieves raw, unprocessed records from the specified columns.
Raw Records mode is useful when you need a detailed, record-level view of the data. It's suitable for creating charts that display individual data points without summarization.
How the Raw Mode works:
Column Names The column names in your SQL query result set become the headers or fields in the table. Each column in the result set is mapped to a corresponding column in the visualization.
Data Types Tathya attempts to infer the data types of each column based on the values in the result set. This helps in appropriately formatting and displaying the data.
Automatic Table Creation When you execute the query in "Raw Records" mode, Tathya automatically creates a table or chart with the mapped columns, displaying the individual records retrieved by the query.
Dynamic Mapping The mapping is dynamic, meaning that if your SQL query result set structure changes (e.g., adding or removing columns), Tathya adjusts the mapping accordingly when you execute the query.
No Aggregation Since "Raw Records" mode is focused on displaying individual records without aggregation, each row in the result set is treated as a separate data point.
You also have additional configurable options such as Filters, Ordering, and Row Limit.
Filters Filters allow you to narrow down the rows displayed in your result set based on specific conditions. You can filter the data to show only rows that meet certain criteria.
Ordering Ordering allows you to sort the result set based on one or more columns. You can specify the order (ascending or descending) for each column.
Row Limit Row Limit allows you to control the number of rows displayed in the result set. This is particularly useful when dealing with large datasets, allowing you to view a manageable subset of the data.
Be cautious when using row limits, especially when conducting analysis or reporting. Setting a too-low limit might lead to incomplete insights, and it's essential to balance performance considerations with the need for comprehensive data.
In your SQL queries, you can write specific details and conditions to retrieve data that matches your requirements without the need for additional filters in the chart.
For example, you can use the WHERE clause in your SQL queries to filter data at the database level before it even reaches Tathya. This can be more efficient as it reduces the amount of data transferred between the database and Tathya.
Similarly, you can use ORDER BY in your SQL queries to specify the sorting order of the results, and LIMIT to control the number of rows returned.
Writing precise and optimized SQL queries can streamline the data retrieval process, ensuring that the results align with your expectations without relying heavily on post-processing in the visualization chart.
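A hedged example combining these clauses, with the usual caveat that the table and column names are assumptions:

SELECT order_id, order_date, grand_total
FROM order_header
WHERE order_date >= NOW() - INTERVAL 7 DAY
ORDER BY order_date DESC
LIMIT 100;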
Adjust other settings such as colors, labels, and tooltips as needed from the “Customize” option, present right next to the data tab.
You can also choose the type of chart that best suits your data. (e.g. A Line Chart can be ideal for displaying trends over time, a Bar Chart can be useful for comparing values across different categories, a Pie Chart can be effective for illustrating parts of a whole.)
For example, if you choose a Line Chart, specify the columns you want on the x-axis and y-axis; the date might be on the x-axis, and total sales on the y-axis.
Once you have made the required modifications, click on “Update chart”.
In the top-left corner, give your chart a descriptive name for easy identification within Tathya.
Navigate to the top-right corner and click on the "SAVE" button. A Save Chart panel will appear with the Chart Name field auto-populated.
After saving, find your chart in the Charts tab for quick access.
HotWax Commerce facilitates the exchange of data between systems, but errors can occur during import/export processes, potentially leading to inaccuracies. To address this, the platform provides LogInsights reports, which are stored in Solr-index core. By leveraging the powerful indexing capabilities of Solr, HotWax Commerce enables efficient data retrieval and analysis, supporting performance monitoring, troubleshooting, and reporting activities. These reports offer insights derived from system logs, allowing users to easily identify any data transfer failures.
LogInSight charts can be set with the following steps:
The logInsights
core within HotWax Commerce provides users with valuable insights derived from system logs, facilitating the generation of superset reports. This feature is crucial for users who need to analyze system performance, troubleshoot issues, and make informed decisions based on data-driven insights.
Steps to set LogInsight Core:
Accessing the Search Admin Page:
Navigate to the hamburger menu in the HotWax Commerce interface for the specific instance.
Click on Search Admin
to access the page for managing Solr indexing.
Managing Solr Cores with Core Operations:
Within the Search Admin page, locate the Core Operations
section.
This feature allows users to effectively manage Solr cores, ensuring optimal performance and organization of indexed data.
Refreshing the logInsights Core:
Identify the logInsights
core within the list of Solr cores.
Click on the Refresh Core
button associated with the logInsights core.
This action updates the Solr index with the latest data from system logs, ensuring synchronization with any recent changes or updates.
NiFi, an open-source data integration tool, is utilized to automate data flow between systems in real time. A flow is to be configured in NiFi to filter out failed JSON files from SFTP locations and redirect error-prone data to logInsights core for logging purposes.
After the flow setup, it is imperative to insert dummy data. This step is crucial for querying as fields are dynamically indexed in the Solr core based on the dummy data. The dummy data should be inserted with a docType of TEST
to ensure exclusion from Superset charts.
System administrators can utilize the Solr database creation feature to set up and manage databases for log data, facilitating effective monitoring and troubleshooting of system performance.
Step-by-Step Usage Instructions
Access Settings:
Navigate to the Settings
section within the HotWax Commerce interface.
Select Database Connections:
Within Settings, locate and select the Database Connections
option.
Add a New Database:
Click on Add a New Database
located in the top right corner of the interface.
Choose Database Type:
Under the supported dashboard search bar, select the Others
option.
Name the Database:
Provide a descriptive name for the database under the Display Name
category to easily identify it.
Construct URL:
Create the database URL in the following format: solr://test-oms.hotwax.io:443/search/logInsights?&use_ssl=true&token=<JWT_token>
Generate JWT Token:
Replace Token Placeholder:
Replace <JWT_token>
in the URL with the generated token.
Specify Instance Name:
Write the instance name of the brand for which you are creating the dashboard. For example, if the instance name is test-oms
, input it accordingly.
Test Connection:
Paste the constructed URL as a SQLAlchemy URL and test the connection.
Set Chart Cache Timeout:
In the advanced tab -> performance, set the CHART CACHE TIMEOUT
property to a desired value, such as 10000, to manage cached data effectively.
Connect:
If the connection test is successful, click on the Connect
button to finalize the database creation process.
Creating Solr queries within Tathya requires a different syntax compared to traditional SQL queries used in Tathya dashboards. Here's how you can create Solr Queries:
Define Time Range:
Use the appropriate time syntax to specify the desired time range for the query (a consolidated sample query follows these steps).
Specify Fields:
List each field required in the query's result set, providing aliases if necessary (see the sample query below).
Handle Special Characters:
If any field name contains special characters, enclose it within back-quotes (`).
Order Results (Optional):
If sorting is needed, include the attribute in the select clause and specify the desired sorting order (see the sample query below).
Limit Results:
Ensure you include a LIMIT to restrict the number of returned records, especially for large result sets (see the sample query below).
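As a rough, consolidated illustration of the points above, a query against the logInsights core might look like the sketch below. The field names (orderId, errorMessage, errorDate_dt) are assumptions, and the exact date-math accepted in the WHERE clause depends on the Solr SQL dialect exposed by the database connector, so treat this as a template rather than a working query:

SELECT orderId AS `Order Id`, errorMessage AS `Error Message`, errorDate_dt AS `Error Date`
FROM logInsights
WHERE errorDate_dt > 'NOW-1DAY'
ORDER BY errorDate_dt DESC
LIMIT 100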
Solr Dashboard Creation is a crucial feature within the HotWax Commerce platform, providing users with the capability to visualize data from Solr databases through intuitive charts. This feature significantly enhances users' ability to analyze and interpret data, empowering them to make informed decisions and optimizations within their business operations.
Step-by-Step Usage Instructions:
Navigate to SQL Lab: From the home page, access the SQL Lab section within the HotWax Commerce interface.
Select Solr Database: Choose the Solr database you have created to run the query from. This ensures that the query retrieves data from the correct source.
Write Solr Query: Write the Solr query in the SQL Lab editor to fetch the desired data from the selected Solr database.
Retrieve Data: Once you have formulated the query and retrieved the desired data, proceed to the next step.
Create Chart: Click on the Create chart
button to initiate the chart creation process.
Name and Save Chart: Give the chart a descriptive name that reflects its content or purpose, and save it for future reference.
Add to Dashboard: Optionally, add the created chart to the dashboard of your choice for easy access and visibility alongside other relevant data visualizations.
HotWax Commerce exchanges data with external systems by sending data files over an SFTP server. Various data flows exist between HotWax and other systems. This document will help you assist the client when they cannot see files on the SFTP location, and covers common issues around SFTP.
To learn about the different data flows with external systems and the SFTP locations where HotWax places the files, refer to this document.
To begin resolving the issue, we first need to determine the specific file the client is referring to. Typically, clients provide the name of the file they cannot locate or mention that a particular feed has not been placed at the SFTP.
To troubleshoot the issue, we'll need to access the SFTP server using the FileZilla application. If you haven't already, download FileZilla onto your machine using this link.
Once FileZilla is installed, follow these steps to log in to the SFTP server and inspect the data files:
Retrieve the SFTP user credentials from the OMS
Access the OMS and click on the hamburger menu.
Scroll down to the Settings section and click on it.
Find and click on the General option.
Scroll down the General Settings page to locate the FTP connection settings.
Copy down the SFTP user credentials.
Open FileZilla and use the obtained credentials to log in to the SFTP server.
Once logged in, navigate through the directories to locate the relevant feed based on the information provided by the client. You can use the integration document for the SFTP location. Here is an example of what a file path looks like: /home/{sftp-username}/netsuite/customer/export.
When a file is found in the SFTP archive folder, it indicates that the external system has successfully consumed the file. Communicate this to the client and attach the file and a screenshot for their reference. Files not successfully consumed by the external system will be in the failed folder.
There may be various reasons for files not being exported and placed by HotWax on the SFTP. Possible reasons include:
Failed Export Validations
The data may not pass the required export validations and therefore doesn't get placed at the location for the external system. In this case, contact the integration team for assistance.
Integration Platform Issue
Right-click on the canvas of the processor group in Napita.
Check the status of the processor.
If the processor is enabled, you will see an option to Stop
.
If the processor is not enabled, you will see an option to Start
. These steps will help you determine whether the flow is currently enabled.
After exploring all possible reasons why the file isn't making it to the SFTP, and confirming it's not on our end, we can take the next step by kindly asking the client for their SFTP credentials to delve deeper. We can use the following templated reply:
Could you please share the SFTP credentials you're using so that we can trace the root cause of the issue?
This allows us to log in to the SFTP server with their credentials and identify the root cause of the issue. Sometimes, the problem arises because the credentials they are using are outdated, preventing them from accessing the files. In such cases, we can provide them with new credentials.
Additionally, there are instances where users are unable to upload or download files from the SFTP server due to insufficient permissions. To resolve this, we can contact the admin team to check and adjust the users' permissions as needed.
Generate a JWT token from the OMS using an integration user. Refer to the relevant documentation to learn how to generate the JWT token.
The flow in the integration platform that handles placing files on the SFTP may not be running. Please check with the integration team to ensure the flow is running properly. Alternatively, you can follow these steps:
When you create charts using SQL queries, Tathya lets you use the generated dataset to create new charts. (Dataset serves as the primary source of your data. It contains the information you want to visualize)
This way you can create multiple charts to represent the same dataset in different ways. This is especially beneficial when you want to represent different facets of the data or tailor visuals for specific user groups.
Navigate to the "Charts" section in Tathya.
Choose the Saved Dataset
When creating a new chart, you'll have the option to choose an existing dataset. Look for the option "Datasets" and select the dataset you saved earlier.
Select the Chart Type
Choose the chart type that best suits the insights you want to convey.
Configure the New Chart
Configure the new chart using the selected dataset. You can define metrics, dimensions, and customize the chart settings.
When creating charts directly from SQL queries, columns are pre-mapped based on the query's output. However, using the "Create Chart" option allows you to drag and drop individual columns so that you have fine-grained control over what data is visualized. This is especially useful when dealing with large datasets where not all columns are relevant to every analysis.
Save the New Chart
Once satisfied with the new chart configuration, save it.
Learn how to set multi-day filter in Tathya
The Multi-day Filter
feature in Tathya provides retailers with flexible options to view reports on various frequencies beyond daily reports. This functionality is pivotal for retailers who must analyze their operations on different time scales, such as weekly or monthly data.
By allowing users to set custom date filters, Tathya enhances their ability to derive insights from the platform's reports for their decision-making processes. Whether it's monitoring performance over specific periods, identifying trends, or making strategic adjustments based on historical data, this feature streamlines users' workflows and amplifies efficiency in data analysis.
Access the chart you wish to apply filters to within the Tathya
Dashboard.
For existing charts, the date filter should be removed from the SQL query to ensure that when the user sets the filter view, the data is fetched dynamically for the chart (see the example after these steps).
To remove the date filter, click on the options
overflow menu, and select the edit chart
option. This will open a new page to edit the chart.
Click on the options
menu in the Chart Source
column located on the left and select Edit dataset
from the options.
This will open a new form, click on the lock icon to make changes, edit the query to remove the date filter, and save the changes. Finally, save the chart by clicking the save icon available at the top right.
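As an illustration (the table and column names are assumptions), a chart query that previously hard-coded a date filter, such as:

SELECT order_id, order_date, grand_total
FROM order_header
WHERE order_date >= NOW() - INTERVAL 1 DAY;

would be trimmed to:

SELECT order_id, order_date, grand_total
FROM order_header;

so that the dashboard's Time Range filter can supply the date condition dynamically.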
Once the SQL query is updated, look for the Filter
icon positioned on the left side of the page. Click on it to open a sidebar, presenting options for filter configuration.
In the sidebar, click the + Add/Edit Filters
button. This action will open a new form where you can manage filters.
In the filter form, designate the filter type as Time Range
and provide a descriptive name, such as Date Filter,
to distinguish it.
Add relevant details such as a description for clarity and set the default filter value. For example, you can choose Last Day
as the default value to automatically display data from the previous day when users access the chart.
Once all necessary details are configured, save the filter. The filter will now be available in the sidebar for selection.
With the filter created, choose the desired time range from the available options in the sidebar. This action will adjust the displayed data accordingly, enabling users to analyze information within the specified timeframe.
Setting email frequency in Tathya provides retailers with tailored email reports at their desired frequency, enabling them to efficiently monitor their daily or weekly performance and address any issues promptly. While Tathya offers various date filters for on-screen data viewing, this feature ensures that the time range for email reports remains independent.
Navigate to the dashboard and choose the chart you wish to adjust. Click on the options icon at the top right corner of the chart.
From the options menu, select Edit Chart.
This action will take you to a new page where you can modify the chart data.
Locate the filter section in the editing interface's data column.
From the dropdown menu within the filter section, pick the Error Date
column. This choice ensures that the email report focuses on error and discrepancy-related data.
Within the filter options, choose the time range filter and set it to display data for the desired frequency such as Last Day.
This setting ensures the email report provides insights into the previous day's performance.
After configuring the filter settings, click on the Update Chart
button to apply the changes to the chart.
Save your changes by clicking on the save icon. This action ensures that the configured time range for email reports is saved and applied consistently for future email communications.
Customize your dashboards to improve the user experience and better align with the branding guidelines.
Tabs in Tathya act as a handy way to organize and structure your dashboard. They serve as separate sections, allowing you to group related charts or information together, making navigation a breeze.
Click the "+" button to add a new tab.
Rename tabs to provide clarity and relevance.
Easily switch between different sections of the dashboard using tabs.
Rows and columns are the building blocks of your dashboard layout. They define how charts and elements are arranged on the canvas, giving you control over the visual structure.
Create a row by dragging the "Row" element onto the canvas.
Adjust the width and height of rows and columns to control the layout.
Place charts, headers, or other elements within rows and columns for an organized dashboard.
Headers provide a descriptive title or label for a specific section of your dashboard. They enhance the visual hierarchy and help users quickly understand the content.
Drag the "Header" element onto the canvas.
Customize the header text to convey the purpose or theme of the section.
Headers can be placed within rows or columns to introduce or label groups of charts.
Markdown elements enable you to add formatted text, images, or hyperlinks to your dashboard. They enhance communication by providing context, instructions, or additional information.
Drag the "Markdown" element onto the canvas.
Use Markdown syntax to format text, insert images, or create hyperlinks.
Position Markdown elements within rows or columns to complement charts and visualizations.
Dividers act as visual separators, creating a clear distinction between different sections of your dashboard. They contribute to a clean and organized layout.
Drag the "Divider" element onto the canvas.
Place dividers between rows or columns to visually separate content.
Adjust the divider style to match the aesthetics of your dashboard.
As part of the chart creation process, creators specify a color palette for it. On the dashboard level, it is possible to specify a single categorical color palette that would be used by all charts on your dashboard.
To set up a dashboard-level color palette:
Access your dashboard.
Click on EDIT DASHBOARD, on the top right corner.
Click on the three ellipses on the top right corner > Edit properties.
Choose a palette under the COLOR SCHEME drop-down.
Save changes.
For more control over the colors that are applied to each dimension, you can manually specify the colors using hexadecimal or RGBA codes in the dashboard JSON metadata.
To do so:
Access the dashboard.
Click on EDIT DASHBOARD, on the top right corner.
Click on the three ellipses (...) > Edit properties.
Expand the ADVANCED section.
Create a new section inside the JSON METADATA named label_colors, specifying the desired colors for each metric/dimension.
Drag-and-Drop Functionality: Use the drag-and-drop feature to rearrange tabs, rows, columns, headers, markdown, and dividers effortlessly. This enables quick adjustments to your dashboard layout.
Responsive Design: Keep in mind that your dashboard layout is responsive, ensuring an optimal viewing experience across different devices. Test the layout on various screen sizes to ensure consistency.
By understanding and utilizing these dashboard layout elements, you'll be able to craft visually appealing and well-organized dashboards in Tathya.
Discover how to configure dashboards in Tathya for comprehensive data visualization and analysis.
While in the process of saving a chart, you have the option to directly add it to a dashboard.
In the "Add to Dashboard" field, you can specify whether you want to add the chart to an existing dashboard or create a new one.
If you would like to save the chart to an existing dashboard, then select the dashboard from the drop-down list.
If your new chart does not align with the existing dashboards, go ahead and enter the dashboard name and create a new dashboard.
Select Save or, to browse directly to the defined dashboard, select Save & Go To Dashboard.
The added chart becomes easily shareable. Options include downloading data or a PNG file, copying the chart link, or setting up regular email reports for automated sharing.
Locate and click on the "Dashboards" option in the top navigation menu. This will take you to the dashboard management page.
Click on the "+ Dashboard" button to initiate the creation of a new dashboard.
Provide a meaningful and descriptive name for your dashboard. Add any relevant tags or categories.
A list of existing charts will be displayed. Choose the charts you want to include in your dashboard. These could be charts, graphs, or any visualizations you've previously created.
Use the drag-and-drop functionality to arrange the selected charts on your dashboard. This allows you to create a customized layout that visually communicates your insights effectively.
Fine-tune the layout by adjusting the size and position of each slice using the drag-and-drop feature. Customize the dashboard title, description, and other settings. Consider adding annotations for clarity.
Annotations provide additional context or explanations for specific data points, enhancing the interpretability of your dashboard.
Publish your dashboard once satisfied with the layout and content.
Navigate to the Dashboards screen and go to the dashboard to which you want to add charts.
Select the Edit dashboard icon.
Select the Charts tab.
The Charts tab contains a list of all charts that you have access to (i.e., charts that you have created or for which a team member has shared ownership access with you).
To add a chart, just drag & drop the chart card into a space on the dashboard.
Publish your dashboard once you have added all the required charts.
Navigate to the Dashboards screen and select + Dashboard.
In the Content panel, select + Create a new chart.
At this stage, you can proceed with the process of creating a new chart by following the steps discussed under the section “creating a new chart”.
To remove a chart from the Dashboards screen, select the Edit dashboard icon.
While your cursor is hovering over a chart, select the trash icon in the upper-right corner of the chart.
To delete an entire row of charts, select the trash icon on the far left side of the row.
Selecting the trash bin icon merely removes the chart(s) from your dashboard, it does not delete them from Tathya.
A dashboard can be deleted from the Dashboards screen by clicking on the delete icon in the Actions column header.
On clicking the delete icon, you will see a popup that confirms the deletion of a dashboard. Type 'DELETE' to delete the dashboard.
Explore the various dashboard options in Tathya for editing and viewing, enabling comprehensive data analysis and visualization.
Select the ellipsis icon (3 dots) while in editing mode to view the following options:
Edit dashboard properties
Title: The title of the dashboard.
URL Slug: Customize the end of the URL (slug) to a more memorable name.
Owners: Assign/remove access to the dashboard.
JSON Metadata: By expanding the Advanced header, the JSON Metadata panel appears. This area is for power users who may wish to alter specific dashboard parameters.
Edit CSS
Loads the Live CSS Editor, which can be used to make ad hoc stylesheet changes in a live editing environment (i.e., changes applied in real time).
Save as
Duplicates the dashboard; the copy contains all of the same charts as the existing dashboard. This is useful when you want to build a new dashboard from an existing one.
Share
Copy permalink to clipboard: Copies a shareable dashboard link to the system's clipboard.
Share by email: Launches the system's default email client and composes a new message featuring your dashboard URL.
Set auto-refresh interval
Select to specify an automatic refresh rate for the dashboard. Options include seconds (10 or 30), minutes (1, 5, or 30), or hours (1, 6, 12, or 24). Note that this will be only applied to the current session.
Setting a permanent auto-refresh
If you would like to set a permanent auto-refresh instead, you would need to modify the JSON metadata in the Dashboard Properties. For that, set a value (in seconds) to the "refresh_frequency" parameter (default value is 0). For example, to set a 1-hour refresh rate, set the parameter value as 3600.
Refresh dashboard
Select to refresh the chart's data. The time duration since the last cache is also provided.
Enter fullscreen
View the chart in full-screen mode (i.e., just the chart occupies the entire screen).
Download as an image, CSV, or Excel
Download the entire chart in jpeg, CSV, or xlsx format.
Share
Copy permalink to clipboard: Copies a shareable chart link to the system's clipboard.
Share permalink by email: Launches the system's default email client and composes a new message featuring your chart URL.
Set auto-refresh interval
Select to specify an automatic refresh rate for the dashboard. Options include seconds (10 or 30), minutes (1, 5, or 30), or hours (1, 6, 12, or 24). Note that this will be only applied to the current session.
The Link Configuration feature in Tathya offers users the ability to embed clickable links within charts generated in Superset, facilitating seamless redirection from the Dashboard page. Users can navigate to relevant pages within the HotWax Commerce platform directly from the Superset Dashboard, eliminating the need to switch between multiple tabs or applications. This saves time and improves overall productivity. Users can set up link configuration in two ways:
Locate and Edit the Chart:
Locate the chart in which you want to configure the links. Click on the ellipses on the right of the chart and select Edit chart.
Set Query Mode and Configure Columns:
Select the Query Mode as Raw Records in the Data section. Navigate to the Columns section to configure the columns of your chart in Superset.
Add a New Column:
Click on the Add Column option to initiate the creation of a new column.
Customize Link with Custom SQL:
Upon clicking, you'll encounter a modal with three sections: Saved columns, Simple columns, and Custom SQL. Click on Custom SQL and paste the link specifying the targeted link and the source column. For example, if you want to create a link for facility ID, you need to create the link similar to this:
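The expression below is a hedged reconstruction assembled from the parts explained next (adjust the URL template and the facility_id column to your own instance and dataset):

CONCAT('<a href="https://{{instance_name}}/commerce/control/ViewFacility?facilityId=', facility_id, '" target="_blank">', facility_id, '</a>')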
In the example above, the CONCAT function generates the HTML for a hyperlink dynamically based on facility_id. The expression is built from the following parts:
<a href="
: Static part of the anchor tag indicating the start of a hyperlink.
CONCAT("https://{{instance_name}}/commerce/control/ViewFacility?facilityId=", facility_id)
: Dynamically generates a URL by concatenating a static URL template with the facility_id. Here, facility_id is the column name included in the link.
'" target="_blank">
: Static part of the anchor tag specifying that the link should open in a new tab or window.
facility_id
: Value inserted into the anchor tag as visible text.
</a>
: Static part of the anchor tag indicating the end of the hyperlink.
Edit Column Header:
Afterward, you can edit the header, which serves as the label for your column.
Set Query Mode and Configure Columns:
Select the Query Mode as Aggregate in the Data section. Navigate to the Columns Section to configure the columns of your chart in Superset.
Add a New Column:
Navigate to the Metrics Section and Click on the Add Column option to initiate the creation of a new column.
Customize Link with Custom SQL:
Upon clicking, you'll encounter a modal with three sections: Saved columns, Simple columns, and Custom SQL. Click on Custom SQL and paste the SQL specifying the targeted link and the source column.
Edit Column Header:
Edit the header, which serves as the label for your column in the chart.
By following these steps, users can effectively configure the Link Configuration feature within Tathya, enhancing their data visualization capabilities and workflow efficiency.
Learn how to grant access to dashboards in Tathya, allowing other users to edit them and add charts without encountering visibility issues.
In Tathya, it's essential to grant access to dashboards to other users so that they can add their charts to them without encountering visibility issues. Here's how to grant access:
Navigate to the Dashboards section within Tathya.
Find the specific dashboard you wish to grant access to.
Go to the dashboard and locate the "Actions" section, located on the right-hand side.
Click on the "Edit" option within the actions menu.
A popup window will appear, providing various options for dashboard management. Navigate to the "Access" tab within this popup.
In the "Access" tab, you'll find a section where you can add relevant users who are allowed to alter the dashboard.
Enter the names or usernames of the users you wish to grant access to. This list should include individuals who may need to edit the dashboard.
Once you've added the relevant users, save your changes. This ensures that the specified users have the necessary permissions to edit the dashboard and add charts to it.
Lack of access to a dashboard restricts its visibility in the list of options when attempting to add your created chart to it.
Discover how to configure Alerts & Reports in Tathya, enabling event-triggered notifications and scheduled notifications for effective data monitoring and analysis.
Now that you've successfully created charts and added them to dashboards, the next crucial step is to set up Alerts and Reports. The Alerts & Reports feature in Tathya enables creating event-triggered notifications (Alerts) or scheduled notifications (Reports).
An alert provides a custom link to a chart or an entire dashboard and is triggered when a predefined event occurs. This event is a logical condition within your data.
A report offers a snapshot of a chart or an entire dashboard, accompanied by a link for further exploration and slicing & dicing of the query. Unlike alerts, reports run on a defined schedule (e.g., daily at 7 pm, weekly, etc.).
In the Toolbar, hover over Settings.
From the drop-down menu, select Alerts & Reports.
Upon reaching the Alerts & Reports screen, the Alerts interface is displayed by default. You can easily toggle between the Alerts and Reports tabs to control the content you wish to view.
Below the Alerts and Reports tabs is “Last Updated” information, this simply conveys when the screen was last updated with data. You can force refresh the page by selecting the circular “Refresh” icon.
The filters and search features enable you to quickly find the alert or report that you're looking for, which is invaluable when there are many entries.
Created By:
Select a user to display alerts or reports created by that individual.
Status:
Select an option to display alerts or reports that match the selected status. Available status options include:
Default: Displays all entries regardless of status.
Success: Displays entries that ran successfully.
Working: Displays entries that are currently being processed.
Error: Displays entries that did not successfully run.
Not Triggered: Displays entries with a trigger that has not yet been activated.
On Grace: Displays entries that are currently in a defined grace period.
Search:
To use the Search feature, simply enter a term in the text-entry field and press Enter or select the magnifying glass icon. A list of entries that include your search criteria will appear in the table.
The Alerts and Reports tables include the following column headers:
Last Run:
Displays the date, time, and UTC hour difference when the entry last ran.
Name:
Represents the name of the alert or report, as defined when adding or editing the entry.
Schedule:
Indicates the defined schedule of the alert or report (e.g., every hour, every minute, etc.).
Notification Method:
Displays an icon indicating that notifications will be sent via email.
Owners:
Icons indicating the owner(s) of the alert or report.
Active:
A toggle switch indicating whether the alert or report is currently enabled or not.
Actions (visible on cursor hover):
Icons that enable you to access the execution log, edit the entry, or delete the entry.
By leveraging these features, you can efficiently manage and monitor alerts and reports, ensuring timely and relevant notifications based on predefined conditions or schedules.
Learn how to create alerts in Tathya to stay informed about important data changes and trends.
On the Alerts interface, select + Alert
to create a new alert.
The Add Alert window appears.
In the Alert Name field (required), enter a name for your new alert. This will also serve as the subject of the email.
In the Owners field (required), select one or more owners for the alert. Owners have the ability to edit an alert and are notified in case of any execution failures.
In the Description (optional) field, enter a short but meaningful description of the alert, to be included in the alert message.
The Active toggle switch is automatically enabled.
Move to the Alert Condition panel. This area is used to define the event that triggers the activation and notification of the alert.
In the Database field (required), select the database in which the SQL query should be executed.
In the SQL Query field (required), enter a SQL statement that defines the nature of the alert (the metric you want to monitor).
In the Trigger Alert If... field (required), define the condition, and in the Value field (required), enter the associated value of the condition.
This panel is used to define the frequency at which the data is checked to see if the defined condition has been met.
The first schedule option enables you to specify a highly granular schedule based on your specific requirements. Data can be checked every minute, hour, day, week, month, or year.
After setting a schedule, the subsequent CRON field will automatically populate with an equivalent CRON expression that represents your defined schedule.
Alternatively, you can also directly enter a CRON expression by selecting the secondary radio button and entering the expression in the CRON Schedule field.
A CRON expression is a string representing a schedule. It is used to define the timing of recurring tasks or jobs in systems where periodic execution of tasks is required.
The basic structure of a CRON expression consists of five fields, representing minute, hour, day of the month, month, and day of the week. Each field can have a specific value or a wildcard (*) to represent any possible value. Here's the general format:
* * * * *
| | | | |
| | | | +----- Day of the week (0 - 6) (Sunday is both 0 and 7)
| | | +------- Month (1 - 12)
| | +--------- Day of the month (1 - 31)
| +----------- Hour (0 - 23)
+------------- Minute (0 - 59)
0 0 * * * : Check data at midnight every day.
*/15 * * * * : Check data every 15 minutes.
0 2 * * 1-5 : Check data at 2:00 AM every weekday (Monday to Friday).
In the Timezone field, select the drop-down menu and choose your timezone.
In the Log Retention field (required), enter the number of days that the alert will be stored in the execution log. By default, this is set to 90 days.
In the Working Timeout field (required), enter the number of seconds that the alert job is allowed to run before it results in an automatic timeout. By default, this is set to 3600 seconds.
In the Grace Period field, enter the number of seconds that should pass before the alert can trigger relative to when a previous alert was triggered. If an alert triggers within this period, its status will be On Grace, and the alert's evaluation will commence when this period concludes. By default, this is set to 14400 seconds.
In the Message Content section, select either the Dashboard or Chart radio button. Then, in the drop-down field, select the relevant dashboard or chart; a custom link will be prepared and sent based on the defined notification method.
Screenshot Width: An optional parameter that allows you to customize the width (in pixels) for your alert dashboard/chart screenshot.
Ignore cache when generating screenshot: Checkbox to produce real-time data (invalidating cache).
In the Notification Method section, select Add notification method. The Select delivery method drop-down field appears. Select either Email or Slack, as needed. On selection, you will be prompted to enter an email address or the channel name. You can also configure it to be sent to both recipient types.
To finalize your alert, select Add.
You can create an alert for whenever the count of unfillable orders in a day exceeds the defined threshold of 20.
Set conditions for the count of unfillable orders in a day to be greater than 20 and choose the daily time granularity for the evaluation. Specify the email addresses of the operations and customer support teams so that they receive immediate email notifications when facing a higher volume of unfillable orders.
Tathya will automatically trigger the alert and notify the designated team members via email whenever the daily count of unfillable orders exceeds 20.
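For reference, a minimal SQL query for such an alert might look like the sketch below. The table and column names (order_header, status_id, and so on) are illustrative placeholders only; the actual query depends on your OMS database schema.

```sql
-- Illustrative sketch only: counts today's unfillable orders.
-- Table and column names are placeholders and must be adapted to your schema.
SELECT COUNT(*) AS unfillable_order_count
FROM order_header
WHERE status_id = 'ORDER_UNFILLABLE'
  AND order_date >= CURRENT_DATE;
```

With a query like this in the SQL Query field, setting Trigger Alert If... to "greater than" and Value to 20 reproduces the scenario described above.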
You can create “Organizational Units” (OU) and “Customer Units” (CU) in the LDAP directory. For a new project, before creating a user (CU), it is recommended to create an OU.
OUs allow for the logical grouping and organization of users within the LDAP directory. Each OU typically represents a specific category, department, or project.
Login into the LDAP directory with your credentials. On the phpLDAPadmin default dashboard page, locate the domain components on the left corner and click on the "+" icon. Now, select "Create new entry here".
Choose "Generic: Organizational Unit" as the template for creating the object.
“Create Object” refers to the process of creating a new object, that is, organizational unit or user account. “Create Entry” refers to establishing a new entry in the LDAP directory that represents the created object.
In the main pane, provide the name of the organizational unit, typically representing the entire project.
For example, if a project were named Wasatch Ski, the OU name should be “tathya-wasatch-ski”.
Click on the "Create Object" button and confirm the creation of the entry by clicking on "Commit".
You have now successfully created an OU that represents the specific project.
In the LDAP directory, navigate to the newly created Organizational Unit ("tathya-wasatch-ski" in our example). Below the OU, click on “Create new entry here” to add a user.
In the main pane, click on "Create a child entry" and then select "Generic User Account" as the template for creating the user account.
Input user details such as First and Last Name, Common Name, UserID, Password, UID Number, GID Number, and Login Shell.
First and Last Name: The first and last name of the user that you want to log in to on Tathya.
Common Name: Common Name (CN) is the full name of the user.
A preferred CN would be “firstn.lastn” and it is recommended to keep all the initials in lowercase. This is the same ID that will be used on Tathya for login.
UserID: UserID is an auto-generated unique identifier for the user. It serves as a key attribute for identifying and distinguishing each user within the LDAP directory.
Password: The Password is a secure string of characters chosen by the user to authenticate and access the LDAP.
This is the same password that will be used on Tathya for login.
GID Number: The GID number defines a search space where administrators or developers can perform LDAP searches specifically targeted to retrieve information related to various accounts.
Two options are displayed: “Users” and “Admin”. When creating user accounts for a project, select the "Users" option.
Login Shell: The Login shell is the shell or program the user interacts with after login. It influences the user's experience after logging in, defining the command-line environment.
We usually choose “Bash” (Bourne Again SHell) as the login shell. If a user's login shell is set to Bash, their interaction with the system after logging in will involve the Bash command-line interface.
Once you enter all the details, click on the "Create Object" button and confirm the creation of the entry by clicking on "Commit".
You have now successfully created a user in the LDAP directory. These user credentials can be used on Tathya for an automated login.
To add additional users under an OU, follow the same steps and create a child entry for each new user.
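For reference, the entry produced by the Generic User Account template corresponds roughly to the LDIF sketch below. The base DN (dc=example,dc=com), numeric IDs, and password value are placeholders; your directory's actual values will differ.

```
# Hypothetical LDIF equivalent of a user created via phpLDAPadmin.
# All DN components and attribute values are placeholders.
dn: cn=firstn.lastn,ou=tathya-wasatch-ski,dc=example,dc=com
objectClass: inetOrgPerson
objectClass: posixAccount
cn: firstn.lastn
givenName: Firstn
sn: Lastn
uid: firstn.lastn
uidNumber: 10001
gidNumber: 500
homeDirectory: /home/firstn.lastn
loginShell: /bin/bash
userPassword: {MD5}<hashed-password>
```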
Roles define access levels, ensuring that users with specific roles can view tailored dashboards containing multiple charts designed for them, directly in Tathya.
Roles serve as a way to organize and control access to dashboards within Tathya. By associating charts with specific roles, you can manage who sees what, providing a personalized and secure data visualization experience.
You can also create multiple roles for a project and add similar charts under one role.
With roles set up to manage access to dashboards and charts, the next step is to create user profiles and assign the appropriate roles.
Skip this step if the user is already listed on Tathya.
To initiate this process, it is essential to first “list” the user on the Tathya platform.
User access to Tathya requires prior registration on LDAP (Lightweight Directory Access Protocol).
LDAP serves as a centralized user management platform which ensures authentication across various systems. The LDAP integration allows for a single sign-on (SSO) experience, where a registered user can use their LDAP credentials to access multiple systems, including Nifi and Tathya.
Discover how to schedule reports in Tathya using alternate interfaces, either via a dashboard or a chart, and configure the scheduled report settings for frequency and content.
A report can also be scheduled via a dashboard or via a chart.
View a dashboard and, in the top right corner, select the ellipsis icon and then Manage email report > Set up an email report.
View a chart and, in the top right corner, select the ellipsis icon and then Manage email report > Set up an email report.
Report Name (required): Enter a memorable name for the new report and, in the Description field, enter a brief description of the report.
After the word Every, select the frequency of the report. Options include:
Year
Month
Week
Day
Hour
Minute
After you select one of the above, the scheduler will present options that are relevant to your choice.
For dashboards, only Image (PNG) is supported so this is automatically selected. For charts, choose between:
Image (PNG) embedded in the email field to receive the chart as an image that is directly embedded within the email message body.
Formatted CSV attached in the email field to receive the chart's raw data as a comma-separated value file that is attached to the email.
Your schedule has now been created. To verify, select Settings → Alerts & Reports → Reports tab.
Learn how to list and modify roles in Tathya to manage permissions and access levels for dashboards and charts.
Click on "Settings" located in the top-right corner of the Tathya interface. Under the "Security" section within Settings, select "List Roles."
On the List Roles page, you'll find an overview of existing roles. To add a new role, click on the “+” icon.
In the role creation interface, you'll encounter two key fields: "Name" and "Permissions."
Name: Begin by giving the role a descriptive and easily identifiable name. Consider using a camel case convention for clarity. For example, if your project is named "Wasatch-Ski," you might name the role "Wasatch-SkiProd-DatabasePermission."
Choose a name that clearly indicates the purpose or project associated with the role, making it easy to understand at a glance.
Permissions: In Tathya, permissions play a crucial role in determining access levels to dashboards and charts.
Start by searching for and listing all the charts created for your project. This ensures that users with this role can access dashboards containing these specific charts.
A single chart can be present in multiple dashboards. If a chart is listed in the permissions of a role, any dashboard containing that specific chart becomes eligible for access by users assigned to that role.
Once you've filled in the necessary details, click "Save" to create the new role.
If there's a need to revise permissions for an existing role, for example, when new charts are added to an existing project, follow these straightforward steps:
Identify and select the role for which you want to modify permissions.
Once you've located the role, click on the edit icon associated with that role.
Inside the role editing interface, navigate to the "Permissions" field. Here, you can add new charts or remove existing ones to adjust access levels.
To include new charts in the permissions, start typing the names of the additional charts. They will be automatically suggested for selection.
After making the necessary modifications, click "Save" to update the permissions for the existing role.
Regularly review and update permissions to align with any changes in the project, such as the creation of new charts.
Modifying permissions allows you to grant access to the latest project developments, ensuring users with the role can view the most up-to-date dashboards.
Learn how to create and automate reports in Tathya to track important metrics at specified intervals.
After successfully setting up alerts, you may also want to generate regular reports to keep track of important metrics. Tathya allows you to automate the process of creating and sending reports at specified intervals.
Follow the steps below to create and schedule reports:
Navigate to the Alerts & Reports screen. By default, you'll land on the Alerts interface. To access the Reports screen, click on the Reports tab.
On the Reports interface, select + Report to create a new report. The "Add Report" window appears.
In the Report Name field (required), enter a descriptive name for your report. This will also serve as the subject of the email.
In the Owner(s) field (required), select one or more owners for the report. Owners have the ability to edit the report and are notified in case of any execution failures.
In the Description field (optional), provide a brief and meaningful description of the report.
The "Active" toggle switch is enabled by default.
Ensure that the Superset Admin user is added as an owner whenever scheduling a new report to prevent the report from being disabled if the current owner is deactivated.
This panel is used to define how frequently the report will be sent to a defined notification channel(s).
Specify Time: The first schedule option enables you to specify a highly granular schedule based on your specific requirements. Data can be checked every minute, hour, day, week, month, or year. The day, week, month, and year options all allow you to define a schedule down to the hour & minute granularity.
Check or Enter CRON: After setting a schedule, the subsequent CRON field will automatically populate with an equivalent CRON expression that represents your defined schedule.
Alternatively, you can also directly enter a CRON expression by selecting the secondary radio button and entering the expression in the CRON Schedule field.
(To learn more about CRON expressions, please refer to the alerts section)
In the Timezone field, select the drop-down menu and choose your timezone.
Log Retention (required): Enter the number of days the report will be stored in the execution log (default is 90 days).
Working Timeout (required): Set the maximum duration for the report job to run before an automatic timeout (default is 3600 seconds).
In the Message Content section, select either the Dashboard or Chart radio button. Then, in the drop-down field, select the relevant dashboard or chart — a screenshot of the dashboard or chart will be sent along with a link.
When sending a chart, be sure to indicate whether it will be sent as a screenshot (in PNG format) or as a CSV file. For Table and Pivot Table charts, you can also choose to include the chart in the email body (rather than as an attachment).
Screenshot Width: This is an optional parameter that allows you to customize the width (in pixels) for your dashboard / chart screenshot.
Ignore Cache: Check this box to generate real-time data and invalidate the cache.
In the Notification Method section, select Add notification method. The Select delivery method drop-down field appears. Select either Email or Slack, as needed. On selection, you will be prompted to enter an email address or the channel name. You can also configure it to be sent to both recipient types.
To finalize and save your report, select Add.
By following these steps, you can easily create and schedule reports in Tathya, ensuring that the relevant users receive timely and accurate information.
Learn how to authenticate LDAP user accounts in Tathya, ensuring secure access to the platform using LDAP credentials.
After successfully creating an LDAP user account, log in to Tathya using the generated credentials.
Tathya forwards the authentication request to the LDAP server, which verifies the user's credentials against its directory and responds to Tathya with the authentication status.
If the LDAP authentication is successful, Tathya grants access to the user.
To address and resolve discrepancies in data within Tathya reports, ensuring accurate and reliable information for users.
Occasionally, discrepancies in data may arise in Tathya, impacting the visibility or accuracy of reports generated for clients. These issues typically stem from SQL query issues, dataset misconfigurations, or user-specific requirements not being met.
Issue: Users do not see expected fields or results in Tathya reports. Cause: Missing fields in SQL queries or dataset configurations.
Issue: Users encounter repeated fields or data in reports. Cause: SQL queries fetching duplicate records or incorrect joins (see the illustrative SQL sketch after this list).
Issue: Expected data does not match what is displayed in reports. Cause: Incorrect filters or conditions in SQL queries.
Issue: Clients request additional data fields not currently available. Cause: Queries need modification to incorporate new data requirements.
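To illustrate the Duplicated Data scenario, the hypothetical query below shows how an unconstrained join can multiply rows and one way to correct it. All table and column names are placeholders; adapt them to the dataset you are fixing.

```sql
-- Hypothetical illustration of the "Duplicated Data" scenario.
-- Problem: joining orders to shipments repeats an order row once per shipment.
SELECT o.order_id, o.grand_total
FROM order_header o
JOIN order_shipment s ON s.order_id = o.order_id;

-- One possible fix: aggregate the joined rows so each order appears only once.
SELECT o.order_id, o.grand_total, COUNT(s.shipment_id) AS shipment_count
FROM order_header o
LEFT JOIN order_shipment s ON s.order_id = o.order_id
GROUP BY o.order_id, o.grand_total;
```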
Verify Issue Existence
Log in to Tathya using credentials.
Navigate to the specific report or dashboard where the issue is observed.
Confirm discrepancies reported by users.
Identify Scenario
Determine if the issue falls under Field Not Found, Duplicated Data, Mismatched Data, or New Feature Request.
Check Tathya Configuration
Navigate to the report or chart where the problem occurs.
Click on the chart title or navigate to the Charts option in the navigation bar.
Edit Dataset
Click the three dots on the chart and select Edit dataset.
In the Edit dataset dialog, go to the Source section.
Edit the SQL query in the SQL field as necessary to address the issue.
Click the Save button.
Sync Columns
Inside the Edit dataset dialog, navigate to the Columns section.
Click on SYNC COLUMNS FROM SOURCE to ensure all necessary columns are included from the updated SQL query.
Click Save to apply changes.
Cross-check Results
Return to the dashboard or report and verify if the issue is resolved.
Ensure the changes do not adversely affect other charts relying on the same dataset.
Dependency Check
Before making changes, review dependencies to understand impacts on other reports or charts.
Ensure changes are implemented in a way that maintains data integrity across all relevant areas.
Order imports from Shopify to HotWax fail due to incorrect email format. In this scenario, we have to inform clients to correct the email addresses and re-import them.
An order failed to import into the OMS due to an error in the email address. Despite confirming that both the customer email address and order email are accurate, the problem originates from Shopify, where a duplicate customer entry exists with an incorrect email address, causing the import error.
In this scenario, when an order fails to import into the OMS because the order contains a special character (such as an emoji), we need to tell the client to manually import the order.
We noticed that the customer isn't connected to the subsidiary in NetSuite; we need to inform the client to add the customer to the subsidiary.
We've identified an order with all the required attributes that haven't been created in NetSuite.
We have a POS sales order in NetSuite where we can't add a location. Upon reviewing the order, we observed that the brokered location associated with it is not linked to Subsidiary 5 in NetSuite.
This scenario involves a point-of-sale (POS) order that hasn't been properly synchronized with NetSuite. The order is imported from Shopify in an unfulfilled status and in HotWax it is in a Created State.
There are issues with Shopify orders involving Custom Denomination Gift Card products not mapped to their parent products in Shopify, causing synchronization issues with NetSuite.
When an instance experiences downtime or is intentionally taken down for maintenance checks, we can utilize this notification template.
In this scenario, as the solution cannot be delivered instantly and requires time to identify the root cause of the issue, we will keep the client updated on the progress.
When a client encounters an issue that can be resolved by adjusting or updating data in the OMS.
When we get the reset inventory files from adoc in an invalid CSV format.
When we search for an order that is not visible because it is not indexed on the search page, we need to communicate this with the client.
Tathya users sometimes face login issues due to forgotten credentials, which can be resolved by verifying or updating their credentials in our records. This guide helps you troubleshoot by detailing how to set up and manage LDAP accounts, including updating user credentials in phpLDAPadmin.
Search for the user's record in your records.
Use the credentials on record to attempt a login.
If the login is successful, share the same old credentials with the user.
We follow a standard template when creating a password, such as "Ht5@nusername".
Use your administrator credentials to log into the phpLDAPadmin interface.
Select the appropriate instance (e.g., SM, UCG, Krewe, etc.).
From the left-hand side of the phpLDAPadmin dashboard, navigate to and select the user experiencing login issues.
In the user's details, locate the password field.
Enter a new password, ensuring it meets security standards.
Choose the encryption method (typically MD5 for Hotwax Commerce).
Click on Check password to verify the new password.
Once confirmed, click on the Update Object button to save the changes.
Share the new credentials with the user.
For detailed instructions on creating or updating users in LDAP, refer to the Tathya user setup documentation.
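As a point of reference, the password update performed through the phpLDAPadmin form is equivalent to an LDAP modify operation like the sketch below. The DN and hashed value are placeholders; phpLDAPadmin applies the same change for you when you click Update Object.

```
# Hypothetical ldapmodify input for resetting a user's password.
# The DN and hashed value are placeholders for illustration only.
dn: cn=firstn.lastn,ou=tathya-wasatch-ski,dc=example,dc=com
changetype: modify
replace: userPassword
userPassword: {MD5}<new-hashed-password>
```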
Clients at Hotwax Commerce request reports through Jira or Slack. We gather requirements, create a task in ClickUp, and assign it to the appropriate team member. The final report is then delivered to the client, ensuring a smooth and efficient process.
Clients initiate report requests through Jira or Slack. This step begins the process of obtaining a customized report, ensuring the client's request is logged and tracked.
Interact with the client to clarify their requirements, ensuring that the report meets the client's specific needs. Collect all necessary details related to the requested report. This ensures all relevant data is considered in the report creation process.
Log in to ClickUp and open the Hotwax Commerce Workspace.
Select the Project Management Space.
Navigate to the OMS Folder and then to the Report list.
Create a ticket in the Report list section in ClickUp.
Assign the tickets to the appropriate team member with all gathered requirements and add the relevant details in the description.
Attach the URL of the report or chart to the ticket in ClickUp and close the ticket.
Deliver the desired result to the client via Jira or Slack. This completes the request process and ensures the client receives the required report.
Once the access is granted, the user is automatically listed in Tathya and by default gets assigned a “public” role.
The user's account is not yet fully configured. The auto-assigned public role does not give the required access to view charts or dashboards. To grant access, you have to assign the necessary roles and permissions to the user's account.
To complete the setup, proceed by logging into Tathya with your credentials.
Ensure that you have the necessary permissions and administrative privileges to manage users in Tathya.
After logging into Tathya, in the Toolbar, hover your cursor over Settings and locate the "Security" menu. Within the Security section, look for an option like "List Users." This option will take you to the page where you can view, add, and update all the users.
Now to update the user, locate the newly listed user that you created in the previous step and click on the edit button. You will be redirected to the “Edit user” page, here verify and update user details such as, First Name, Last Name, Username, Is Active, Email, Roles.
First Name and Last Name: The "First Name" and "Last Name" fields contain the user's name. These columns provide information about the user's identity so that you can quickly recognize who each user is.
Username: The "Username" is the same LDAP-generated ID that will be used to log into Tathya (firstn.lastn). Ensure that the provided username matches the user's LDAP identifier.
Ensure that the provided username aligns with the LDAP-generated ID for seamless authentication.
Is Active?: The "Is Active?" field determines whether the user account is currently active or inactive. If set to "True," the user can log in and access Tathya.
Check the Is Active field to activate the account. If it is set to "False," the user account is deactivated, and the user cannot log in. Use this field to manage user access based on their current status within the organization.
Email: The "Email" is the email address associated with each user's account.
Email addresses are crucial for communication and account recovery purposes. They serve as a means of contact and are often used for sending notifications or alerts related to the user's account.
Roles: The "Roles" column indicates the roles assigned to each user. Select the default role “HotwaxBasicPermissionUser” for the user.
The HotWax Basic Permissions role is designed to provide a limited scope of access within Tathya, ensuring focused functionality for users.
Here are key details about this role:
The HotWax Basic Permissions role restricts access to specific areas in Tathya.
Users with the HotWax Basic Permissions role will only have access to the "Dashboard" panel. Users cannot view any charts here unless the charts are assigned to them through a role.
In the "Settings," users with this role will see a limited set of options. The displayed options include user profile details, general information, and the logout function.
This role ensures a high level of security by preventing users from accessing areas beyond the designated dashboard and settings sections.
Assign roles carefully, considering both default roles and project-specific roles for accurate access permissions.
Once all the fields that are required to add a user have been filled, click on “Save.”
Now, you have successfully listed the new user on Tathya and provided the roles and permissions necessary to manage their project’s charts and dashboards.
The account is now ready to be used by the user.
Regularly review and update user roles as needed, especially when new charts are introduced. This will ensure that the users get access to the latest dashboards that have been created for them.
If a client requests you to create a user on Tathya, the process begins by gathering the user's first name, last name, and email ID. These details are essential for setting up the user's account.
Gather Information: Obtain the user's first name, last name, and email ID from the client.
LDAP User Creation:
First, create a user in the LDAP (Lightweight Directory Access Protocol) system. LDAP serves as the central repository for user authentication and authorization information.
Refer to the LDAP user creation section to learn how to create a user in LDAP.
Tathya User Creation:
Once the user is successfully created in LDAP, proceed to create their account in Tathya.
Configure permissions and access levels based on their role and responsibilities within Hotwax Commerce.
Refer to the Tathya user creation section to learn how to create a user in Tathya.
Verification:
Log in with the credentials in our system to verify that all reports are visible to us.
Ensure that the user has the appropriate access and permissions.
Notification:
Notify the client once the user account has been set up in Tathya.
Provide them with login credentials and any additional instructions or information they may need.
For example, to view the dashboard, instruct them to click on the Dashboard menu.
This process ensures that user accounts are securely and efficiently established in both LDAP and Tathya, adhering to organizational security and access protocols.
Read the Tathya user manual to learn how to create and configure users as per the client’s requirements.
Finally, add the project-specific roles that you have created, which grants permission to view the project-specific charts from the Dashboard panel. (Check the section for more information)
In Tathya, user roles and permissions are managed to control what actions users can perform and what data they can access.
The Alpha role in Tathya is a predefined role designed to provide broad access while limiting certain sensitive administrative capabilities.
Below is an overview of the Alpha role and steps to create client-specific database access to the Alpha role.
Ideal for power users who need the ability to create and manage dashboards and charts but do not need full administrative control.
Suitable for team leads or analysts who work extensively in querying and visualizing data.
Steps to Create an Alpha Role with Client-Specific Database Access :
Copy the Existing Alpha Role:
Go to the settings in the top-right corner.
Click on List Roles.
Select the existing Alpha role.
Click the Action button to create a new role based on the Alpha role.
Rename the New Role:
Name the copied role according to the client. For example: Example-Alpha if the client is Example.
This ensures the role is uniquely identifiable.
Modify Database Permissions:
Remove alldatabaseaccess Permission:
Navigate to the permissions section of the newly created role.
Remove the alldatabaseaccess permission to restrict access to all databases.
Remove the alldatasourceaccess permission to restrict access to all datasources.
Add Permission for the Client Database:
Add permissions for the specific database associated with the client, e.g., Example Database.
Assign the Role to the Client User:
Navigate to the client user in the List Users page.
Add the newly created role (e.g., Example-Alpha) to the client user.
Verify that the client user now has access only to the designated database (e.g., Example Database) and cannot access any other databases.
Perform a quick test by logging in as the client user to confirm the role's functionality.
Dashboard Management:
Can create and delete dashboards.
Can view and modify filters in dashboards they have access to.
Chart Management:
Can create, edit, and delete charts.
Can view all charts they have permission to access.
Database Access:
Can access databases, schemas, and tables that are explicitly granted to them.
Dashboard Filters:
Can use advanced filters and cross-filtering in dashboards.
Limitations of Alpha Users
Restricted Administrative Actions:
Cannot manage roles or permissions for other users.
Cannot access or modify system settings or configurations.
Cannot assign or modify roles for other users.
Cannot change their role to Admin.
No Database Configuration:
Cannot add, edit, or delete database connections.
Limited Access Control:
Can only access datasets, databases, and schemas granted to them by an Admin or through specific role permissions.
Cannot edit or overwrite dashboards unless they are the owner of the dashboard.
Cannot view others' draft status dashboards.
Restricted Plugin and Feature Control:
Cannot enable or disable plugins.
Does not have access to SQL Lab or dataset creation.
Security Management:
Cannot create or modify roles.
Cannot edit global security policies.
Learn how to set up a machine to deploy a new OMS instance.
To deploy a new OMS instance, you first have to set up a machine. Machines are deployed using Jenkins.
UAT environments are usually deployed internally, so they may involve different steps; connect with the System Admin team for those.
Go to Jenkins
Log in with your credentials and proceed to deploy a new instance under the oms-env-setup segment.
Select oms-prod-launch-deploy from the preset options.
In case you are setting up a test production environment, use oms-launch-deploy. Test production accounts are basically replicas of production accounts, helping test real challenges faced during production spin-up.
Click on Build with Parameters from the left menu options and enter the required details:
1. Enter a HOST name
The host name should be the client name followed by uat or oms, depending on what kind of environment you're setting up. Replace xxx-oms with the instance name.
For example, if a company were named Wasatch Ski Company, the instance name could be wasatchski-oms and wasatchski-uat.
The name or abbreviation of the company has to be unique.
Multi-instance names contain an additional identifier between the name and instance type.
For example, wasatchski-us-oms and wasatchski-ca-oms will be separate instances for each country.
The domain name is prefilled as hotwax.io. Do not change this.
2. Select version
The ECR_IMAGE value selects the version of the OMS to deploy the system onto. To determine the current version/tag of the image for deployment, please consult the OMS 1.0 Releases documentation and locate the ECR_IMAGE value. If there's a need to deploy the system using a different ECR_IMAGE version, kindly use the specified image version accordingly.
3. Choosing a machine type
The machine size should be confirmed by the internal administration team.
Select the machine size from the EC2_INSTANCETYPE dropdown. Based on the business's order volume per day, select the instance type:
C5a.xlarge: Less than or equal to 100 orders per day.
C5a.2xlarge: More than 100 orders per day.
4. Timezone
The default AWS region is us-east-1. Do not change this without explicit confirmation.
Select the timezone where the client’s business is headquartered.
If the preferable timezone is not present in the dropdown, ask the system admin team to create it.
Refer to this table to know which timezone should be selected:
| Time Zone | UTC Offset | Daylight Saving Time | Example Locations |
| --- | --- | --- | --- |
| AEST (Australia Eastern) | UTC+10 | No | Brisbane, Sydney |
| ACST (Australia Central) | UTC+09:30 | No | Adelaide, Darwin |
| AFT (Afghanistan) | UTC+04:30 | No | Afghanistan |
| AKST (Alaska Standard) | UTC-09 | Yes (AKDT) | Alaska |
| AST (Atlantic Standard) | UTC-04 | Yes (ADT) | Antigua and Barbuda, Barbados, US Virgin Islands |
| CAT (Central Africa) | UTC+02 | No | Botswana, Malawi, Namibia, Rwanda, Sudan |
| CET (Central European) | UTC+01 | Yes (CEST) | Albania, Algeria, Austria, Belgium, Denmark |
| CST (Central Standard) | UTC-06 | Yes (CDT) | Houston, Mexico City, Winnipeg |
| EAT (East Africa) | UTC+03 | No | Ethiopia, Kenya, Madagascar, Somalia |
| EET (Eastern European) | UTC+02 | Yes (EEST) | Bulgaria, Finland, Greece, Lithuania |
| EST (Eastern Standard) | UTC-05 | Yes (EDT) | New York, Toronto, Atlanta |
| MSK (Moscow Standard) | UTC+03 | Yes (MSD) | Russia (parts), Belarus, Ukraine |
| MST (Mountain Standard) | UTC-07 | Yes (MDT) | Denver, Phoenix, Calgary |
| PST (Pacific Standard) | UTC-08 | Yes (PDT) | Vancouver, Los Angeles, Las Vegas |
| WAT (West Africa) | UTC+01 | No | Angola, Chad, Morocco, Nigeria |
| WET (Western European) | UTC+00 | Yes (WEST) | United Kingdom, Ireland, Portugal |
5. Build_Command
The default value loadOmsDefaultData should not be changed.
6. OFBIZ_INSTANCE_PREFIX
The default value HotWax should not be changed.
7. Plugins
External system integrations are referred to as plugins. If your instance incorporates integrations with external systems, include those plugins here. To identify the current versions of the plugins for deployment, please refer to the OMS 1.0 Releases documentation and locate the relevant plugin information.
You're now prepared to build your machine; simply click the build function.
It can take up to 15 minutes for an instance to become active after deployment. Until the instance comes online, the following message will show:
The site can’t be reached
If it takes longer than 25 minutes for your instance to come online, alert the system admin team.
Wait until the processing is finished and the box in the stage view turns green.
In the stage view, red box colors indicate an error. If this happens please report the error to the system admin team.
To check your deployment, open the Build History screen:
Go to Build History.
Click on the instance, generally the latest record, to open it.
Click Console Output to open the output.
Go to the bottom of the output and verify that you see a Finished: Success message.
Go to your instance; it should be online and working. If not, refresh your window. If a login screen appears, your system is now online.
Add company name
Load facilities
Load System Property data
Recently, our instances were migrated to the New Kubernetes Setup to enhance availability. When an instance is migrated or upgraded, a thorough sanity check is essential to ensure all functionalities are operational and new updates are implemented successfully.
Sanity tests on production instances differ as some instances lack Maarg setup and Solr-based reporting. No actions should be performed on production instances; all checks are done on a view-only basis.
User and System Access Verification
Confirm successful login and functionality across multiple user roles and applications.
Data Validation
Ensure data consistency and accuracy across various OMS pages.
Configuration and System Integration Check
Validate proper configuration and operation of integrated systems, including Solr, SFTP, Maarg, and Nifi.
Order Fulfillment Process Verification
Confirm the end-to-end order fulfillment process.
Reporting Functionality Validation
Verify reporting functionalities.
JWT Token Validation
Ensure proper JWT token generation and usage.
OMS Login
Log in to OMS with multiple user roles to verify role-based access and functionality.
Example: Login with Super permission role and Administrator role.
Launchpad Login
Log in to the Ofbiz application (Fulfillment, Preorder, and User roles) and Moqui-based applications (Available to Promise, Order Routing, and Company) with different user roles to check if login is successful.
SFTP Login
Log in with new credentials and verify file processing success.
Maarg Instance
Log in to Maarg.
Update database/SFTP credentials.
Check for write permissions (e.g., order fulfillment history) retained post-migration on entities.
Nifi Instance
Log in to Napita Production or UAT, depending on whether the migration is for the production or UAT instance.
Verify read-only access to DB/SFTP details by checking DB/SFTP credentials.
OMS Data
Confirm data consistency across all views and detail pages.
Possible Issues:
Product images not found.
Order item details missing.
Pages taking a long time to load.
Action: Report these issues to the sysadmin team.
Solr Cloud
Check data accuracy on the Search Admin page.
Possible Issues:
Hostname changes in the Overview section.
Empty Index operation section.
Action: Report these issues to the sysadmin team.
Plugin Migration
Ensure plugins were upgraded successfully by checking plugin details on the About page.
Job Status
Perform hourly job checks for the next 6 hours and report any failures.
Order Fulfillment
Verify the end-to-end order fulfillment process on Shopify, from import through completion.
Maarg Instance Flows
Review job runs to identify stuck jobs by checking the Error field.
Nifi Instance Flows
Verify functionality for:
DB (ensure the same read access as before migration).
SFTP-related processors (ensure all files can be consumed and placed on SFTP).
Solr-Based Reporting
Ensure recent data visibility and accuracy on Tathya.
Ofbiz-Based Reporting
Confirm functionality of order sync, inventory, and fulfillment reports by ensuring all reports have recent data and no discrepancies.
Moqui-Based Reporting
Check Maarg reports to confirm no stuck services or messages.
Validate Solr report functionality. If there are Solr-based reporting discrepancies, the JWT token might have expired.
Regenerate the JWT instance token if necessary.
Discover the importance of updating your OMS for enhanced security, performance, and efficiency, including crucial security enhancements, improved performance, and bug fixes.
Updating OMS is vital for security, performance, and efficiency. The latest versions often include essential security enhancements, safeguarding your business and customer data. Performance improvements lead to smoother operations, faster order processing, and enhanced user experience. Regular updates also address bugs, preventing disruptions in your order management process. In essence, OMS updates are a simple yet crucial step in maintaining the integrity and effectiveness of your business processes.
To learn more about OMS versions, refer to the OMS Release Version documentation.
Please check the current release version of your Order Management System (OMS) before you initiate any updates. This practice helps anticipate potential challenges, especially when transitioning from a significantly older version to a new one. Such updates may require manual data input and adherence to specific SQL processes. Failing to follow the correct procedures during the OMS update could lead to version management conflicts.
To check your OMS current release version, follow these steps:
Go to your OMS instance: https://{instanceName}.hotwax.io
Open the Hamburger menu and scroll to the bottom of the page.
Click on the 'Powered by HotWax Commerce' icon to access the Dashboard.
In the Dashboard, you'll find a table displaying the current release versions for oms and omssetup:

| Component | Version Field |
| --- | --- |
| oms | currentReleaseVersion |
| omssetup | currentReleaseVersion |

The oms version reflects your current OMS version. Compare it with the version you intend to deploy for accurate version management. This step ensures a seamless and informed update process.
Log into Jenkins:
Use your credentials for authentication.
Navigate to oms-env-setup:
Click on the oms-env-setup option from the available segments.
Access oms-update on the OMS update page:
Locate and click on the oms-update section.
Initiate Build with Parameters:
Select "Build with parameters" from the side menu.
Complete the Form:
| Parameter | Value |
| --- | --- |
| Host | Instance name for the update. |
| Domain | hotwax.io |
| ECR Image | 289432782788.dkr.ecr.us-east-1.amazonaws.com/omscoreimage |
| ECR Image Tag | Select the next release tag. |
| Build Command | ofbiz --load-data readers=ext-upgrade (for updating from a recent version to the latest version); build (for updating from an older version to the latest version) |
| Run Copy Image | NO |
| Solr Version | Latest version |
| Plugins | Any additional plugins to be added to the instance. |
Initiate Build:
Click on the "Build" button to execute the update process.
Verify your Machine:
Wait until the processing is finished, and the box in the stage view turns green.
In the stage view, red box colors indicate an error. Report any errors to the system admin team.
Check Deployment in Build History:
Go to the "Build History" screen.
Click on the instance, generally the latest record, to open it.
Click on "Console Output" to open the output.
Verify the presence of a "Finished: Success" message at the bottom.
Verify System Online:
Check your instance; it should be online and working.
If not, refresh your window. If a login screen appears, your system is now online.
Open the Hamburger menu and scroll to the bottom of the page.
Click on the 'Powered by HotWax Commerce' icon to access the Dashboard.
In the Dashboard, find a table displaying the current release versions for oms and omssetup, reflecting the new updated version.
1-9. Follow the same steps as for updating from a recent version to a new version:
Log into Jenkins.
Navigate to oms-env-setup.
Access oms-update on the OMS update page.
Initiate build with parameters.
Complete the form with the following information:
| Parameter | Value |
| --- | --- |
| Host | Instance name for the update. |
| Domain | hotwax.io |
| ECR Image | 289432782788.dkr.ecr.us-east-1.amazonaws.com/omscoreimage |
| ECR Image Tag | Select the next release tag. |
| Build Command | build (for updating from an older version to the latest version) |
| Run Copy Image | NO |
| Solr Version | Confirm the ready-to-deploy version with the Admin team. |
| Plugins | Any additional plugins to be added to the instance. |
Initiate the build.
Verify your machine.
Check deployment in build history and ensure the system is online.
10. Update Data:
Neglecting to update the upgrade steps and SQL may lead to version management conflicts. Therefore, it is essential to meticulously update both the upgrade steps and SQL to align with the new version.
Go to webtools.
Refer to this document for the chosen release version and upgrade steps.
Copy the data from the document.
In web tools, go to Import/Export > XML data import > Complete XML document.
Enter the data between <entity-engine-xml>{put the data here}</entity-engine-xml> and click Import Text (an illustrative example is shown after these steps).
Return to the document, find Update SQL, click on the links, and open webtools in a new tab.
On the Main page, go to ENTITY ENGINE TOOLS > Entity SQL Processor > Select Group as org.apache.ofbiz.
Add the SQL Commands as per the document and click Send.
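For illustration, a completed XML import might look like the snippet below. The SystemProperty record shown is a hypothetical example only; always paste the exact records provided in the release's upgrade document.

```xml
<!-- Hypothetical example; copy the actual records from the upgrade document. -->
<entity-engine-xml>
    <SystemProperty systemResourceId="general" systemPropertyId="release.version" systemPropertyValue="5.x.x"/>
</entity-engine-xml>
```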
11. Repeat the Update Process:
Continue the process iteratively until the OMS is on the latest version.
Always verify the Docker instance configuration with the development team before deployment.
Overview: This section describes the process for updating UAT instances on the Jenkins platform, providing clear instructions for different deployment scenarios.
Prerequisites
Access to Jenkins: [link to Jenkins.hotwax.co]
Necessary permissions to deploy to the target UAT instance
New Release Tag
Access Jenkins and locate the desired UAT instance.
Navigate to the deployment page and select "Build with parameters".
Input the release tag (e.g., v5.14.0) in the "Docker branch" field.
Specify required plugins in the "Plugins" field (format: plugin_name=develop=https://git_url).
Select "UAT" as the "Docker instance" and verify its configuration with the dev team.
Trigger the deployment.
Development Branch
Follow the same steps as for a new release tag, but input "Main" as the Docker branch and select "Dev" as the Docker instance.
Feature Tag
Follow the same steps as for a new release tag, but input the feature tag (e.g., v5.15.0-86cw63t1f-beta) as the Docker branch and select "UAT" as the Docker instance.
By following these steps and considering the outlined best practices, you can effectively update UAT instances to support various development and testing needs.
We have defined workflows with various stages to track the progress of the tickets or tasks in ClickUp.
Here's a breakdown of each status:
OPEN: The tickets are initially created in the open stage.
DISCUSSION PENDING: Move the ticket to this stage if there might be some discussion pending.
IN PROGRESS: Move the ticket to this stage when work on the task has started, and progress is being made.
HOLD: Move the ticket to this stage if work is temporarily paused or delayed due to external factors or dependencies.
CODE REVIEW: Move the ticket to this stage when the written code is ready for review by team members to ensure quality and adherence to coding standards.
CODE REVIEW FAILED: If the code review process identifies issues, move the ticket here until the concerns are addressed.
QA (Quality Assurance): Move the ticket to this stage when the task is ready for testing to ensure it meets the specified requirements and functions correctly.
QA FAILED: If issues or defects are found during testing, move the ticket to this stage for further work.
UAT (User Acceptance Testing): Move the ticket to this stage when the task is ready for UAT to verify the release.
SANITY: Move the ticket to this stage for a quick check on production, ensuring major functionalities work as expected, primarily validating the ticket on the instance where it originated.
CLOSED: Move the ticket to this stage once all processes, such as code review, QA, and UAT, have been passed, and the task is considered complete.
This workflow provides a clear path for the progression of tasks from initial creation to completion, allowing for effective tracking and communication throughout the development process.
The centralized workspace for managing projects, documentation, and client interactions is primarily structured around Product Management.
Primarily focused on sprint management, this space includes all essential components related to product development, maintenance, and improvements:
Ionic Apps: All tasks related to developing and maintaining mobile applications' UI built using the Ionic framework, including feature requests and sprint tasks.
Documentation: This section manages tickets related to documenting new features or improvements. All the tickets are created in backlog and assigned to the sprint based on priority.
OMS: Focuses on the core functionalities of the OMS, including:
Reports: Handling of Business Intelligence reporting requirements and improvement in BI Reports.
Monitoring: All tickets related to solving Grafana errors are placed in this section.
Nifi: Covers all tasks related to Apache NiFi, handling data pipelines, and system workflows.
Sprint Management: The central hub for managing all sprints across product management. This space tracks sprint cycles, backlog items, and ensures task allocation and completion within the sprint deadlines.
A dedicated space for tracking the work done for each client. Each client has their own folder with detailed tasks, and clients also have access to this space, allowing them to view the progress of tickets related to their specific projects.
Backlog: All non-urgent tasks are created in the backlog list. These tickets are revisited and prioritized for upcoming sprints.
In an agile project management approach, a sprint refers to a set period during which a specific set of tasks or goals is completed. We manage sprints in the product management space.
The team will work in sprints, and each sprint will last two weeks.
A Business Analyst (BA) or Product Associate (PA) creates tickets that include detailed requirements. After discussing these requirements, the ticket will be added to the sprint based on priority.
When creating the ticket, the BA ensures to include a comprehensive description:
For issues: detailing the current behavior, specifying the identified issue, and articulating the expected behavior.
For new requirements: outlining the requirements and clearly defining the expected behavior.
If the BA creates a ticket directly in the project's Codev, they will also ensure it's added to the OMS backlog and will be assigned to the Sprint Manager. This helps maintain a consolidated list of work items. This will give visibility and control over which tickets to select for the sprint.
Urgent Tickets: Any urgent tickets that need immediate attention will be discussed and handled within the current sprint on priority.
The QA is responsible for ensuring that tickets in the User Acceptance Testing (UAT) phase are working correctly in the production environment before closing them.
A sprint will only be considered closed when Quality Assurance (QA) is completed for all the tickets within that sprint.
If the Business Analyst (BA)/Product Associate (PA) is actively working on a ticket in parallel with the developer, they will assign that ticket to themselves. This ensures transparency about who is currently involved in the work. Once the BA completes their work on the ticket, they will remove themselves from the assignment. This step signifies that the BA's engagement with that particular ticket is finished.
If the Business Analyst (BA)/Product Associate (PA) is actively working on any task, they will ensure to create the ticket, add it to the current sprint, and assign it to themselves. No task is small.
After development is complete, the feature or fix should be verified on the feature branch. Only after passing QA should it be merged into the develop or release branch.
This document provides a detailed, sequential walkthrough on deploying a fresh instance of HotWax Commerce.
This document offers a detailed guide for deploying HotWax Commerce tailored to meet the specific requirements of enterprise retailers
The HotWax Commerce support team initiates the launch of a machine to deploy a HotWax Commerce instance. This step establishes the foundational infrastructure for hosting and operating HotWax Commerce. The team configures the machine with precision, aligning it with client requirements for production or testing purposes.
Once the instance is operational, retailers will get a login prompt. Each instance comes with a default user, which is omitted here for security purposes.
During initial login, you'll likely be prompted to reset your password for security purposes. Once you log in ensure that all menus within the Sidebar and EXIM screen have loaded correctly. It is recommended that you create new users and disable the default user.
DBIC (Doing Business in Countries) is an essential feature for tailoring HotWax Commerce to specific countries where the retailer has fulfillment locations. When a retailer utilizes one HotWax Commerce Instance to cater to different countries, it is crucial to include only the countries relevant to the specific instance. Read here to learn how to add DBIC in HotWax Commerce.
In HotWax Commerce, the Product Store represents a collection of configurations that can be applied to one or multiple Shopify stores. When a retailer deploys HotWax Commerce, the Product Store allows them to specify brand-specific configurations, such as default Inventory Facility, Order Brokering, Pre-Order Auto Releasing, Allow Split, etc. These settings allow retailers to configure HotWax Commerce for their unique business requirements.
When you deploy HotWax Commerce, one default product store is already created, which needs to be configured as per requirement. However, if the retailer has multiple brands, each with a unique catalog, a new product store needs to be created for all the brands.
Facilities are physical locations such as a warehouse, distribution center, or store where inventory is stored, managed, and processed. To establish facilities, it is necessary to create both the facilities and their internal locations within OMS. Typically, upon creating facilities, the associated locations are generated automatically. In cases where they are not generated, manual addition of location data is required. For an efficient bulk creation of facilities and their corresponding locations during the initial setup, it is advisable to utilize the facilities CSV, or you can create facilities with our Facilities application.
In HotWax Commerce, facility groups are used to define the scope and functionality of the facility for omnichannel order management. For instance, including a facility in the Online Facility group indicates that this facility will be available to sell its inventory to online channels. Facilities in the Pickup group will be available for BOPIS, and facilities under the Brokering subtype will be the facilities where orders can be brokered. You can learn how to add facilities and manage facility groups through our detailed document.
HotWax Commerce has default settings tailored for US retailers. For non-US retailers, adjustments are needed to align with their business location. The System Property data encompasses a range of configurations that influence the fundamental settings governing how your instance operates. Ensuring accuracy in these configurations is essential. Read our document on System Property Data to learn how you can configure system property data such as currency, country, and Shipment Weight Units.
SFTP (Secure File Transfer Protocol) is used to secure file transfers between HotWax Commerce and another system. Configuring SFTP is crucial for smooth and secure data exchange within your system and HotWax Commerce. Read our documentation to learn how you can configure SFTP for seamless data transfer.
Solr, an open-source enterprise search platform, provides powerful search capabilities, making it indispensable for efficient data management. Configuring Solr indexing in HotWax Commerce is vital for enhancing data retrieval and search operations within the platform. You can add data to the Solr Index through the steps given in the document.
HotWax Commerce's Store Fulfillment App empowers retailers to efficiently manage online order fulfillment from their stores. By integrating with multiple Third-Party Logistics companies, known as Carriers, HotWax Commerce enables the generation of shipping labels based on store and customer addresses, as well as package weight and dimensions. Each Carrier offers a Shipping Gateway software system, facilitating the request for shipment quotations and labels during the fulfillment process. Read our document to learn how to set up shipping carriers, shipping methods, and integration with the shipment gateway, ensuring a streamlined and cost-effective order fulfillment process. Users are also required to add shipping boxes to ensure precise shipping cost calculation and accurate label generation.
The HotWax Commerce integration app on Shopify facilitates the connection between Shopify stores and HotWax Commerce's Omnichannel Order Management System (OMS). This integration enhances operational efficiency and enables consistent customer experiences across multiple channels. Read our user manual to learn how to install the HotWax Commerce App for your Shopify Store.
The HotWax Commerce integration layer maintains a structured repository of integration mappings between Shopify and HotWax Commerce, covering locations, payment methods, shipping methods, product types, and price levels. Some default mapping data needs to be included when connecting a Shopify store to ensure that data flows smoothly between both systems with correct mappings. If you're only using the default Shopify Shop ID, this data can be imported directly.
However, retailers can have multiple Shopify Shops based on their business scope. Therefore, it is imperative to map the integrations available in HotWax Commerce with different Shopify shops according to business requirements. Users need to periodically amend mappings to ensure alignment with the current operational landscape. Due to these semi-frequent adjustments, users require access to update these mappings themselves. Read our document to learn how to manage integration mappings between HotWax Commerce and Shopify Shop directly from the UI without relying on external support.
HotWax Commerce requires accurate product data to track inventory changes and ensure near real-time inventory counts on Shopify. It also facilitates order downloads and expedites the fulfillment process. Here’s how you can import all the products created in Shopify to HotWax Commerce.
HotWax Commerce ensures that order information is always updated to streamline the process of fulfilling orders. To integrate HotWax Commerce with Shopify, retailers are required to import all open sales orders from a particular time frame that HotWax Commerce must fulfill. Discover how to initiate the initial order sync process between HotWax Commerce and Shopify to seamlessly import open and unfulfilled orders. Orders are initially synced in HotWax Commerce in the created state but they are not sent for fulfillment until approved. So make sure orders are approved as per the retailers’ requirements.
HotWax Commerce provides a unified view of inventory by seamlessly connecting with various technology systems used by retailers, including Enterprise Resource Planning (ERP), Point of Sale (POS), and Warehouse Management Systems (WMS). HotWax Commerce ensures that inventory updates from all these systems are synchronized to support various business scenarios. HotWax Commerce offers out-of-the-box integrations with systems such as NetSuite and RetailPro to sync inventory. Retailers can also import inventory manually through a CSV file or contact the HotWax Commerce support team for possible integration with the systems in their tech stack.
HotWax Commerce determines the "Available to Promise (ATP)" or the amount of inventory that can be sold and then sends it to Shopify. This makes HotWax Commerce the ultimate authority on inventory availability. Here’s how you can upload inventory from HotWax Commerce to Shopify.
Order routing allows the OMS to determine the best location to fulfill an order according to a set of criteria. These criteria cover both which order (or part of an order) to broker and how to find inventory for it. Merchants use configurable routing to create order fulfillment strategies best suited for their business. User-configurable routing rules allow merchants to optimize fulfillment cost, inventory, and workload based on arbitrary order and fulfillment location parameters such as order total, SKUs, product category, facility type, operating hours, or fulfillment capacity. Read our comprehensive document to learn how to schedule brokering runs for order routing.
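To make the idea concrete, the sketch below shows one way a routing pass could filter candidate facilities against inventory and capacity rules and then rank the survivors. The data model and tie-breaking logic are invented for illustration and are not the OMS's actual brokering engine.

```python
# Illustrative routing pass: filter candidate facilities by configurable
# criteria, then rank them. Field names are invented for the example.
from dataclasses import dataclass

@dataclass
class Facility:
    facility_id: str
    facility_type: str          # e.g. "STORE" or "WAREHOUSE"
    atp: int                    # available-to-promise for the ordered SKU
    open_orders_today: int
    daily_capacity: int

def route_order_item(qty: int, candidates: list[Facility]) -> str | None:
    eligible = [
        f for f in candidates
        if f.atp >= qty                              # inventory rule
        and f.open_orders_today < f.daily_capacity   # fulfillment-capacity rule
    ]
    if not eligible:
        return None                                  # e.g. park as unfillable
    # Prefer facilities with the most spare capacity; tie-break on ATP
    best = max(eligible, key=lambda f: (f.daily_capacity - f.open_orders_today, f.atp))
    return best.facility_id

stores = [
    Facility("STORE_AUSTIN", "STORE", atp=3, open_orders_today=40, daily_capacity=40),
    Facility("WAREHOUSE_TX", "WAREHOUSE", atp=120, open_orders_today=10, daily_capacity=500),
]
print(route_order_item(2, stores))   # -> WAREHOUSE_TX (the store is at capacity)
```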
HotWax Commerce's Users application allows businesses to create and manage users within the HotWax Commerce Order Management System (OMS). By establishing user profiles, organizations can grant access to critical operations involving managing orders and fulfillment. Here’s how you can create users with store associate and picker roles to manage order fulfillment.
Launch Machine
Initial Login
Log in to the instance.
Reset the default password.
Check Sidebar and EXIM screen for correct loading.
Add DBIC
Configure DBIC for specific countries relevant to the instance.
Add Product Store
Configure default product store as per requirement.
Create new product stores for multiple brands if needed.
Load Facility
Utilize facilities CSV or Facility management application for efficient bulk creation.
Create Facility Groups
Define facility groups for omnichannel order management.
Assign facilities to appropriate groups based on functionality.
Configure System Property Data
Adjust system property data for non-US retailers.
Configure settings such as currency, country, and shipment weight units.
Configure SFTP
Set up SFTP for secure data transfer within the system.
Add Solr Indexes
Configure Solr indexing for efficient data retrieval and search operations.
Add Shipping Gateways
Configure shipping gateways for seamless integration with third-party logistics companies.
Add shipping boxes for accurate shipping cost calculation and label generation.
Install HotWax Commerce integration app on Shopify store
Configure Mappings between HotWax Commerce and Shopify
Manage integration mappings between HotWax Commerce and Shopify Shop.
Ensure correct mappings for locations, payment methods, shipping methods, product types, and price levels.
Sync Products
Import products created in Shopify to HotWax Commerce.
Sync Orders
Initiate initial order sync process between HotWax Commerce and Shopify.
Import Inventory
Sync inventory from various technology systems or manually import through CSV.
Sync Inventory to Shopify
Upload inventory from HotWax Commerce to Shopify.
Schedule Brokering
Schedule brokering runs for order routing.
Create users for Fulfillment
Set up user accounts for fulfillment operations.
Create Pickers for picking order items
User permissions in Tathya determine the level of access and actions granted to individual users within the system. These permissions are categorized into various roles, each with specific functionalities and access levels.
Clients have access to multiple predefined dashboards to review their performance on a daily, weekly, or monthly basis. They are granted a specific Dashboard permission that lets them view their designated dashboard but restricts them from editing or creating any dashboards. All data within the dashboard is interlinked, so any discrepancies will be promptly reflected in the dashboard. If a client wishes to make changes to the dashboard, they can contact HotWax Support.
For internal users, we generally provide two types of permissions:
Basic Business Analyst Permission: Grants access to create their own charts and dashboards and view any existing ones. However, users with this permission do not have default access to the SQL lab, cannot view any draft dashboards or charts, and cannot edit other owners' charts. If SQL permission is added to this role, users gain access to the SQL lab, which allows them to save queries and view query history.
SuperBusinessAnalyst Permission: Grants access to more advanced users familiar with dashboard reporting. Users with this permission can access the SQL lab by default but cannot edit dashboards or reports owned by others without explicit permission. If they wish to edit another user's chart, they must request the admin or creator to include them in the owners' section of the chart/dashboard.
Note: Users can also request an elevated permission, which grants them access to all permissions except for creating and managing users and their roles. This permission allows them to edit dashboards or reports owned by others.
Admin users in Tathya hold the highest level of access and are responsible for system management and administration. Admin users have all the SuperBusinessAnalyst permissions along with access to view and edit charts owned by others. Furthermore, they have authority over various aspects, including managing user registrations, handling access requests, and organizing data through tagging for efficient categorization. Additionally, admins oversee role management, which involves editing, deleting, adding, and listing other users' dashboards, charts, and reports, including their draft dashboards.
Learn how rolling back an OMS version provides a safety net for unforeseen issues, ensuring minimal downtime and preserving a positive user experience.
For patch releases, rollbacks are useful for issue resolution. However, it's strongly discouraged to attempt major release rollbacks without developer assistance. Major releases have complex changes and dependencies that may not be easily reversible. Doing so without expertise can result in unexpected complications and system instability. Seek developer guidance for a safer approach.
Rolling back a release is crucial in software development, offering a safety net for unforeseen issues. Despite thorough testing, unexpected bugs or performance issues can occur. The ability to quickly revert to a previous version ensures minimal downtime, preserving a positive user experience and preventing potential financial and reputational losses.
Log into Jenkins:
Link:
Use your credentials for authentication.
Navigate to oms-env-setup:
Click on the oms-env-setup option from the available segments.
Access oms-update on the OMS update page:
Locate and click on the oms-update section.
Initiate Build with Parameters:
Select "Build with parameters" from the side menu.
Complete the Form: Fill in the build parameters; a table of the values used for a rollback appears later in this document.
Initiate Build:
Click on the "Build" button to execute the update process.
Verify your Machine:
Wait until the processing is finished, and the box in the stage view turns green.
In the stage view, red box colors indicate an error. Report any errors to the system admin team.
Check Deployment in Build History:
Go to the "Build History" screen.
Click on the instance, generally the latest record, to open it.
Click on "Console Output" to open the output.
Verify the presence of a "Finished: Success" message at the bottom.
Verify System Online:
Check your instance; it should be online and working.
If not, refresh your window. If a login screen appears, your system is now online.
Open the Hamburger menu and scroll to the bottom of the page.
Click on the 'Powered by HotWax Commerce' icon to access the Dashboard.
In the Dashboard, find a table displaying the current release versions for oms, reflecting the older version.
Rollback SQL Upgrades:
If any SQL upgrades were executed, it's essential to revert these changes to ensure the system returns to its previous state and functions seamlessly.
Undo Upgrade Steps:
If you've executed any specific upgrade steps that led to data creation, it's necessary to reverse those steps by deleting the associated data. This step is crucial to maintain the system's previous state.
Manually Remove Upgraded Data:
Should there be any additional data introduced during the upgrade process, it's recommended to manually delete this data using Webtools. This ensures a clean and accurate representation of the system, aligning with its prior configuration.
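As a purely hypothetical illustration of pairing upgrade steps with their reverts, the sketch below applies an example upgrade to an in-memory database and then rolls it back in reverse order. The table, index, and statements are invented; a real rollback must mirror whatever SQL your specific release executed.

```python
# Hypothetical example: pair each upgrade statement with its revert and apply
# the reverts in reverse order. Object names are invented for illustration.
import sqlite3  # stand-in for the production database driver

UPGRADE_AND_REVERT = [
    ("CREATE TABLE example_upgrade_data (example_id TEXT PRIMARY KEY)",
     "DROP TABLE example_upgrade_data"),
    ("CREATE INDEX example_idx ON example_entity (example_id)",
     "DROP INDEX example_idx"),
]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE example_entity (example_id TEXT)")  # demo base table
for upgrade_sql, _ in UPGRADE_AND_REVERT:
    conn.execute(upgrade_sql)                                  # what the release did
for _, revert_sql in reversed(UPGRADE_AND_REVERT):
    conn.execute(revert_sql)                                   # the rollback
conn.close()
```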
Learn about the operational plugins available for HotWax Commerce OMS.
(From release v4.13.0 onwards)
By actively monitoring our systems, we can quickly detect potential issues and address them before they become more serious problems. Continuous system monitoring allows us to track performance, identify irregularities, and resolve bottlenecks in real-time. System monitoring helps minimize downtime, enhance user experience, and prevent disruptions to business processes.
Monitoring the Client's Operational Dashboard is important to ensure the system works correctly. This involves checking if orders are being imported accurately, identifying orders that have been approved but not fulfilled for an extended period, and tracking the number of orders that have gone into unfillable parking.
Frequency: 3 times a day (9 AM, 3 PM, 8 PM)
Order Sync Dashboard:
Check all reports on Tathya to ensure no orders were missed during import.
Verify that no order has remained in the created state for more than 3 hours.
Inventory and Fulfillment Dashboards:
Ensure there are no discrepancies in the charts.
Refer to the reports documentation to learn more about these reports.
We need to check these dashboards for all clients.
Jobs need to be monitored because a job that takes more than 45 minutes can create issues. Additionally, if an instance goes down while a job is running, the job can get stuck in a running status. It is also important to identify and troubleshoot the cause if a job fails frequently.
Frequency: 3 times a day (9 AM, 3 PM, 8 PM)
Steps:
If a job is stuck, update its status from running to failed by editing the record in WebTools; the status ID is edited on the Job Sandbox record.
Note that the reset inventory file from external systems may take longer but should not exceed 45 minutes.
Check if any job has failed multiple times.
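A small sketch of the stuck-job check is shown below. The job records, status values, and field names are stand-ins for whatever the job list or Job Sandbox export actually provides.

```python
# Sketch of the stuck-job check: flag running jobs that started more than
# 45 minutes ago. Status values and field names are assumptions.
from datetime import datetime, timedelta

MAX_RUNTIME = timedelta(minutes=45)

def find_stuck_jobs(jobs: list[dict], now: datetime | None = None) -> list[dict]:
    """Return running jobs that started more than 45 minutes ago."""
    now = now or datetime.now()
    return [
        j for j in jobs
        if j["status"] == "SERVICE_RUNNING" and now - j["started_at"] > MAX_RUNTIME
    ]

jobs = [
    {"job_id": "IMP_ORDERS_01", "status": "SERVICE_RUNNING",
     "started_at": datetime.now() - timedelta(hours=2)},
    {"job_id": "UPD_STATUS_02", "status": "SERVICE_FINISHED",
     "started_at": datetime.now() - timedelta(hours=3)},
]
for job in find_stuck_jobs(jobs):
    print(f"{job['job_id']} has been running too long; mark it failed in WebTools")
```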
Reset inventory files come from an external system and set the inventory for products in the OMS. Resetting the inventory is essential to maintain an accurate inventory count.
Frequency: Daily, primarily checked at 3 PM
WMS/ERP Systems:
Navigate to Hamburger Menu -> MDM -> EXIM -> Imports -> Warehouse -> RESET INVENTORY.
Shopify:
Navigate to Hamburger Menu -> MDM -> EXIM -> Shopify Jobs -> MDM -> Shopify Inventory Sync MDM.
File Reception:
Confirm whether the reset inventory file has been received from the external system.
Check for invalid file formats or records. If the file is invalid, report the issue to the client.
File Timing: Ensure all inventory files are processed by the per-client cutoff times listed in the file-timing table later in this document.
If the file has not been received by the above timings:
If the file is not found, contact the client to inquire about the delay.
If the file is found, wait till it processes.
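The timing check above reduces to comparing each client's file arrival status against its cutoff. The sketch below uses the cutoff times listed in the client file-timing table later in this document; the received flags are illustrative inputs.

```python
# Sketch of the file-timing check: flag clients whose reset inventory file
# has not arrived by their cutoff. The "received" flags are example inputs.
from datetime import datetime, time

CUTOFFS = {"KREWE": time(13, 0), "UCG": time(15, 0), "ADOC": time(15, 0)}
# NEWERA's 10 PM / 9 AM window spans midnight and would need special handling.

def overdue_clients(received: dict[str, bool], now: datetime) -> list[str]:
    return [
        client for client, cutoff in CUTOFFS.items()
        if not received.get(client, False) and now.time() > cutoff
    ]

# e.g. at the 3 PM check, KREWE's file is still missing -> follow up with the client
print(overdue_clients({"KREWE": False, "UCG": True, "ADOC": True},
                      datetime(2024, 5, 1, 15, 5)))
```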
The Data Manager Logs record details of file processing from external systems, including log ID, user, import time, imported file, error records, and the file's processing start and end times.
Frequency: Daily
Pending and Running Files:
Reports have been set up for each client on Tathya.
Check logs for all clients to ensure no files are pending or running for too long.
Failed Files:
Investigate any file failures.
When encountering an issue, the first step is to attempt resolution independently using available knowledge, tools, and resources. If the problem persists, escalate it by consulting with your mentor. For new issues, document the troubleshooting process and create a ticket for tracking the troubleshooting steps. Also, create a ticket in the OMS Backlog if any development work is needed for the issue.
Always inform the client as soon as an issue arises, letting them know that it is being investigated (e.g., "We have identified an issue and are looking into it"). Once the issue is resolved, update the client with the outcome. If the issue is critical, also explain the cause to keep them informed (e.g., "The issue occurred due to...").
Determine the reason for the failure, especially if it is uncommon, and act accordingly.
Verify that the file has been placed at the expected location.
Report any uncommon failures.
Operational plugins and their repositories:

| Plugin | Version | Repository |
| --- | --- | --- |
| netsuite | v4.16.0 | https://git.hotwax.co/plugins/netsuite.git |
| klaviyo | v4.12.0 | https://git.hotwax.co/plugins/klaviyo.git |
| shipstation | v4.12.0 | https://git.hotwax.co/plugins/shipping-integrations/shipstation.git |
| shipt | v5.7.0 | https://git.hotwax.co/plugins/shipping-integrations/shipt.git |
| c807 | v5.11.0 | https://git.hotwax.co/plugins/shipping-integrations/c807.git |
| guatex | v5.7.0 | https://git.hotwax.co/plugins/shipping-integrations/guatex.git |
| terminal-express | v5.7.0 | https://git.hotwax.co/plugins/shipping-integrations/terminal-express.git |
| cargotrans | v5.7.0 | https://git.hotwax.co/plugins/shipping-integrations/cargotrans.git |
| easypost | v5.5.0 | https://git.hotwax.co/plugins/shipping-integrations/easypost.git |
| sm-setup | v5.7.0 | https://git.hotwax.co/sm/stevemadden.git |
| sm-orsi | v5.7.0 | https://git.hotwax.co/sm/sm-orsi-connector.git |
| canada-post | v5.7.0 | https://git.hotwax.co/plugins/shipping-integrations/canada-post.git |
Inventory file cutoff times by client:

| Client | Cutoff time |
| --- | --- |
| KREWE | 1 PM |
| UCG | 3 PM |
| ADOC | 3 PM |
| NEWERA | 10 PM / 9 AM |
Jenkins build parameters:

| Parameter | Value |
| --- | --- |
| Host | Instance name for the update |
| Domain | hotwax.io |
| ECR Image | 289432782788.dkr.ecr.us-east-1.amazonaws.com/omscoreimage |
| ECR Image Tag | Select the [old release tag](https://docs.hotwax.co/deployment-and-configurations/additional-resources/omsreleases) |
| Build Command | build |
| Run Copy Image | NO |
| Solr Version | Latest version |
| Plugins | Leave empty |
Tathya permissions by role:

| Permission | Client | Basic Business Analyst | SuperBusinessAnalyst | Elevated permission (see note) | Admin |
| --- | --- | --- | --- | --- | --- |
| Access to Dashboard Menu | YES | YES | YES | YES | YES |
| Access to Draft Dashboards | NO | NO | NO | YES | YES |
| Create Charts/Dashboards/Reports | NO | YES | YES | YES | YES |
| Edit Anyone's Charts/Dashboards/Reports | NO | NO | NO | YES | YES |
| SQL Lab Access | NO | NO | YES | YES | YES |
| Manage User Registration | NO | NO | NO | NO | YES |
| Manage Roles | NO | NO | NO | NO | YES |
| Write-only Access to Database | NO | NO | YES | YES | YES |
| Read-only Access to Database | NO | YES | YES | YES | YES |