Portuguese Version: https://bmcsites.force.com/casemgmt/sc_KnowledgeArticle?sfdcid=000187038
Two different types of CSV file parser ETL can be created. One CSV parser imports data in a columnar format (each row of data has multiple columns with different metrics) and the other in a vertical format (each row of data in the input file is for a single metric). Here is the TSCO 20.02 documentation regarding the two available CSV Parser ETLs (for other TSCO versions please consult the official product documentation):
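To make the columnar-versus-vertical distinction concrete, here is a minimal sketch in plain Python. The column names and sample values are made up for illustration and are not the documented TSCO input formats (see the product documentation linked above for those):

```python
import csv
import io

# Hypothetical columnar-format input: each row carries several metrics
# in separate columns (column names are illustrative only).
columnar = """HOSTNAME,TS,CPU_UTIL,MEM_UTIL
host01,2024-01-01 00:00,0.45,0.72
host01,2024-01-01 01:00,0.51,0.70
"""

# Hypothetical vertical-format input: each row carries a single metric.
vertical = """HOSTNAME,TS,METRIC,VALUE
host01,2024-01-01 00:00,CPU_UTIL,0.45
host01,2024-01-01 00:00,MEM_UTIL,0.72
"""

columnar_rows = list(csv.DictReader(io.StringIO(columnar)))
vertical_rows = list(csv.DictReader(io.StringIO(vertical)))

# One columnar row holds two metrics; the same two samples take two
# vertical rows, one metric per row.
print(columnar_rows[0]["CPU_UTIL"], columnar_rows[0]["MEM_UTIL"])
print(vertical_rows[0]["METRIC"], vertical_rows[0]["VALUE"])
```

The same data fits either shape; what differs is whether a row describes one metric or many, which is why the two layouts are handled by two different parser ETL types.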
Outside of the TrueSight Capacity Optimization (TSCO) product documentation, there is also documentation that provides additional examples and recommendations related to importing data via a CSV parser. For example, this Hot Tip document:
Note that the CSV parser can have a steep initial learning curve, but once you've created one successfully, creating others will be much easier.

Q: Can a TSCO Generic CSV Parser ETL import Business Driver (transaction count) data?
Yes. The CSV parser can import both System Data (metrics like CPU Utilization, I/O rates, and so on) and Business Driver data (metrics that represent business drivers, such as web page hits, number of users logged in, and so on). If importing both System Data and Business Driver data, it is necessary to create two separate CSV Parser ETLs -- one to bring in the system data and one to bring in the business driver data. The Supported Datasets section of the documentation states, "This ETL supports vertical datasets: SYSDAT, WKLDAT. Note that only one dataset at a time is supported." So you'd want to separate your input data into two CSV input files broken down by data type, and then import the system data via one CSV Parser ETL and the business driver data via a second, separate CSV Parser ETL.

Q: Can a single TSCO Generic CSV Parser ETL import several different System Metrics via a single CSV input file?
Yes. In the BMC Capacity Optimization integration with the CSV file parser online help page, in the "Input file format for system metrics" section at the bottom, take a look at the first sample input file. That input file brings the CPU_UTIL, NET_IN_BYTE_RATE, and BYIF_IN_BYTE_RATE metrics into BCO via a single ETL. TSCO can import as many different system metrics as you'd like via a single CSV input file and, separately, as many different business driver metrics as you'd like via a single CSV input file. But they need to be separate files imported by two separate CSV Parser ETLs.
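Acting on the "two separate files" guidance above usually means a pre-processing step that splits a mixed extract by metric type before the two ETLs run. A minimal sketch in plain Python, where the column layout and the set used to classify metrics are assumptions for illustration (only CPU_UTIL, NET_IN_BYTE_RATE, and BYIF_IN_BYTE_RATE come from the documentation sample mentioned above):

```python
import csv
import io

# Hypothetical mixed extract: system metrics and business-driver
# metrics interleaved in one file (layout is illustrative only).
mixed = """ENTITY,TS,METRIC,VALUE
host01,2024-01-01 00:00,CPU_UTIL,0.45
host01,2024-01-01 00:00,NET_IN_BYTE_RATE,1200
appA,2024-01-01 00:00,TRANSACTION_RATE,350
"""

# Assumed classification: which metric names count as system data.
SYSTEM_METRICS = {"CPU_UTIL", "NET_IN_BYTE_RATE", "BYIF_IN_BYTE_RATE"}

system_rows, driver_rows = [], []
for row in csv.DictReader(io.StringIO(mixed)):
    if row["METRIC"] in SYSTEM_METRICS:
        system_rows.append(row)
    else:
        driver_rows.append(row)

# Each list would then be written to its own CSV file: one imported by
# the system-data (SYSDAT) ETL, one by the business-driver (WKLDAT) ETL.
print(len(system_rows), len(driver_rows))
```

The split itself is trivial; the important part is that each output file contains only one dataset type, matching the "only one dataset at a time" restriction quoted above.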
Q: What are the standard field separators to use for the CSV parser ETL?
The most common field separators are the comma (,) and the semicolon (;), but other field separators are supported, and any field separator can be specified via the 'CSV Separator' ETL Run Configuration parameter.

Q: What is the best timestamp format to use in a CSV input file?
Although different timestamp formats can be used, the best timestamp format is YYYY-MM-DD HH:MM when the time zone (TZ) of the ETL Engine (EE) matches the time zone of the TSCO Application Server (AS), or YYYY-MM-DD HH:MM:SS +0000 (where 0000 is the TZ offset) when the time zone of the ETL Engine doesn't match the time zone of the Application Server. A mismatch between the time zone of the EE and the AS is particularly common in BMC Helix Continuous Optimization SaaS environments, since the AS uses UTC, which may differ from the local TZ of the EE.

Q: Can the pipe (|) character be used as a field separator?
The pipe symbol can be used as a field separator, but it is a reserved character, so (a) it won't be automatically detected as a field separator, and (b) if you want to specify it as a field separator, you need to escape it in the 'CSV Separator: Specified' edit box. So you'd specify this as the 'CSV Separator': \| (backslash, pipe). Note that generally the field separator will be a single character and should not have spaces around it. If there are spaces surrounding the field separator (for example, "a | b | c | d"), those need to be included as part of the specified field separator (or removed from the input file).
So, in that case, what you'd actually specify as the field separator is (between the double quotes): " \| " (space, backslash, pipe, space).

Sample steps to create an Open ETL template
(1) Under Administration -> ETL & System Tasks -> ETL Tasks, select Add -> Add/Edit Open ETL template.
(2) In the Add/Edit Open ETL template screen, select "Create a new Open ETL Template". Click Next.
(3) Select a 'Datasource type' of 'Comma Separated Value (CSV) file'. Select data 'Performance or Configuration metrics for a set of systems (any type)'. Click Next.
(4) In 'Select entity type of your imported data', don't select anything. Leave that list blank so the data can be applied to any entity type. Click Next.
(5) For 'Select CSV file to upload', click the 'Browse' button and select your input file. Click Next.
(6) The 'Map imported column to expected dataset column' screen tells you which fields need to be defined in your input CSV file and lets you map the fields to the desired TSCO metrics. Two fields that are necessary even when importing configuration data are:
* TS -- The timestamp column that defines the beginning of the range during which this data is applicable to the target machine.
* DURATION -- A duration column that defines how long this data is applicable to the target machine.
The TS field must be set to a timestamp value; you could set it to the beginning of the month, the beginning of the day, or any other appropriate time. The DURATION value can be left blank for configuration data import if you want this configuration metric to always be valid. If in the future you import a different value for the machine (for example, the hardware changed), the current configuration metric is given an end date and the new configuration metric becomes current for the machine. Map the fields as necessary and then click Next.
(7) Give the Open ETL template a name and a description. Click Finish.
(8) Back in the ETL Tasks menu, click Add -> Add ETL.
(9) In the ETL Run Configuration:
(10) Save the updated ETL configuration. You can now run the ETL and it should import this new metric (when the Run Configuration 'Execute in simulation mode' option is set to 'no').

One thing to think about when creating this ETL is how it will map this metric to an existing entity in the Entity Catalog. When you just use the hostname as the 'DS_SYSNM' value, you are doing what is called a 'single' lookup, where the ETL looks for an existing entity that has a DEFAULT lookup value that matches the hostname or a _COMPATIBILITY_ lookup value that matches the hostname. This will frequently work the way you want, but you can have a situation where multiple entities in the Entity Catalog match that hostname on a lookup, and then the ETL won't know what to do. When we create an ETL we'd usually define a 'multiple' lookup, which provides a better way to map to existing entities in the environment, but multiple lookups are more complex to define. So you can start with a single lookup and see how it goes, but be aware that this could lead to issues where the ETL isn't quite able to figure out what to map to (and if you set 'Skip entity creation', the metric won't be imported).

Here is a KA that talks about setting up one of these Generic CSV Parser ETLs more generally (in relation to importing business driver data): 000137040: In TSCO, what is the best way to import Business Driver data? (https://bmcsites.force.com/casemgmt/sc_KnowledgeArticle?sfdcid=000137040) It in turn links to a KA that describes single versus multiple lookups in more detail.

Custom BIRT Reports
The following KA has the links to the BCO How-To Guides, including 'How to create custom reports in BCO', a high-quality document related to the creation of custom BIRT report templates: