Users Guide

Table of contents
  1. Story
    1. Tutorial Slides
      1. Title Slide
      2. Introduction
      3. Spotfire 5 Users Guide Knowledge Base 1
      4. Spotfire 5 Users Guide Spreadsheet
      5. Spotfire 5 Users Guide Dashboard 1
      6. Spotfire 5 Users Guide Dashboard 2
      7. Spotfire 5 Users Guide Knowledge Base 2
      8. Spotfire 5 Users Guide Knowledge Base 3
      9. Advantages
      10. Spotfire 5 Users Guide Content Can Be Further Structured
  2. Spotfire Dashboard
  3. Research Notes
  4. Introduction
    1. Introduction
    2. The User Interface
    3. Logging In
    4. Logging in Details
      1. Details on Manage Servers
      2. Details on Add/Edit Server
      3. Details on Change Password
  5. Data
    1. Data Overview
      1. In-Memory Data
      2. In-Database Data
    2. Data in Spotfire
      1. Working With Large Data Volumes
      2. Working With In-Database Data
      3. Working With Cubes
    3. Loading Data
      1. Loading Data Overview
      2. Open File
        1. Opening an Analysis File
        2. Opening a Text File
        3. Opening an Excel File
        4. Opening a SAS File
        5. Details
          1. Details on Excel Import
          2. Details on Import Settings
          3. Details on Import Settings – Advanced
          4. Details on SAS Data Import
      3. Open From a Library
        1. Opening Files from the Library
        2. Opening an Information Link
        3. Searching the Library
        4. Edit Properties
          1. Details on Edit Properties – General
          2. Details on Edit Properties – Document
      4. Add Data Tables
        1. How to Insert Multiple Data Tables into the Analysis
        2. Details
          1. Details on Add Data Tables
          2. Details on Manage Relations
          3. Details on New/Edit Relation
          4. Details on Browse for Data Table
          5. Details on Data Function – Select Input
      5. Add On-Demand Data Table
        1. On-Demand Overview
        2. Loading Data on Demand
        3. Example of Marking Controlled On-Demand Details Visualization
        4. Example of Property Controlled On-Demand Data
        5. Details
          1. Details on Add On-Demand Data Table
          2. Details on Define Input
          3. Details on Select Value
          4. Details on Select Property
          5. Details on Settings
          6. Details on Select Information Link
      6. Add a Data Table Connection
        1. Adding Data Connections
        2. Details
          1. Details on Microsoft SQL Server Connection
          2. Details on Oracle Connection
          3. Details on Teradata Connection
          4. Details on Microsoft SQL Server Analysis Services Connection
          5. Details on Select Database Tables
          6. Details on Data Tables in Connection
          7. Details on New/Edit Relation
          8. Details on Data Connection Login
        3. Mapping External Data Table
          1. Oracle Data Types
          2. SQL Server Data Types
          3. Teradata Data Types
      7. Load Data From ActiveSpaces
        1. Loading Data from ActiveSpaces
        2. Details on ActiveSpaces
      8. Open Database
        1. Open from Database Overview
        2. Opening Data from a Database
        3. Details
          1. Details on Open Database
          2. Configure Data Source Connection – SQLClient
          3. Configure Data Source Connection – OLE DB
          4. Configure Data Source Connection – ODBC
          5. Configure Data Source Connection – OracleClient
          6. Configure Data Source Connection – Custom .NET Provider
      9. Replace Data
        1. Replacing Data
        2. Details
          1. Details on Replace Data Table – Select Data Table
          2. Details on Replace Data Table – Select Source
          3. Details on Replace Data – Select External Source
          4. Details on Replace Data – Match Columns
          5. Details on Replace Data – Missing Columns
      10. Transform Data
        1. Transforming Data
        2. Pivoting Data
        3. Unpivoting Data
        4. Normalizing Data
          1. Normalizing Columns
          2. Details
          3. Normalization by Mean 
          4. Normalization by Trimmed Mean 
          5. Normalization by Percentile 
          6. Normalization by Scaling Between 0 and 1 
          7. Normalization by Subtracting the Mean 
          8. Normalization by Subtracting the Median 
          9. Normalization by Signed Ratio 
          10. Normalization by Log Ratio 
          11. Normalization by Log Ratio in Standard Deviation Units 
          12. Normalization by Z-score 
          13. Normalization by Standard Deviation
        5. Details
          1. Details on Show Transformations
          2. Details on Preview
          3. Details on Pivot Data
          4. Details on Unpivot Data
          5. Details on Calculate and Replace Column
          6. Details on Calculate New Column
          7. Details on Data Function – Transformation
          8. Details on Normalization
          9. Details on Exclude Columns
          10. Details on Change Column Names
          11. Details on Change Data Types
      11. Missing File
        1. Details on Missing File
        2. Details on Search for Missing File
    4. Inserting More Data
      1. Insert Calculated Columns
        1. What is a Calculated Column?
        2. How to Insert a Calculated Column
        3. Details on Insert Calculated Column
        4. Expression Language
          1. General Syntax
          2. Operators
          3. Data Types
          4. Operators
          5. Operator Precedence
          6. Functions
          7. Functions Overview
          8. Binning Functions
          9. Conversion Functions
          10. Cast Method
          11. Date and Time Functions
          12. Logical Functions
          13. Math Functions
          14. OVER Functions
          15. Property Functions
          16. Ranking Functions
          17. Spatial Functions
          18. Statistical Functions
          19. Text Functions
        5. Invalid Values
        6. Details on Formatting
        7. Format String
        8. Properties
          1. Properties in Expressions
          2. Troubleshooting Property Expressions
      2. Insert Binned Columns
        1. What is Binning?
        2. How to Use Binning
        3. Details on Insert Binned Column
        4. The Binning Slider
      3. Insert Columns
        1. How to Insert Columns
        2. Details on Insert Columns – Select Destination
        3. Details on Insert Columns – Select Source
        4. Details on Insert Columns – Match Columns
        5. Details on Insert Columns – Import
      4. Insert Rows
        1. How to Insert Rows
        2. Details on Insert Rows – Select Destination
        3. Details on Insert Rows – Select Source
        4. Details on Insert Rows – Match Columns
        5. Details on Insert Rows – Additional Settings
    5. Multiple Data Tables
      1. How to Insert Multiple Data Tables into the Analysis
      2. How to Handle Multiple Data Tables in One Analysis
      3. Data Tables Overview
      4. Examples
        1. Master-Detail Visualizations
        2. Independent Data Tables
        3. Multiple Related Data Tables
        4. Insert Columns – Example
    6. Data Panel
      1. What is the Data Panel?
        1. In-Memory or In-Database Relational Data
        2. In-Database Cube Data
      2. Data Panel Pop-up Menu
      3. Details on Rename Column
    7. Data Connection Properties
      1. How to Edit Data Connection Properties
      2. Details on Data Connection Properties – General
      3. Details on Data Connection Properties – Data Tables
      4. Details on Data Connection Properties – Credentials
      5. Details on Data Connection Properties – Cache Settings
      6. Details on Rename Data Connection
    8. Data Table Properties
      1. How to Edit Data Table Properties
      2. Details on Data Table Properties – General
      3. Details on Data Table Properties – Source Information
      4. Details on Data Table Properties – Relations
      5. Details on Data Table Properties – Properties
      6. Details on Data Table Properties – Sharing Routines
      7. Details
        1. Details on Select Key Columns
        2. Details on Load Method
        3. Details on Manage Relations
        4. Details on New/Edit Data Table Property
    9. Column Properties
      1. How to Edit Column Properties
      2. Details on Column Properties – General
      3. Details on Column Properties – Formatting
      4. Details on Column Properties – Properties
      5. Details on Column Properties – Sort Order
      6. Column Properties Descriptions
      7. Details
        1. Details on Insert Hierarchy
        2. Details on Custom Sort Order
        3. Details on New/Edit Column Property
        4. Details on Edit Value
        5. Details on Select Visible Properties
  6. Visualizations
  7. Using Visualizations
  8. Enhancing Visualizations
  9. Filters
  10. Tags
  11. Bookmarks
  12. Lists
  13. Collaboration
  14. Tools
  15. Saving and Exporting
    1. Creating a Guided Analysis
      1. What is a Guided Analysis?
    2. Saving
      1. Save Overview
      2. Saving an Analysis File
      3. Details on Save
      4. Saving an Analysis File in the Library
      5. Embedded or Linked Data?
      6. Preparing Analyses for TIBCO Spotfire Web Player
      7. Links to Analyses in the Library
      8. Details on Save to Library
        1. Save as Library Item – Step 1
        2. Save as Library Item – Step 2
        3. Save as Library Item – Step 3
        4. Save as Library Item – Published
        5. Details on Edit Properties – General
        6. Details on New Folder
    3. Export Image
      1. Exporting an Image
    4. Export Data
      1. Exporting Data
      2. Details on Export Data
    5. Export to PowerPoint
      1. Exporting to Microsoft PowerPoint
      2. Details on Export to Microsoft PowerPoint
    6. Export to PDF
      1. Exporting to PDF
      2. Details on Export to PDF – General
      3. Details on Export to PDF – Advanced
      4. Exporting Bookmarks to PDF
      5. Details on Export to PDF – Bookmarks
      6. Exporting Filter Values to PDF
      7. Details on Export to PDF – Filters
    7. Export to HTML
      1. Exporting to HTML
      2. Details on Export to HTML
    8. Printing
      1. Printing
      2. Details on Print Layout Options
  16. Appendix
    1. Important Information
    2. How to Contact Support
    3. Details on Support Diagnostics and Logging
  17. Glossary
    1. 3D Scatter Plot
    2. Analysis File
    3. Axis
    4. Axis Selector
    5. Bar
    6. Bar Chart
    7. Bar Labels
    8. Bar Segment
    9. Bar Segment Labels
    10. Binning
    11. Bookmark
    12. Box Plot
    13. Bullet Graph
    14. Calculated Column
    15. Calculated Value
    16. Categorical Axis
    17. Category Axis
    18. Categorical Scale
    19. Cell
    20. Check Box Filter
    21. Collaboration Panel
    22. Color Mode
    23. Color Palette
    24. Color Scheme
    25. Color Scheme Grouping
    26. Column
    27. Column from Marked
    28. Column Name
    29. (Column Names)
    30. Column Selector
    31. Combination Chart
    32. Comparison Circles
    33. Continuous Axis
    34. Continuous Scale
    35. Cover Page
    36. Cross Table
    37. Curve Fit
    38. Custom Expression
    39. Data Relationships
    40. Data Source
    41. Data Table
    42. Dendrogram
    43. Details-on-Demand
    44. Details Visualization
    45. Drop Targets
    46. DXP File
    47. Dynamic Items
    48. Empty Values
    49. Error Bars
    50. Escape characters
    51. External Column ID
    52. External Column Name
    53. Filter
    54. Filtering Scheme
    55. Filtered Out Rows
    56. Filtered Rows
    57. Filters Panel
    58. Find
    59. Formatting
    60. Graphical Table
    61. Gridlines
    62. GUID
    63. Heat Map
    64. Hierarchical Clustering
    65. Hierarchy
    66. Hierarchy Filter
    67. Horizontal Bars
    68. Hyperlink
    69. Icon
    70. Information Link
    71. Item Filter
    72. Jittering
    73. K-means Clustering
    74. Label
    75. Legend
    76. Library
    77. Line By
    78. Line Connection
    79. Line Chart
    80. Line Labels
    81. Line Similarity
    82. Lines & Curves
    83. List Box Filter
    84. Lists
    85. Map Chart
    86. Marked Row
    87. Marking
    88. Marker
    89. Marker Labels
    90. Page
    91. Parallel Coordinate Plot
    92. Parameterized Information Link
    93. Personalized Information Link
    94. Pie
    95. Pie Chart
    96. Pie Labels
    97. Pie Sector
    98. Pie Sector Labels
    99. Pivot
    100. Primary Key
    101. Properties
    102. Radio Button Filter
    103. Range Filter
    104. Range Filter Data Range
    105. Range Filter Lower Value
    106. Range Filter Upper Value
    107. Renderer
    108. Root View
    109. Row
    110. Scale
    111. Scale Labels
    112. Scatter Plot
    113. Series By
    114. Share
    115. Short Number Format
    116. Short Number Symbol
    117. Sparkline
    118. Spotfire Server
    119. Spotfire Text Data Format
    120. Stacked Bar
    121. Summary Table
    122. Symbol Set
    123. Table
    124. Table Cell
    125. Table Column
    126. Table Column Header
    127. Table Row
    128. Table Row Header
    129. Tags Panel
    130. Tags
    131. Text Area
    132. Tick Marks
    133. Time Scale
    134. Tooltip
    135. Tree Filter (Hierarchy Filter)
    136. Treemap
    137. Trellis
    138. Unpivot
    139. URL
    140. Value Axis
    141. Value Columns
    142. Vertical Bars
    143. Virtual Column
    144. Visualization
    145. Visualization Item
    146. Visualization Title
    147. Web Player
    148. X-Axis
    149. Y-Axis
    150. Z-Axis

Story

Spotfire 5 Users Guide Dashboard

Created by Brand Niemann, February 18, 2013, to improve training and applications by showing how 1. the Steps in Building the Users Guide Knowledge Base, 2. the Users Guide Knowledge Base, 3. the Summary of Table Row Count, 4. the Summary of Users Guide Knowledge Base, and 5. the Distribution of Topics were built in an Excel spreadsheet and Spotfire 5, with references to the Spotfire 5 Users Guide Knowledge Base.

For example: type Excel in the Table of Contents to start the journey and drill down.

Tutorial Slides

Slides

Title Slide

BrandNiemann02182013Slide1.PNG

Introduction

BrandNiemann02182013Slide2.PNG

Spotfire 5 Users Guide Knowledge Base 1

BrandNiemann02182013Slide3.PNG

Spotfire 5 Users Guide Spreadsheet

BrandNiemann02182013Slide4.PNG

Spotfire 5 Users Guide Dashboard 1

 Web Player

BrandNiemann02182013Slide5.PNG

Spotfire 5 Users Guide Dashboard 2

 Web Player

BrandNiemann02182013Slide6.PNG

Spotfire 5 Users Guide Knowledge Base 2

BrandNiemann02182013Slide7.PNG

Spotfire 5 Users Guide Knowledge Base 3

BrandNiemann02182013Slide8.PNG

Advantages

BrandNiemann02182013Slide9.png

Spotfire 5 Users Guide Content Can Be Further Structured

BrandNiemann02182013Slide10.png

Spotfire Dashboard

For Internet Explorer users, and those wanting a full-screen display, use the Web Player. You can also get the Spotfire for iPad app.

Note: If the embedded dashboard cannot be displayed, use Google Chrome.

Research Notes

MY NOTE: I was looking at http://stn.spotfire.com/stn/UserDoc.aspx and found:

Spotfire product documentation has moved to the TIBCO Product Documentation site. 
Please visit the site above for both new and legacy product documentation.

https://docs.tibco.com/products/tibco-spotfire-5-0-0 is not as useful as the original (http://stn.spotfire.com/stn/UserDoc....troduction.htm), so I am creating a linked data version that uses the original graphics.

Introduction

MY NOTE: This section shows how you can create your own hyperlinks; you can customize your hyperlinks, notes, screen captures, etc. in this MindTouch environment!

Introduction


Welcome to TIBCO Spotfire®!

TIBCO Spotfire makes it easy for you to access, analyze and create dynamic reports on your data. It delivers immediate value whether you are a market researcher, a sales representative, a scientist or a process engineer by letting you quickly identify trends and patterns in your critical business data.

Spotfire can access data in a number of places such as on your desktop or in a network file system. It can even access your data if it is located in remote databases, without you having to involve your IT department each time you wish to ask a new question.

Spotfire lets you filter your data interactively, and gives you answers instantly. It also lets you rapidly create clear and concise, yet sleek and colorful visualizations in the form of bar charts, cross tables, scatter plots and many more valuable tools that will help you respond to events that affect your business.  

And finally, Spotfire lets you share your results. Static reports can be limiting to good business in this fast-paced world of data, and Spotfire allows you to create dynamic reports that help you to ask new questions, as well as be able to quickly turn your reports into instant presentations to show to your colleagues and customers.

Note: This user's manual contains information about all functionality that can be used within the Spotfire end user environment. If you do not have access to all licenses, some tools described in this help will be unavailable. For more information on how to get access to the full range of functionality, please visit the website http://support.spotfire.com/support.asp.

See also:

The User Interface

Logging In

Last update: 2012-11-16

The User Interface


The image below shows some of the main parts of the TIBCO Spotfire® user interface.

Inroduction.png

1. Visualizations

Visualizations are the key to analyzing data in Spotfire. A variety of visualization types can be used to provide the best view of the data.

Different types of visualizations can be shown simultaneously. They can be linked to each other, and may or may not be updated dynamically when the corresponding filters on the page are manipulated (see below).

Visualizations can be made to reflect many dimensions of data by letting values control visual attributes such as size, color, shape, etc.

2. Text areas

You can type text in text areas, explaining what is seen in the different visualizations. This can be particularly useful if you are creating analytic applications for other users. Text areas can also include several different types of controls, allowing you to filter, perform actions or make selections to view particular types of data, etc.

3. Filters

By adjusting filters, you can reduce the data seen in the visualizations to "drill down" to the things that interest you. Filters are powerful tools that quickly let you see various aspects of your data and make discoveries.

Filters appear in several forms, and you can select the type of filter device that best suits your needs (for example, check boxes, sliders, etc). When you manipulate a filter by moving a slider or by selecting a check box, all linked visualizations are immediately updated to reflect the new selection of data. By default, all new visualizations on a page will be limited by the filtering scheme used on the page. However, the filtering scheme can be changed for each visualization separately.

4. Details-on-Demand

The Details-on-Demand window can be used to show the exact values of a row or a group of rows. By clicking an item in a visualization, or marking several items by clicking and dragging with the mouse around them, you can see the numerical values and textual data they represent directly in the Details-on-Demand window.

See also:

Opening an Analysis File

Column Selectors

Drag-and-Drop

Marking in Visualizations

Logging In


When you start TIBCO Spotfire a login dialog appears. Enter your Username and Password, and click on the Login button to start Spotfire. If you select the Save my login information check box, you will automatically be logged in when you start Spotfire in the future. Logging into Spotfire will let you access the joint library and other collaboration features.

If the Save my login information check box has been selected, but you later want to reach this dialog again, you can force it to be shown by using the TIBCO Spotfire (show login dialog) option, reached via the Start menu > All Programs > TIBCO.

Login.png

If you are working at a large company with multiple TIBCO Spotfire Servers, you may occasionally need to change the server you are connecting to via the drop-down list. New servers can be added to the list by clicking on the Manage Servers... link.

Connecting via Proxy Server

If you are connecting via a proxy server, you may need to change your security settings in Internet Explorer prior to logging into Spotfire. See the Microsoft Internet Explorer help for more information. Prior to logging into Spotfire, make sure that the Spotfire Server start page can be accessed by browsing to http://<hostname>/spotfire/.
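
Before logging in, you can also verify the start page from outside the browser. The following is a minimal sketch in Python (not part of Spotfire; myserver.example.com is a placeholder you must replace with your own host name) that requests the server start page and reports whether it responds:

    # Minimal sketch: check that the Spotfire Server start page responds.
    # "myserver.example.com" is a placeholder; replace it with your own host.
    from urllib.request import urlopen
    from urllib.error import URLError

    url = "http://myserver.example.com/spotfire/"
    try:
        with urlopen(url, timeout=10) as response:
            print("Reached", url, "- HTTP status", response.getcode())
    except URLError as err:
        print("Could not reach", url, "-", err.reason)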

Downloading Updates

Spotfire will automatically check for updates on your Spotfire Server that apply to you. If you have a network connection to the Spotfire Server, and there are updates available, you will be notified of this and can select whether to install them right away or at a later time.

You can view the contents of the available updates by clicking on the View updates link in the notification dialog.

Working Offline

If you are on a plane or just happen to not be connected to the network where your Spotfire Server is located, you can work with Spotfire offline. Almost all of the functionality of Spotfire works fine without a connection to the server. Library access, however, does not, nor can you access information links to databases. To work offline, simply click the Work Offline button in the login dialog. With some licenses of Spotfire, you do need to connect to your Spotfire Server at least once a month to be able to continue to work offline.

Updates and Working Offline

If you have more than one server, and one of them has provided you with updates, this server must be selected in the login screen for those updates to be available, even if you choose to work offline.

See also:

The User Interface

Details on Change Password

Logging in Details

Details on Manage Servers


  • To reach the Manage Servers dialog:

  1. In the Login dialog of TIBCO Spotfire, click on the Manage Servers... link.

ManageServers.png

Options:

  • Available TIBCO Spotfire servers – Lists all previously added Spotfire servers, which you can select to log into.

  • Add... – Opens the Add Server dialog, where you can add new Spotfire servers to the list.

  • Edit... – Opens the Edit Server dialog, where you can edit the address and area of the selected Spotfire server.

  • Delete – Deletes the selected server from the list of Available TIBCO Spotfire servers.

  • Server details – Lists information about the selected Spotfire server, such as its address, area and authentication type.

See also:

Logging In

Details on Add/Edit Server


  • To reach the Add Server dialog:

  1. In the Login dialog of TIBCO Spotfire, click on the Manage Servers... link.

  2. In the Manage Servers dialog, click Add....

AddServer.png

  • To reach the Edit Server dialog:

  1. In the Login dialog of TIBCO Spotfire, click on the Manage Servers... link.

  2. In the Manage Servers dialog, click to select a server, then click Edit....

EditServer.png

Options:

  • TIBCO Spotfire server address – The web address of the new server. Contact your TIBCO Spotfire administrator for this information.

  • Area – Specifies whether the connection should be made to the Production area or the Test area on the specified server. The Production area is normally the preferred option for most users; the Test area is reserved for developers and test pilots of new deployments.

See also:

Details on Manage Servers

Logging In

Details on Change Password


This dialog is available only when your server has been set up to use Spotfire database authentication. It is not available in offline mode, nor when your Spotfire Server has been set up to use any other authentication mechanism.

  • To reach the Change Password dialog:

  1. Select Tools > Change Password....

ChangePassword.png

Options:

  • Username – Shows the name of the currently logged in user.

  • Current password – Type the current password for the logged in user.

  • New password – Type the new password for the logged in user.

  • Confirm new password – Retype the password to ensure it is correct.

See also:

Logging In

Data

Data Overview


You can load data into TIBCO Spotfire from a number of different sources: by pasting from the clipboard, or by opening text files, Microsoft Excel files, SAS files, databases, or information links (predefined connections to shared data sources). You may also have access to additional file sources if these have been set up by your company.

You reach the different ways to load data via the File menu, or by using Add Data Tables or Add On-Demand Data Table; the latter two let you add more than one data table to your analysis. TIBCO Spotfire also supports connections to external data sources*, such as Microsoft® SQL Server®, Microsoft® SQL Server® Analysis Services, Oracle® or Teradata®. These connections allow you to analyze data in-database; see below.

In-Memory Data

When you are working with in-memory data tables (text files, Excel files, information links, etc.) you have access to all the functionality of Spotfire. You have the opportunity to use all columns as filters and perform any number of calculations. You can also use any of the tools within Spotfire to cluster data, calculate new columns, bin columns, make predictions etc. See Working With Large Data Volumes for some tips on how to improve the performance of an analysis with lots of data.

In-Database Data

When a connection to an external source is set up, all calculations are done using the external system and not with the Spotfire data engine. This will allow you to work with data volumes too large to fit into primary memory and take advantage of the power of the external system. When working with external data connections, you access only the current selection of data and all aggregations and calculations are made in-database (in-db).

When a visualization is configured to use in-db data, the visualization queries the external data source directly. Every time a change is made to the setup of the visualization, e.g., a fact column is set on the Y-axis or a Color by dimension column is added, a new query is sent to the external data source, resulting in a new table of aggregated data. This means that you cannot make any changes to a visualization that uses an in-db data table while you are not connected to the external data source.

In some cases, the data in a database is modeled as a star or snowflake schema, also known as a multidimensional model. If this is the case, you can reuse this model, along with the relations and constraints defined in the database, to set up your analysis. If no relations have been set up, you can define them when selecting the data tables to retrieve for a certain connection. By using constraints and relations that have either been predefined in the database or created explicitly by a configurator, multiple database tables can appear as a single de-normalized table, i.e., a virtual table. A virtual table can be used in a visualization, providing the illusion of a single table. Virtual tables differ from regular Spotfire tables containing imported data in that virtual tables contain only metadata, which is used during configuration of a visualization; a virtual table does not actually contain any data.
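
To make this concrete, here is a minimal illustrative sketch in Python (not a Spotfire API; the fact table sales, the dimension table region_dim, and all column names are invented for this example) of how a visualization setup, such as a summed fact column on the Y-axis colored by a dimension column, could be translated into a single aggregated query over a star schema, so that the joined tables behave like one virtual table:

    # Illustrative sketch only: translating a visualization setup into one
    # aggregated in-db query over a star schema. Table and column names are
    # invented; nothing is held in memory except the generated query text.
    def build_query(measure, color_by):
        return (
            "SELECT d.{c}, SUM(f.{m}) AS total "
            "FROM sales f "
            "JOIN region_dim d ON f.region_id = d.region_id "
            "GROUP BY d.{c}"
        ).format(c=color_by, m=measure)

    # Changing the setup (e.g., a new Color by column) regenerates the query.
    print(build_query(measure="sales_amount", color_by="region_name"))

Each change to the visualization produces a new query like this, which is also why an in-db visualization cannot be modified while disconnected from the external source.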

This has the implication that some of the functionality within TIBCO Spotfire that is available for in-memory data is not applicable when working with in-db data. See Working With In-Database Data for more information.

* Microsoft SQL Server and Microsoft SQL Server Analysis Services are either registered trademarks or trademarks of Microsoft Corporation in the United States and/or other countries.

* Oracle is a registered trademark of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

* Teradata is a trademark or registered trademark of Teradata Corporation in the United States and other countries.

See also:

What is the Data Panel?

Loading Data Overview

Data in Spotfire

Working With Large Data Volumes


When you are working with massive amounts of data there may always be certain operations that take time to perform. However, with TIBCO Spotfire you do not have to be afraid to try out different alternatives. You can always cancel an operation if it looks like it is going to take a long time. You can undo an operation, or switch to a different alternative (e.g., switch to a column with fewer unique values on an axis) if you do not want to wait for the calculations to finish.

However, here are a few tips which can be useful when you are working with large data tables and you want to increase the performance of your analysis:

Visualizations and Analyses

  • Use aggregated visualizations as a starting point and use details visualizations for smaller, filtered portions of data only. Many graphical elements in the analysis will take time to render. This is especially important in the web player, which does not support hardware acceleration.

  • Consider whether there are alternative ways to visualize your data and still see the same thing. Can you use a different visualization type? Or partly aggregate the data? For example, binning can be used to aggregate markers in a scatter plot and still let you see a distribution. Using the bin sliders you can increase the number of markers shown until making changes takes too long.

  • Sorting in cross tables, etc., takes time with large data tables.

  • Save old 4.5 analysis files in the new 5.0 file format. This will shorten the time it takes to load the file.

  • Hide or delete unused filters (or do not create filters for external columns unless you have to).

  • Use the list box filter or the text filter rather than the item filter when working with columns with many unique values. Item filters are costly to display, even when they are not used. If you have old analysis files using item filters for these types of columns, it is recommended to manually change the filter type to a list box or text filter and save the file again.

  • Some types of aggregations are more time consuming than others. For example, use average rather than median, if possible.

  • Use the data type real rather than currency. The currency formatter can be applied to the real data type.

  • It is recommended to use the filters panel instead of adding a lot of filters to text areas. Filters in text areas can make the analysis seem unresponsive. The more filters you add to the text area, the less responsive the application becomes.

  • Calculated values (labels) and sparklines in text areas may also give rise to unresponsive analyses.

Hardware

  • Use 64-bit machines rather than 32-bit.

  • Use a fast solid-state drive (SSD) if possible.

  • Do not run other applications on the same machine when working with large data volumes.

Loading Data

  • Use sorted input on categorical columns.

  • Loading data from an SBDF file is much faster than from TXT.

  • If the data is in a tall and skinny format rather than a short and wide one, you may obtain better performance.

  • Remove invalid values from your data before importing into Spotfire.

Data Export

  • Export from a data table rather than from a table visualization.

  • Export to SBDF rather than to TXT.

Web Player

  • Avoid visualizations with many graphical elements (no hardware acceleration will make the rendering time very long).

  • Use scheduled updates, when possible.

Preferences

  • An administrator can modify the MarkingWhereClauseLimit or the MarkingInQueryLimit preference (under Administration Manager > Preferences > DataOptimization). With lower limits, the allowed complexity of marking queries is reduced. This is important when working with external data sources. See Preferences Descriptions in the Administration Manager help for more information.

  • Switch off the automatic creation of filters. This can be turned off for a specific data table in the Data Table Properties dialog, and for all new in-memory data tables under Tools > Options – Document.

API

  • Prefer iterator-based data access over random access: use DataRowCursor APIs rather than GetValue(rowindex) style APIs (see the sketch after this list).

  • Be careful when using custom comparers; depending on usage, they may become a bottleneck. Consider whether the problem can be solved in other ways.

  • If things are slow and you are using old custom extensions, see whether they can be refactored or whether some time-consuming steps can be removed. Some APIs are inherently slow, and old code might benefit from refactoring. Try loading without any extensions to see if one of them is the culprit.
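To illustrate the difference between iterator-based and random access mentioned in the first item above, here is a small sketch. The Table class is an invented stand-in, not the Spotfire API; it only shows why a forward-only cursor tends to be cheaper than repeated index lookups.

# Illustrative sketch; 'Table' is invented and is not the Spotfire API.
class Table:
    def __init__(self, values):
        self._values = values

    def get_value(self, row_index):
        # Random access: every call re-enters the data layer. In a real
        # engine, each lookup may need to re-resolve the row.
        return self._values[row_index]

    def cursor(self):
        # Iterator-based access: a single forward scan over the rows.
        yield from self._values

table = Table(list(range(1000000)))

# Random-access style (one lookup per row; typically the bottleneck):
total_random = sum(table.get_value(i) for i in range(1000000))

# Cursor style (one pass; amortizes the lookup cost):
total_cursor = sum(table.cursor())
assert total_random == total_cursor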

See also:

Data Overview

Working With In-Database Data

Working With In-Database Data


When you are working with data from an external data source (in-database or in-db data), a number of features that are available for in-memory data are unavailable. See below for more information.

One thing to keep in mind when working with in-db data is that changes to the underlying database schema will not automatically be reflected in the Spotfire analysis. This means that if a column is added to a database table, you need to perform a Refresh Schema operation in Spotfire in order to see the new column in the analysis. Not all users will have sufficient database privileges to perform a full schema refresh. However, changes to the number of rows can be picked up with a simple Reload/Refresh Data by most users.

Each limitation is listed below, followed by the reason behind it.

  • The only tabular visualization that is supported for in-db data is the cross table. The table, summary table, box plot and Details-on-Demand cannot be used with in-db data.

    Why: Only aggregated visualizations are allowed, since the number of rows available in an external data source may be too large to handle within Spotfire.

  • Scatter plots and 3D scatter plots need to be configured as aggregated visualizations when using in-db data.

    Why: The number of rows available in an external data source may be too large to handle within Spotfire.

  • Dendrograms cannot be shown in heat maps.

    Why: In-db data tables contain no data within Spotfire, so the row-level calculations needed to produce a dendrogram cannot be performed.

  • Not all of the standard Spotfire aggregation methods and expressions are available on all external systems. On the other hand, there may be other methods available that Spotfire does not have by default.

    Why: The aggregation methods supported by the external data source determine which methods will be available for in-db data.

  • OVER expressions are not supported.

    Why: There is currently no mapping between the Spotfire OVER expressions and the corresponding functionality in the external systems.

  • You cannot save in-db data as embedded in the analysis file.

    Why: Since no data is stored within the analysis at runtime, it is not possible to save it.

  • If the connection to the external data source is lost, no further analysis using that data source can be performed.

    Why: Since no external data is stored within the analysis at runtime, and each change induces a new query to be sent to the external data source, the data source must be available as long as changes to the visualizations are to take place.

  • Tags are not available for in-db data.

    Why: Tags depend on row numbers, which are not available for in-db data.

  • You cannot delete columns or rows from in-db data tables.

    Why: In-db data tables contain no data at runtime, so there is nothing to delete.

  • You cannot apply any of the following tools on external data tables: Insert Columns, Insert Rows, Insert Calculated Column, Insert Binned Column, Data Relationships, K-means Clustering, Line Similarity, Data Functions, Regression Modeling, Classification Modeling, Insert Predicted Columns.

    Why: In-db data tables contain no data at runtime and can, hence, not incorporate any other data.

  • Export Data from Data Table cannot be used when exporting from in-db data tables.

    Why: The only information that can be exported with the Data table option in the Export Data dialog is the column names available in the in-db data table.

    Tip: Use the Export Data from Visualization option when you want to export from in-db data tables. Note that for all visualizations other than the cross table you need to mark the items of interest before opening the Export Data dialog.

  • If you have multiple filters for an in-db data table, you will not get the visual cue indicating which values have been filtered out using the other filters, the way you do with in-memory data tables.

    Why: Graying out values filtered out by other filters is a costly operation, so there is currently no visual connection between filters for in-db data tables.

See also:

Data Overview

Working With Cubes


When data is located in cubes (like Microsoft SQL Server Analysis Services) it behaves rather differently from data in the relational databases traditionally accessed via Spotfire.

A cube is built from several predefined combinations of dimensions. A dimension in this context is, for example, time, product, customer, region, etc. Connected to each set of dimensions are measures, or aggregated facts, such as sales figures, cost, volumes, etc.

Cubes.png

In the schematic image above, each side of the cube could be said to represent a dimension. If side 1 is product, 2 is time and 3 is region, then the cube could be queried in several different ways: the yellow plane could mean "Show sales per product for different years.", the pink plane represents "Show sales per product in different regions." and the blue plane "Show sales per region for different years.".

When a query is sent to the cube it is common to ask for measures aggregated over a number of different dimensions. For example, you may want to look for the sum of sales for all product types in all regions during the last three years. All questions that you might have should already be anticipated and set up in the cube by the cube administrator.

Since cube data is already aggregated from the start, you might want to view Spotfire as a tool for displaying the related combinations of measures and dimensions in the measure groups set up by the cube administrator. Otherwise, the free dimensionality of Spotfire allows you to make combinations of measures and dimensions from the cube that do not always make sense, or that lead to "The expression is not valid" messages in the visualizations.

Normally, you would use measures on axes where actual values are to be displayed, such as, on the value axis of a bar chart, or as cell values in a cross table. Dimension columns can be used to split the viewed data into smaller subsets on categorical axes.

When looking at cube data in the data panel you have the possibility to select one measure group and one related dimension at a time, which may help you select suitable options on the axes. The whole cube is loaded as a single data table in Spotfire.

There are also other implications of working with cube data. Since the measures are defined by the context of a dimension, it is not possible to create filters for measures.

Dynamic named sets are dynamic columns in the cube. For example, one set may show the Top 50 customers, and in standard cube reporting tools this could be displayed either over different countries in the world or within a single country. However, because the content of a dynamic named set column depends on the current context, it cannot be used in any situation involving filtering within Spotfire, since filters do not come with a context.

See also:

Data Overview

Working With In-Database Data

Details on Microsoft SQL Server Analysis Services Connection

Loading Data

Loading Data Overview


You can load data into the internal TIBCO Spotfire engine from a number of different sources: by pasting from the clipboard, by opening simple text files, Microsoft Excel files, SAS files, a database or an information link (a predefined connection to a shared data source). You may also have access to additional file sources if such have been set up by your company.

TIBCO Spotfire also supports connections to external data sources, such as Microsoft SQL Server Analysis Services, Oracle or Teradata. When a connection to an external source is set up, all calculations are done by the external data source and not by Spotfire. See Working With In-Database Data for more information.

You reach the different ways to load data via the File menu or using Add Data Tables or Add On-Demand Data Table. Using the last two alternatives you can add more than one data table to your analysis.

Limiting What Data to Load

When the data source contains large amounts of data, it may take a long time to retrieve all data and the application could also be perceived as less responsive to different actions. You may also want to restrict some data from certain users. When you are working with information links it is possible to limit what data to open in different analyses in a number of different ways (combinations are also possible):

Method

Use when?

Define where?

Add On-Demand Data Table

When you want the data in your analysis to dynamically change with some predefined condition. For example, when setting up a details visualization dependent on the marking or filtering in another data table.

Another example is when you want one information link to return different data for different analysis files, in which case you could use the on-demand data table as the only data table in the analysis (with a document property as input).

On-demand data tables are added to your analysis in TIBCO Spotfire by selecting File > Add On-Demand Data Table and specifying the input conditions that should control loading.

See On-Demand Overview for more information.

Note: You can only specify a single fixed value as input to on-demand loading, so if you need to retrieve multiple values from a certain column you will have to make sure that the information link is set up to use a multiple selection prompt rather than using it as an on-demand data table.

Details Visualizations Against External Data Sources

When you are analyzing in-database data using a connection to an external data source you only load the requested data.

By setting up visualizations based on the in-db data as details visualizations limited by the marking or filtering in a master visualization you can make sure that the actual loaded data is limited to a subsection of the available data only.

Make sure that the master data table and the in-db data table are related.

Right-click on the master visualization and select Create Details Visualization. Set up the new details visualization to use the in-db data table.

Prompted Information Links

When the source data amount is huge, but the end users of the information link are allowed to determine what data to bring in for analysis themselves.

Can in some cases be replaced by an on-demand data table.

Prompts are defined in Information Designer, Information Link tab, Prompts section.

Personalized Information Links

When you want the data source to return only information applicable for a certain user name (via a lookup table) or for a specified group.

Personalized information links are set up on a filter or column element in Information Designer using the %CURRENT_USER% or %CURRENT_GROUPS% syntax. See Personalized Information Links for more information.

Parameterized Information Links

When you want the data source to return only information applicable for a certain user or group in a more flexible way than with personalized information links.

Parameters are created in Information Designer (for example, as a part of an expression set on a column or filter) but their properties and definitions are defined using the API.

By using a parameterized information link and a configuration block, it is possible to create an analysis with different input parameters (e.g., to be used by an On-Demand data table) for different groups of users. See Parameterized Information Links for more information.

See also:

Information Links

How to Insert a Calculated Column

How to Use Binning

How to Insert Columns

How to Insert Rows

Open File

Opening an Analysis File

If a colleague has created an analysis file (a DXP file) and either sent it to you in an email or given you a link to the library where the file is located, double-clicking on the file will open it. To open a file from within TIBCO Spotfire, see below.

  • To open an analysis file:

  1. Click on the Open button on the toolbar, or select File > Open....

  2. Browse to the analysis file of interest and click Open.

Note: SFS files created with TIBCO Spotfire DecisionSite and opened in TIBCO Spotfire will not retain any visualizations created in DecisionSite; the file will be opened as if it were a standard Spotfire Text Data Format file. Note that SFS files cannot be opened from the library.

See also:

Opening a Text File

Opening an Excel File

Opening a SAS File

Opening Files from the Library

Opening an Information Link

Opening Data from a Database

Opening a Text File

This option is used when delimited text files, such as CSV or TXT files, are opened in Spotfire.

  • To open a text file:

  1. Click on the Open button on the toolbar, or select File > Open....

  2. Browse to the text file of interest and click Open.

  3. Look at the Data preview and make sure that the format of your data looks OK.

  4. If necessary, change any settings required to obtain the desired result.

    Comment: For detailed information about the various settings, see Import Settings or Import Settings - Advanced.

  5. Click on Refresh.

    Response: The Data preview field is updated to show how data will be imported with the current settings.

  6. When you are satisfied, click OK.

    Comment: For information about adding more data tables to the analysis, see How to Insert Multiple Data Tables into the Analysis.

Note: If a delimited text file is pasted into Spotfire, the Import Settings dialog will not be displayed. The default settings will be used during import.

Note: SFS files created with Spotfire DecisionSite and opened in Spotfire will not retain any visualizations created in DecisionSite; the file will be opened as if it were a standard Spotfire Text Data Format file. Note that SFS files cannot be opened from the library.

See also:

Opening an Analysis File

Opening an Excel File

Opening a SAS File

Opening Files from the Library

Opening an Information Link

Opening Data from a Database

Opening an Excel File

Microsoft Excel files (XLSX or XLS) stored using Microsoft Office Excel 2000 or later can be opened in Spotfire.

  • To open an Excel file:

  1. Click on the Open button on the toolbar, or select File > Open....

  2. Browse to the Excel file of interest and click Open.

  3. Select the Worksheet to import.

    Comment: If you cannot see all worksheets available in the file at this step, try saving and closing the file in Excel before you open it in Spotfire.

  4. Look at the Data preview and make sure that the format of your data looks OK.

  5. If necessary, change any settings required to obtain the desired result.

    Comment: For detailed information about the various settings, see Excel Import.

  6. Click on Refresh.

    Response: The Data preview field is updated.

  7. When you are satisfied, click OK.

    Comment: For information about adding more data tables to the analysis, see How to Insert Multiple Data Tables into the Analysis.

See also:

Opening an Analysis File

Opening a Text File

Opening a SAS File

Opening Files from the Library

Opening an Information Link

Opening Data from a Database

Opening a SAS File

Note: To be able to open SAS data files (*.sas7bdat, *.sd2) directly into TIBCO Spotfire, the SAS Providers for OLE DB 9.1.3 or later must first be installed on the client machine (see http://support.spotfire.com/sr.asp for more information). *.sd7 files can also be opened, provided that they are first renamed to *.sas7bdat.

  • To open a SAS file:

  1. Click on the Open button on the toolbar, or select File > Open....

  2. Browse to the SAS file of interest and click Open.

  3. Select the columns to import by clicking on them in the Available columns list and then click Add >.

    Comment: To select all columns, click Add All. For multiple selection, press Ctrl and click on the desired columns.

  4. Select whether you want to Map data to TIBCO Spotfire compatible types or not.

  5. Select whether you want to Use Description as column name once imported into TIBCO Spotfire.

    Comment: For detailed information about the various settings, see SAS Data Import.

  6. Click OK.

    Comment: For information about adding more data tables to the analysis, see How to Insert Multiple Data Tables into the Analysis.

See also:

Opening an Analysis File

Opening a Text File

Opening an Excel File

Opening Files from the Library

Opening an Information Link

Opening Data from a Database

Details
Details on Excel Import

  • To reach the Excel Import dialog:

  1. Select File > Open....

  2. Browse to a Microsoft Excel file and click Open.

ExcelImport.png

Option

Description

Worksheet

Select the worksheet containing the data you wish to import.

Note: If you cannot see all worksheets available in the file, try saving and closing the file in Excel before you open it in Spotfire.

Refresh

Updates the Data preview field to reflect any changes made to the settings.

Ignore empty rows

Select the check box to skip empty rows during import.

Data preview

Shows how the file will be interpreted, given the specified settings.

     Name

Double-click on a column name to edit the name.

     Type

Change the type for a column by clicking on the arrow and selecting the new type from the drop-down menu. The available data types are: String, Integer, Real, Currency, Date, Time, DateTime, TimeSpan, LongInteger, SingleReal and Boolean. If an inapplicable data type is selected, the data in the preview will be displayed in italics once you have clicked on the Refresh button.

     Included

Clear the check box to ignore a specific column upon import.

 

The drop-down list available on each row contains the following options:

Option

Description

Name row

Select this option on the row or rows that will be used to specify the column names in the imported data.

Type row

Select this option on the row that will be used to specify the data types.

Data row

Select this option for all data rows that you wish to import.

Ignore

Select this option for rows that should be ignored during import.

See also:

Opening an Excel File

Details on Import Settings

  • To reach the Import Settings dialog:

  1. Select File > Open....

  2. Browse to a delimited text file and click Open.

ImportSettings.png

Option

Description

Separator character

Allows you to specify which character to interpret as separator character.

Individual fields (column names, type strings, and values) are delimited by separator characters—usually commas, semicolons or tabs. Spotfire automatically makes a guess to determine the separator character, but you can change to a different separator character if necessary.

Culture

Allows you to change the culture (the language-related regional settings for formatting information, such as time, currency, or dates) from which the data originates.

Encoding

Allows you to change the encoding used to interpret the data.

Advanced...

Opens the Import Settings – Advanced dialog, where additional settings can be changed.

Refresh

Updates the Data preview field to reflect any changes made to the settings in this dialog or the Import Settings - Advanced dialog.

Data preview

Shows how the file will be interpreted, given the specified settings.

     Name

Double-click on a column name to edit the name.

     Type

Change the type for a column by clicking on the arrow and selecting the new type from the drop-down menu. The available data types are: String, Integer, Real, Currency, Date, Time, DateTime, TimeSpan, LongInteger, SingleReal and Boolean. If an inapplicable data type is selected, the data in the preview will be displayed in italics once you have clicked on the Refresh button.

     Included

Clear the check box to ignore a specific column upon import.

 

The drop-down list available on each row contains the following options:

Option

Description

Name row

Select this option on the row or rows that will be used to specify the column names in the imported data.

Type row

Select this option on the row that will be used to specify the data types.

Data row

Select this option for all data rows that you wish to import.

Ignore

Select this option for rows that should be ignored during import.

See also:

Opening a Text File

Details on Import Settings – Advanced

  • To reach the Import Settings – Advanced dialog:

  1. Select File > Open....

  2. Browse to a delimited text file and click Open.

  3. In the Import Settings dialog, click Advanced....

ImportSettings-Advanced.png

Option

Description

Comment row beginning

Allows you to ignore all rows beginning with a specific character sequence. For example, if "#" is used as in the example above, all rows beginning with # will be set as Comment rows and will be ignored during import.

Set number of columns

Allows you to specify a fixed number of columns to import. This could be smaller or greater than the number of columns available in the beginning of the text file. For example, in a data table where 50 columns are present for the first 100 rows and 60 columns for the following rows, it could be useful to set this option to 60 and, hence, import all available data.

Minimum number of columns allowed

Ignores rows where the number of available values is less than the specified number. If the data table contains comments or texts in the middle of the data, this option can be set to, for example, 5, and only rows with values in at least five columns will be imported.

Interpret as null (missing value)

Allows you to specify a string that should be interpreted as null (a missing data value).

Start reading data from row

Allows you to leave out a specified number of rows. For example, if your data contains a header of ten rows which should be ignored during import, this option should be set to 11.

Name for columns with no name rows

Specifies the default naming of columns for data tables without any specified name rows. The placeholder "{0}" will automatically be added if you do not type it yourself, and it means that all columns will receive a number after the specified name. For example, "Column {0}" will result in the columns "Column 1", "Column 2", "Column 3", etc. (see the sketch after this table).

Concatenate multiple name rows

Specifies how multiple name rows will be concatenated. For example, the default value for three name rows, "{0}, {1}, {2}", will separate the name parts of the different name rows with a comma and a space. If the commas are removed, "{0} {1} {2}", only a space will separate the name parts.

Replace missing name fields

Allows you to replace a missing name (or a part of a name if multiple name rows are used) by one of the following methods:

None - leaves a name part blank. If no other name rows contribute to the name, the "Name for columns with no name rows" specified above will be used for that particular column.

From left - takes the name or name part from the column on the left and uses it as a name or name part.

By string - replaces missing names or name parts with the specified string.

Has quote character

Specifies whether or not the data table contains quote characters.

Quote character

Specifies the quote character.

Quote escape

Specifies how quote characters should be escaped.

Allow newline characters in quoted fields

Specifies whether or not newline characters will be allowed within a quoted field.

Default

Returns all settings in the Import Settings – Advanced dialog to the default values.
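The "{0}" patterns used by the naming options above behave like positional placeholders. The following Python snippet is only an analogy showing how such patterns expand; it is not how Spotfire implements the naming.

# Analogy for the "{0}" naming patterns described above.

# "Name for columns with no name rows": each column receives a running number.
pattern = "Column {0}"
print([pattern.format(i) for i in range(1, 4)])  # ['Column 1', 'Column 2', 'Column 3']

# "Concatenate multiple name rows": {0}, {1}, {2} refer to the name rows.
name_rows = ("Sales", "2012", "Q1")
print("{0}, {1}, {2}".format(*name_rows))  # Sales, 2012, Q1
print("{0} {1} {2}".format(*name_rows))    # Sales 2012 Q1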

See also:

Opening a Text File

Import Settings

Details on SAS Data Import

  • To reach the SAS Data Import dialog:

  1. Select File > Open....

  2. Browse to a SAS data file (*.sas7bdat or *.sd2) and click Open.

SASDataImport.png

Option

Description

Available columns

Lists all columns available in the SAS file.

Click a column name in the list to select it. To select more than one column, press Ctrl and click the column names in the list. Then click Add > to send the selected column to the Selected columns field.

Selected columns

Lists all columns that will be imported into Spotfire.

Add >

Sends the columns selected in the Available columns list to the Selected columns list.

< Remove

Removes the selected columns from the Selected columns list and sends them back to the Available columns list.

Add All

Adds all available columns to the Selected columns list.

Remove All

Removes all columns from the Selected columns list.

Map data to TIBCO Spotfire compatible types

Select this option to map the data to data types available in TIBCO Spotfire. If this check box is cleared, the SAS formatting will be unchanged.

Use Description as column name (if available)

Select this option to specify whether to use the SAS description as the column name once imported into TIBCO Spotfire. If this check box is cleared, the column name used in the SAS file will be kept after import.

See also:

Opening a SAS File

Open From a Library

Opening Files from the Library

The library provides publishing capabilities for all of your analysis materials, so you can share data with your colleagues. The library can be used directly from Spotfire by anyone who has at least read privileges.

  • To open a file from the library:

  1. Select File > Open From > Library....

    Comment: You can also add data from the library using either of the Add Data Table tools, or the Replace Data Table tool.

  2. Navigate through the folders, and select the analysis file you want to open.

    Response: Information about the selected analysis file is displayed to the right of the list of folders and files.

    Comment: Which library folders you have access to is controlled by group privileges. Contact your Spotfire administrator if you cannot reach all the necessary data.

    Comment: SFS files created with Spotfire DecisionSite cannot be opened in Spotfire from the library. However, local SFS files can be opened using File > Open..., but in that case no visualizations or settings from the SFS file are retained.

    Comment: To limit the number of items shown in the list, you can select Analysis File from the Show items of type drop-down.

  3. Click Open.

    Note: You can also search for a file in the library by entering a file name, or part of a file name in the search field in the upper right corner in the dialog, and then pressing Enter. All the files and folders matching your search string will then be listed. See Searching the Library for more information about search expressions.

Files published in the library can also be accessed directly by users of Spotfire Web Player by clicking on a link to the analysis in an email or on a website.

Tip: Right-click in the library tree to display a pop-up menu where you can delete or edit the properties of previously added files and folders. You can also copy the URL to an analysis and open the analysis in the Web Player or send the link to a colleague.

See also:

Opening an Analysis File

Opening a Text File

Opening an Excel File

Opening a SAS File

Opening an Information Link

Opening Data from a Database

Saving an Analysis File in the Library

Opening an Information Link

Information links are predefined database queries, specifying the columns to be loaded, and any filters needed to reduce the size of the data table prior to visualization. They are organized into different folders in the library. Which folders in the library are available to you depends on how your permissions have been set by the administrator. Information links are defined using Tools > Information Designer.

  • To open an information link:

  1. Select File > Open From > Library....

  2. Navigate through the folders, and select the information link you want to open.

    Response: Information about the information link is displayed to the right of the list of folders and files.

    Comment: Which Library folders you have access to is controlled by group privileges. Contact your Spotfire administrator if you cannot reach all the necessary data.

    Comment: To limit the number of items shown in the list, you can select Information Link from the Show items of type drop-down.

  3. Click Open.

    Response: The information link is opened into Spotfire. If the information link contains prompted steps, you have to respond to these first.

    Note: You can also search for an item in the library by entering its name, or part of the name, in the search field in the upper right corner of the dialog, and then pressing Enter. All the files, information links and folders matching your search string will then be listed. See Searching the Library for detailed information about library search.

See also:

Opening an Analysis File

Opening a Text File

Opening an Excel File

Opening a SAS File

Opening Files from the Library

Opening Data from a Database

Information Links

Searching the Library

You can search for library items in the Open from Library dialog, in the Library Administration tool and in Information Designer.

Searching for a text string will by default look for matching text in the title and keywords of the items in the library. You can use wildcards and boolean operators to search for parts and combinations of words. For a listing of the basic search syntax, see Searching in TIBCO Spotfire.

Library specific search:

Keyword

Example

Function

title:<word in title>

title:sales

Locates library items with the specified word (or part of word) somewhere in the title.

created_by:<username>

created_by:admin

 

created_by::admin

Locates library items created by a certain user.

In the first example, all items created by any user whose name begins with 'admin' will be found. In the second example, only items created by the user 'admin' will be found.

modified_by:<username>

modified_by:admin

Locates library items modified by a certain user.

item_type:<type>

or

type:<type>

item_type:datasource

Locates items of a specific type. The available types are: column, filter, join, procedure, query (= information link), folder, dxp (= TIBCO Spotfire analysis file), datasource, datafunction and colorscheme.

item_id::<GUID>

or

id::<GUID>

item_id::dac3cd8c-47ec-454a-a8f2-691c60ece052

Locates a specific library item based on its unique identifier.

depends_on(<expression>)

depends_on(item_id::538bcde4-7212-475f-a348-5bb41ba39c41)

 

depends_on(Sales)

Locates all items that depend on a specific element.

 

required_by(<expression>)

required_by(item_id::6f6dc7e0-57bd-11d7-5ac0-0010ac110132)

Locates all items that are required by another item. If the GUID in the example to the left belongs to an information link, the search will find all columns, filters, etc. that are included in that information link.

modified

modified:"2 days ago"

modified:"a week ago"

modified:>"an hour ago"

modified:today

modified:<"this month"

modified::>created

modified:"2009-02-01T18:27:55CEST"

It is possible to search for items that have been modified during a specified time span, relative to today. There are two different ways of describing relative dates and times:

1) State the number of time parts ago in a string surrounded by quotes. The available time parts are seconds, minutes, hours, days, weeks, months and years. For example, search for modified:<"6 months ago". The given number of time units will be subtracted from the current time in the search.

2) State the time period to look back at using one of the keywords today, yesterday, "this week", "this month" or "this year". Note that you need quotes around all keywords consisting of more than one word. In this type of search, the last part of the date or time is "reset" (the time is set to zero, the day of the month is set to 1, etc.). The start day of a week depends on your server locale; for an en-US locale, the first day of the week is Sunday.

Modified, created and accessed can also be used in comparisons with each other. The example modified::>created locates all items that have been modified after their creation.

Modified can also be used together with a timestamp of ISO 8601 format ("yyyy-MM-dd'T'HH:mm:ssz") to find items modified at a specific time.

created

created:>"this week"

created:<"2 weeks ago"

created:>"2009-02-01T18:27:55CEST"

It is possible to search for items that have been created during a specified time span, relative to today. See the details regarding the allowed time spans under "modified" above.

Modified, created and accessed can be used in comparisons with each other.

Created can also be used together with a timestamp of ISO 8601 format ("yyyy-MM-dd'T'HH:mm:ssz") to find items created at a certain time.

accessed

accessed:>"this month"

accessed:<"2 weeks ago"

accessed:null

accessed:>"2009-02-01T18:27:55CEST"

It is possible to search for items that have been accessed during a specified time span, relative to today. See the details regarding the allowed time spans under "modified" above.

Modified, created and accessed can be used in comparisons with each other.

Accessed can also be used together with a timestamp of ISO 8601 format ("yyyy-MM-dd'T'HH:mm:ssz") to find items accessed at a certain time.

The example accessed:null finds all items that have never been accessed. The last example finds all items that have been accessed after the first of February 2009.

::>

modified::>created

Finds items where the value on the left is strictly greater than the value of the expression following the operator.

The example finds all items that have been modified after their creation.

::<

accessed::<modified

Finds items where the value on the left is strictly less than the value of the expression following the operator.

The example finds all items that have been modified after they were last accessed.

parent_id::<folder GUID>

parent_id::538bcde4-7212-475f-a348-5bb41ba39c41

Locates all items located in the specified folder.

format_version:<string or null>

format_version:null

Locates all items of a specified format version. For example, all items which have no format version specified can be found.

content_size:<byte>

content_size:>10000

content_size:>500KB

content_size:<2MB

Locates all items of a specific byte size. In the first example, all items larger than 10000 bytes are found.

If nothing else is specified, the number is interpreted as bytes, but you can specify content sizes in KB, MB or GB as well.

Analysis files:

When searching for analysis files, there are a number of search parameters that may help you locate a specific group of analyses. If you want to locate analysis files only, add type:dxp to the search expression.

Keyword

Example

Function

description

description:sales

 

type:dxp description:sales

Locates all items containing the specified word in their description.

Locates all analysis files containing the specified word in their description.

keywords

keywords:sales

 

type:dxp keywords:sales

Locates all items containing the specified keyword.

Locates all analysis files containing the specified keyword.

AllowWebPlayerResume:<true or false>

AllowWebPlayerResume:true

If true, locates all analysis files that allow personalized views for all web player users.

EmbedAllSourceData:<true or false>

EmbedAllSourceData:true

If true, locates all analysis files that embed all source data. (Override and embed all data check box selected.)

OnDemandInformationLinks:<GUID>

OnDemandInformationLinks:*

 

 

OnDemandInformationLinks:c45618c3-b7ac-43aa-bafe-e14f39fd4bb7

The first example locates all analyses that use on-demand data tables.

You can also specify a GUID to locate all analyses that use a specific information link as an on-demand data table.

AllTablesEmbedded:<true or false>

AllTablesEmbedded:true

If true, locates all analysis files that only have embedded data tables.

Information Model elements:

If you want to locate information model elements of a specific type only, add type:column (or filter, join, procedure, query, folder or datasource) to the search expression.

Keyword

Example

Function

description

description:sales

 

type:query description:sales

Locates all items containing the specified word in their description.

Locates all information links containing the specified word in their description.

column

column:Sales

 

column::Sales

Locates all items referring to a source column with the specified name.

The source column could be referred to in the conditions or groupings of a column element, a filter condition, a join condition or the join condition of a procedure.

table

table:SalesandCost

Locates all items referring to a source table or stored procedure with the specified name.

This could be referred to in the conditions or groupings of a column element, a filter condition, the condition or target tables of a join or in the source procedure or join condition of a procedure.

schema

schema:dbo

Locates all items referring to a source schema with the specified name.

This could be referred to in the conditions or groupings of a column element, a filter condition, the condition or target tables of a join or in the source procedure or join condition of a procedure.

catalog

catalog:Sales

Locates all elements referring to a source catalog with the specified name.

This could be referred to in the conditions or groupings of a column element, a filter condition, the condition or target tables of a join or in the source procedure or join condition of a procedure.

datatype

datatype:integer

Locates all columns of the specified data type (integer, real, string, date, time, datetime, clob or blob).

parameter

 

parameter:MinSales

parameter:*

Locates information links using the specified parameter.

<property_name>:<property_value>

"my.prop":*

Custom properties in any information model element are searchable using the same syntax.

However, note that the property name must be quoted if it contains a '.' delimiter.

Combinations of keywords:

You can combine many of the keywords described above to create more advanced search expressions. For example:

type:query depends_on(type:column salary) - searches for information links that contain a column element named salary

type:query depends_on(column:salary) - searches for information links that contain an element referring to a data source column named salary

required_by(type::query InformationLinkName) - shows the elements used by the information link named InformationLinkName.

(not (required_by(type:dxp))) and type:query - searches for information links that are not used by any analysis file in the library.

  • To search for items in the Open from Library dialog:

Depending on where you are searching, you may get different search results. Only analyses and information links are shown when searching in the Open from Library dialog; information model elements, data sources, etc., are not included.

  1. Navigate to the top folder of the structure you want to perform the search in. If you want to search the entire library, navigate to the library root.

  2. Type the text you want to search for in the search field at the top right corner of the dialog.

  3. Click on the search button with a magnifying glass.

    Response: The dialog will switch to a Search Results view.

  4. The items matching your search criteria will be displayed in the list. To return to the normal folder view, click the Back to folder link.

  • To search for items in the Library Administration tool:

  1. Navigate to the top folder of the structure you want to perform the search in. If you want to search the entire library, navigate to the library root.

  2. Type the text you want to search for in the search field at the top right corner of the Library Administration tool.

  3. Click on the Search button.

    Response: The Library Administration tool will switch to a Search Result view. Note: Searching for data sources does not include searching for database entities like catalogs, schemas or tables. It is only the database instance itself that can be located via search.

  4. The items matching your search criteria will be displayed in the list. To return to the normal folder view, click the Back to folder link.

  • To search for items in Information Designer:

Depending on where you are searching, you may get different search results. Information model elements, information links and data sources are shown when searching in Information Designer; analyses, etc., are not included.

  1. Type the text you want to search for in the search field at the top of the Elements tree.

  2. Click on the search button with a magnifying glass, MagnifyingGlassButton.png.

    Response: The search results are displayed. Note: Searching for data sources does not include searching for database entities like catalogs, schemas or tables. It is only the database instance itself that can be located via search.

  3. The items matching the search result are shown in the list. To return to the normal folder view, click the Clear Search... link.

  • To use search expressions in custom RSS feeds:

You can create a customized RSS feed showing the latest changes to the library items you are interested in by appending a library search expression to a URL.

Use the following syntax to create your own feed:

http://<server>/spotfire/library[/path/to/something/interesting]?rss[&search=<search_expression>]

The path and search parameters are optional. If you only specify http://myspotfireserver/spotfire/library?rss, the feed will return the 20 most recently modified files in the library. You can also add a max-results section if you want to limit the number of results shown; see the examples below.

Examples:

http://myspotfireserver/spotfire/lib...h=content_size:>500KB

http://myspotfireserver/spotfire/lib...ated_by::admin

http://myspotfireserver/spotfire/lib...ch=title:sales
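Search expressions often contain characters such as >, quotes and spaces, so they need to be URL-encoded when placed in the feed URL. A minimal sketch, assuming a server named myspotfireserver:

# Build a custom library RSS feed URL with an encoded search expression.
from urllib.parse import quote

server = "myspotfireserver"
search = 'content_size:>500KB'

url = "http://{0}/spotfire/library?rss&search={1}".format(server, quote(search))
print(url)  # http://myspotfireserver/spotfire/library?rss&search=content_size%3A%3E500KB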

  • To use search expressions in tibcospotfire links:

You can incorporate a search expression in a tibcospotfire link in order to directly populate the Open from Library dialog with suitable analyses or information links. See Links to Analyses in the Library for more information about links. A link is a list of key and value pairs. Keys and values are separated by colons (:), and consecutive key and value pairs are also separated by colons:

tibcospotfire:<key1>:<value1>:<key2>:<value2>...<keyN>:<valueN>

The following keys and values are allowed:

Search: <search expression> with optional parameters.

OrderBy : Title | Modified | Created | Accessed | ContentSize | Description

MaxResult: <positive integer>

SortDirection: Ascending | Descending

The values should be encoded using the following pattern:

Value

Encoded to:

:

\:

"

\'

\

\\

Examples:

tibcospotfire:search:*:OrderBy:Modified:SortDirection:Descending:MaxResult:20

tibcospotfire:search:modified\:<\'3 days ago'\:OrderBy:Modified:SortDirection:Descending
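The encoding pattern above can also be applied programmatically. The helper below is a sketch of the escaping and of assembling a link from key and value pairs; the function names are our own, not part of any Spotfire API.

# Sketch of the tibcospotfire link encoding described above.
def encode_value(value):
    # Escape backslash first, then colon and double quote, per the table.
    return (value.replace("\\", "\\\\")
                 .replace(":", "\\:")
                 .replace('"', "\\'"))

def build_link(pairs):
    parts = []
    for key, value in pairs:
        parts.extend([key, encode_value(value)])
    return "tibcospotfire:" + ":".join(parts)

print(build_link([("search", 'modified:<"3 days ago"'),
                  ("OrderBy", "Modified"),
                  ("SortDirection", "Descending")]))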

Edit Properties
Details on Edit Properties – General

This dialog is used to edit the properties for an item in the library. It can be reached by right-clicking on the item of interest in any view representing the library structure and selecting Edit Properties... from the pop-up menu. In the Library Administration tool, it is reached when clicking on the Edit... link for the Selected Item.

EditProperties.png

To edit the properties of an item you must have Browse + Access + Modify permissions to the folder it is placed in.

Option

Description

Name

The name of the library item. The following characters are not accepted in titles:
\ / : * ? " < > | $ & + = @ # % ; , { [ ] } ^ ' ~ ´

Description

A description of the library item.

Keywords

Allows you to add keywords which can be used for finding the item in the library. Keywords are separated by a semicolon.

See also:

Opening Files from the Library

Saving an Analysis File in the Library

Editing Properties in the Library Administration tool

Details on Edit Properties - Document

Details on Edit Properties – Document

EditProperties-Details.png


Option

Description

Remember personalized view for each Web Player user

Select this check box if you want to allow users to continue where they left off from one time to another when working on the analysis in the Web Player. For example, a user can open the analysis in the Web Player, change the view (by filtering out some data, for instance), close the analysis, and then open the analysis again with the same filter settings.

Note: To make sure this works completely, it is necessary to define key columns for all the data tables in the analysis even if they are embedded.

Allow users to add new bookmarks

The ability to add and modify bookmarks can be restricted on two levels: the user licenses and the property settings on an analysis level. This check box determines the analysis level settings based on the users' library folder permissions.

Clear the check box if you do not want anyone to be able to add bookmarks to the analysis.

Select the check box to allow some or all users to add bookmarks and specify the permitted level using the drop-down list:

Private bookmarks only (all users) – allows all users to add private bookmarks but no public bookmarks are allowed.

Private (all users), public (write permissions needed) – allows all users to add private bookmarks but only users with Modify folder permissions or higher will be able to make bookmarks public.

Private and public bookmarks (write permissions needed) – allows only users with Modify folder permissions or higher to add any bookmarks.

Private and public bookmarks (all users) – allows all users to add both private and public bookmarks.

Select preview image

Select whether or not to show a preview image for this analysis when browsing for analyses in the library.

Automatically - sets the preview image to a snapshot of the active page when saving the analysis to the library.

Manually - allows you to manually select a previously saved image.

(No preview) - use this option to prevent any preview image from being shown in the library.

Browse...

When Select preview image has been set to Manually you can browse for an image to use in the preview.

Current preview image

Displays the currently selected preview image. If Select preview image has been set to Automatically and the analysis has not yet been saved to the library, then no preview will be visible. However, once saved to the library the active page when saving will be used as a preview image.

See also:

Opening Files from the Library

Saving an Analysis File in the Library

Editing Properties in the Library Administration tool

Details on Edit Properties - General

Add Data Tables

How to Insert Multiple Data Tables into the Analysis

Data can be added to the analysis in several different ways: as new columns, as new rows or as new data tables. Adding data as separate data tables is useful if the new data is unrelated to the previously opened data table or if the new data is in a different format (pivoted vs. unpivoted).

If you have a visualization made from a particular data table which has filtering and marking that you would like to apply to visualizations made from another data table, then you must define a relation between the two tables. For a relation to be useful, you need to have one or more key columns (identifier columns) available in both data tables, and use these to define which rows in the first data table will correspond to rows in the second data table. If you need more than one key column to set up a unique identifier, you must add one relation for each identifier column.
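Conceptually, a relation maps rows in one data table to rows in another through the key columns, optionally after a method such as Lower has normalized the values (see To define a new relation below). The following sketch, with invented data, illustrates the idea; it is not how Spotfire stores relations.

# Conceptual sketch of a relation between two data tables via key columns.
master = [{"ProductID": "A1", "Sales": 100},
          {"ProductID": "B2", "Sales": 250}]

details = [{"productid": "a1", "Region": "North"},
           {"productid": "a1", "Region": "South"},
           {"productid": "b2", "Region": "East"}]

# Relation: master.ProductID <-> details.productid, with the Lower method
# applied on the left side so that the key values match.
def related_rows(master_row):
    key = master_row["ProductID"].lower()
    return [row for row in details if row["productid"] == key]

for row in master:
    print(row["ProductID"], "->", related_rows(row))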

Note: The map chart is the only visualization where you can use different data tables in the same visualization. If you need to bring in-memory data from different data sources together in any other single visualization, use the Insert Columns or Insert Rows tools instead. With in-database data tables you can often join several database tables into a single virtual data table before adding it to Spotfire. See Details on Data Tables in Connection for more information.

Tip: For a simple line from a different data table in a scatter plot, see Details on Line from Data Table.

  • To add new data tables to the analysis:

  1. Select File > Add Data Tables....

    Response: The Add Data Tables dialog is displayed.

  2. Click Add and select the type of data to add from the drop-down list.

    Comment: You can add data tables from files, information links, databases, the clipboard, external connections, data functions or from current data tables within your analysis. You may also have access to other sources if they have been set up by your administrators.

    Response: Depending on your selection you will be presented with a dialog where you can specify which file, information link, etc., to add. If you need more information on specific data sources, see Opening a Text File, Opening an Excel File, Opening a SAS File, Opening an Information Link, Opening Data from a Database or Adding Data Connections.

  3. Select the source data and specify any required settings.

  4. If desired, type a new Data table name.

  5. Apply transformations (optional and not applicable for in-database data tables).

  6. If you want to add more data tables, repeat steps 2-5 for each data table.

  7. Determine whether or not the new data tables will be related to each other or to previously added data tables. If a relation is necessary, click Manage Relations... and specify the relation.

    Comment: See To define a new relation below for more information. Remember that you need to define a relation if the new data table is to be used to create details visualizations for the previously added data tables.

  8. Click OK.

    Response: The new data tables are incorporated into the analysis and are ready to be used.

Note: If you want to add a new data table that is loaded on demand you should instead use the File > Add On-Demand Data Table option. See Loading Data on Demand for more information.

  • To define a new relation:

  1. In the Add Data Tables dialog, click Manage Relations....

    Response: The Manage Relations dialog is displayed.

  2. Click on New....

    Response: The New Relation dialog is displayed.

  3. Select the two data tables you want to connect from the Left data table and Right data table drop-down lists.

  4. Select the columns containing the identifiers from the Left column and Right column drop-down lists.

  5. If desired, you can apply a Left method or Right method to modify the values of one or both columns.

    Comment: For example, if the identifiers are written in uppercase letters in one of the data tables and in lowercase letters in the other, you can use the Lower method on the uppercase column and change the letters to lowercase.

    Response: The result of the method application is shown in the Sample field.

  6. Click OK.

Tip: You can always go back and edit relations as well as create new ones using the Data Table Properties dialog.

See also:

Transforming Data

Details
Details on Add Data Tables

Use this dialog to add one or more data tables to your analysis. You can also apply one or more transformation steps before adding the new data table.

AddDataTables.png

Option

Description

Data tables

Lists all data tables that you have selected to add to the analysis, along with information about their origin and any transformations.

Add

 

  Files...

Allows you to add a data table from a file.

  Information Link...

Allows you to add a data table from an information link.

  Connection To

Use one of the options below when you are analyzing massive amounts of data and you need to keep the underlying data of aggregated values in the database rather than bringing it into Spotfire's internal data engine.

     Oracle

Allows you to set up a connection to an Oracle database and analyze your data externally.

     Microsoft SQL Server

Allows you to set up a connection to a SQL Server database and analyze your data externally.

     Microsoft SQL Server Analysis Services

Allows you to set up a connection to a SQL Server Analysis Services cube and analyze your data externally.

     Teradata

Allows you to set up a connection to a Teradata database and analyze your data externally.

  Database...

Allows you to add a data table from any supported database.

  Clipboard

Allows you to add a data table from the clipboard.

  Data Function...

Allows you to add a data table from a data function.

  From Analysis

Allows you to add a data table from the current analysis. For example, you may want to pivot or otherwise transform the data in an already existing data table, but you also want to keep the original data in the analysis.

Remove

Removes the selected data table from the list.

Name

Allows you to change the name of the selected data table.

Show transformations

Expands the dialog and allows you to apply transformations on the data table you want to add. For more information, see Details on Show Transformations.

Manage Relations...

Opens the Manage Relations dialog where you can specify how the new data tables are related to each other or any previously loaded data tables in your analysis.

When working with in-database data tables you must add the data tables to the analysis before you can add any relations. Open this dialog via the Data Table Properties dialog at a later stage instead.

See also:

Details on Add On-Demand Data Table

Details on Manage Relations

This dialog is used to manage relations between both new and previously added data tables in your analysis. When data tables have been related, they can be set up to propagate marking and filtering (see Filtering in Related Data Tables) from one data table to another. A relation between data tables is necessary if you want to set up a details visualization where the marking in one visualization allows you to drill down to details about the selected data in another visualization.

  • To reach the Manage Relations dialog:

  1. Select Edit > Data Table Properties.

  2. Go to the Relations tab.

  3. Click on Manage Relations....

    Comment: You can also reach the Manage Relations dialog from the Data page of the Map Chart Visualization Properties, or from the Add Data Tables or the Add On-Demand Data Table dialogs.

ManageRelations.png

Option

Description

Show relations for

Select the data table whose relations you wish to view, or select All data tables to view all relations in the document.

Relations

Lists all relations for the selected data table or all relations in the document, depending on your selection above.

Note: If one or more relations have become invalid, these will appear in red.

New...

Opens the New Relation dialog where you can define a new relation between two data tables.

Edit...

Opens the Edit Relation dialog where you can edit the relation selected in the Relations list.

Delete

Removes the selected relation from the Relations list.

See also:

Details on Data Table Properties

How to Insert Multiple Data Tables into the Analysis

Details on New/Edit Relation

This dialog is used to define a relation between two data tables.

NewRelation.png

Option

Description

Left data table

Lists all data tables currently available in the analysis. Select one of the data tables for which you wish to define a relation.

Right data table

Lists all data tables currently available in the analysis. Select the data table you wish to relate to the previously selected left data table. If you reached this dialog via an add data table procedure, then the new data tables will be the only ones available here.

Left column

Lists all columns available in the left data table. Select the column to be used in the matching of rows.

Right column

Lists all columns available in the right data table. Select the column to be used in the matching of rows.

Left method

If desired, modifies the content of the selected left column according to the selected method. What methods are available depends on the data type of the selected column. For example, for a string column it is possible to use the methods "Lower" or "Upper" to convert the strings to lowercase or uppercase, respectively.

Right method

If desired, modifies the content of the selected right column according to the selected method. What methods are available depends on the data type of the selected column. For example, for a string column it is possible to use the methods "Lower" or "Upper" to convert the strings to lowercase or uppercase, respectively.

Sample value

Displays the first value of the selected left or right column after any specified method has been applied.

See also:

Details on Manage Relations

Details on Browse for Data Table

This dialog is shown when you have selected to add a data table or additional columns or rows from an existing data table, and you have more than one data table available in the analysis.

BrowseforDataTable.png

Select the data table from which you wish to add or replace data using the drop-down list.

See also:

Details on Add Data Tables

Details on Data Function – Select Input

This dialog allows you to define how the input parameters of the selected data function should be handled when adding data tables. You must map all required parameters to Spotfire in order to use the data function.

  • To reach the Data Function – Select Input dialog:

  1. Select File > Add Data Tables....

    Response: The Add Data Tables dialog is displayed.

  2. Select Add > Data Functions....

    Response: The Data Functions – Select Function dialog is displayed.

  3. Click to select the function of interest from the list, then click OK.

    Comment: If no previously added data tables are available in your document, you will only be able to select data functions that take values as input.

DataFunctions-SelectInput.png

Option

Description

Refresh function automatically

Select this check box to update the results from the data function automatically each time the input settings are changed. If the check box is cleared, a manual refresh is needed in order for any updates to take effect.

A data function set to load automatically will switch to manual update if cyclic dependencies are detected in the analysis.

Input parameters

 

Lists all input parameters that have been defined for the selected data function. Select an input parameter in this list to edit its settings.

Input handler

Lists all possible input handlers for the selected input parameter. Depending on which input handler you select in this list, a different set of settings is available in the lower right part of the dialog.

[Input handler settings]

See the table below.

OK

Adds the selected data function to the Add Data Tables dialog.

Input Handler Settings

Note that which input handlers are available depends on the type of input parameter that is selected (Value, Column or Table). You will not be able to select from all of the input handlers described below when specifying the input for a selected parameter.

Option

Description

Column

 

   Data table

Allows you to select the data table from which to retrieve the input column.

   Column

Allows you to specify which column to use as input from the selected data table.

   Limit by

Use a combination of filtering and markings to limit the calculations to rows matching the specified settings only. If more than one option is selected, then calculations will be performed only for rows matching the intersection of the selected filtering and markings (see the sketch following this table).

Leave both the Filtered rows and the Marked rows check boxes blank to base calculations on all rows.

      Filtered rows

Select this check box to limit the calculations to rows remaining after filtering with the specified filtering scheme.

      Marked rows

Select this check box to limit the calculations to rows marked by the selected markings.

If more than one marking is available in your analysis, you need to determine which marking or markings should control the calculation. If more than one marking is selected, then calculations will be performed for rows matching the intersection of the markings.

Columns

 

   Data table

Allows you to select the data table from which to retrieve the input columns.

   Columns

Lists the selected input columns. Click Select Columns... to change columns.

   Select Columns...

Opens a dialog where you can specify which columns to include as input to the function.

   Limit by

See a description of the options under Column above.

Expression

 

   Data table

Allows you to select the data table to evaluate the expression against.

   Expression

Displays the expression.

   Edit...

Opens the Edit Expression dialog where you can specify an expression.

   Limit by

See a description of the options under Column above.

Value

 

   Value

Allows you to type an input value in the text box.

Document property

 

   Property

Allows you to select a document property to use as input. Use the search field to help locate your property.

   New...

Opens the New Property dialog where you can define a new document property to use as an input parameter.

   Edit...

Opens the Edit Property dialog where you can change the value of the selected property.

   Delete

Deletes the selected property.

Data table property

 

   Data table

Allows you to select the data table to work with.

   Property

Allows you to select a data table property to use as input. Use the search field to help locate your property.

   New...

Opens the New Property dialog where you can define a new data table property to use as an input parameter.

   Edit...

Opens the Edit Property dialog where you can change the value of the selected property.

   Delete

Deletes the selected property.

Column property

 

   Data table

Allows you to select the data table to work with.

   Column

Allows you to select which column to work with.

   Property

Allows you to select the column property you wish to use as input.

   New...

Opens the New Property dialog where you can define a new column property to use as an input parameter.

   Edit...

Opens the Edit Property dialog where you can change the value of the selected property.

   Delete

Deletes the selected property.

None

No input handler has been selected. This can be used for optional input parameters. If the input parameter is required, you must specify a different input handler to be able to continue.
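
To make the intersection behavior of the Limit by options concrete, here is a minimal Python sketch; the row indices and marking names are hypothetical:

  # Illustrative sketch of the "Limit by" intersection semantics.
  filtered_rows = {0, 1, 2, 3, 4}   # rows remaining after filtering
  marking_a = {2, 3, 4, 5}          # rows in marking A
  marking_b = {3, 4, 5, 6}          # rows in marking B

  # Both check boxes selected, two markings selected:
  # input = filtered rows AND marking A AND marking B.
  input_rows = filtered_rows & marking_a & marking_b
  print(sorted(input_rows))  # [3, 4]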

 

See also:

What are Data Functions?

How to Insert Multiple Data Tables into the Analysis

Add On-Demand Data Table

On-Demand Overview

When an information link is to be added to the analysis as a new data table, you have the option to either load all data at once, or to load data on demand only. Your analysis can benefit from on-demand loading when you have access to massive amounts of data, but you only need to work with some parts of the data at a time. When setting up an on-demand data table you can specify conditions based on one or more other data tables to control what to load. You can also start by letting an on-demand data table be the first (or only) data table in the analysis if its input is defined by a document property.

OnDemandOverview.png

See also:

Loading Data on Demand

Example of Marking Controlled On-Demand Details Visualization

Example of Property Controlled On-Demand Data

How to Insert Multiple Data Tables into the Analysis

Loading Data on Demand

Select File > Add On-Demand Data Table... to load data on demand.

The on-demand loading of information links can be controlled by specifying one or more conditions that need to be met for data to be loaded. See Example of Marking Controlled On-Demand Details Visualization and Example of Property Controlled On-Demand Data for examples of how to configure the on-demand loading in those cases.

Conditions can be set in a number of different ways. For example, they could be determined by the value of a property or an expression, or by the values of the filtered or marked rows in a column from another data table. If the selected information link has been set up with required prompts or parameters, then these will automatically be required parameters for the on-demand loading and you must specify a condition using the Define Input button for each required parameter.

If desired, you can apply a transformation to the data prior to loading. Since the transformation is performed only on the data matching the condition, you may gain some performance by applying it here rather than transforming the entire information link directly.

Examples of Conditions

What should control on-demand loading

How to set it up

Marking in another data table.

1. Click to select the column of interest from the Define input for parameters that should control loading list.

2. In the Define Input dialog, set Input for the selected parameter to Values from column.

3. Select the Data table from the analysis where you want to mark data.

4. Select the Column from the selected data table to match against the column in the information link.

5. Under Limit by, select the Marked rows check box.

Filtering in another data table.

1. Click to select the column of interest from the Define input for parameters that should control loading list.

2. In the Define Input dialog, set Input for the selected parameter to Values from column.

3. Select the Data table from the analysis where you want to filter data.

4. Select the Column from the selected data table to match against the column in the information link.

5. Under Limit by, select the Filtered rows check box.

A range of values defined by the min and max values from the current marking or filtering, for a selected column.

1. Click to select the column of interest from the Define input for parameters that should control loading list.

2. In the Define Input dialog, set Input for the selected parameter to Range from column.

3. Select the Data table from the analysis where you want to mark or filter data.

4. Select the Column from the selected data table to match against the column in the information link.

5. Under Limit by, select the Marked rows or Filtered rows check box, as applicable.

A document property value.

1. Click to select the column/parameter of interest from the Define input for parameters that should control loading list.

2. In the Define Input dialog, set Input for the selected parameter to Values (fixed/properties/expression).

3. Click the Property radio button.

4. Click Select... and specify which document property to use in the dialog that opens.

An expression.

1. Click to select the column/parameter of interest from the Define input for parameters that should control loading list.

2. In the Define Input dialog, set Input for the selected parameter to Values (fixed/properties/expression).

3. Click the Expression radio button.

4. Click Edit... and specify your custom expression.

All values over (or under) a certain limit, e.g., Sales > 1000.

1. Click to select the column/parameter of interest from the Define input for parameters that should control loading list.

2. In the Define Input dialog, set Input for the selected parameter to Range (fixed/properties/expression).

3. In the field of interest (e.g., Min), click the Fixed value radio button.

4. Type the value of interest in the field or click Select... to pick a value from the available values in the column.

The data retrieved for the on-demand data table can be based on a combination of all of the examples above.
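
All of the conditions above reduce to the same idea: values or ranges captured from the analysis become parameters of the information link query. A rough Python sketch of the marking-controlled case, with hypothetical names and query text (Spotfire generates the actual query):

  # Illustrative sketch: marked values in a master table become the
  # parameter of the on-demand query.
  master_rows = [{"Type": "Apples"}, {"Type": "Pears"}, {"Type": "Kiwis"}]
  marked_indices = {0, 1}

  marked_values = sorted({master_rows[i]["Type"] for i in marked_indices})
  placeholders = ", ".join("?" for _ in marked_values)
  query = "SELECT * FROM SalesDetails WHERE Type IN (%s)" % placeholders
  print(query, marked_values)
  # SELECT * FROM SalesDetails WHERE Type IN (?, ?) ['Apples', 'Pears']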

Note: Special attention is needed when setting up an on-demand data table dependent on nothing but a parameter defined within a configuration block (a text file which configures the initial state of an analysis, see Spotfire Technology Network). An on-demand data table must always have at least one input defined for the parameter in the Add On-Demand Data Table dialog. Since configuration block parameters will automatically be assigned to document properties (if they exist and have the appropriate data type), a document property may act as a bridge between a configuration block parameter and a parameter in an information link. If another input is defined, such as a column filter, no such bridge is required and the parameter from the configuration block will be used automatically.

See also:

On-Demand Overview

Details on Define Input

Example of Marking Controlled On-Demand Details Visualization

If you have chosen to load data on demand, and specified a marking that should control which data is loaded, you will end up with the following scenario:

When you mark items in a visualization that uses the specified marking, the data for the on-demand data table is updated. The update can either be done automatically each time you change the marking, or manually by clicking on the refresh button displayed when the marking is changed.

This way, you can create a master visualization in which to specify the item of interest and a details visualization where more information about the marked item is loaded from the database only when requested.

  • To set up a marking controlled on-demand data table:

When one identifier column in the first data table is matched by an identifier column in the on-demand data table, the on-demand data table should be set up using the following steps:

  1. Select File > Add On-Demand Data Table....

    Response: The Select Information Link dialog is displayed.

  2. Browse to the information link holding the desired data and select it in the list.

  3. Click OK.

    Response: The Add On-Demand Data Table dialog is displayed.

  4. If desired, change the Data table name for the new on-demand data table.

  5. In the Define input for parameters that should control loading list, click to select the column in the information link that contains the identifiers.

  6. Click Define Input....

    Response: The Define Input dialog is displayed.

  7. In the Input for the selected parameter list, select Values from column.

  8. Select the Data table used by the master visualization.

  9. Select the Column containing identifiers in the master data table.

  10. Make sure that the Marked rows check box is selected, and that only the check box for the marking used in the master visualization is selected.

    Comment: You might also want to add a relation between the two data tables, so that the marked rows from the master data table also become marked in the on-demand data table visualizations. This can be done directly in the Add On-Demand Data Table dialog or later in the Data Table Properties dialog. See Data Table Properties - Relations for more information.

  11. Click OK to close the Define Input dialog.

  12. Use the Load automatically check box to determine whether to reload data as soon as the input conditions change or using a manual update only.

    Comment: This setting can be changed later on in the Data Table Properties dialog.

  13. Click OK.

    Response: The on-demand data table is loaded and a default visualization is created. The data shown in any visualization based on the on-demand data table will depend on what is marked in the master visualization.

Example of on-demand-loaded data table with manual update:

Click on an item in the master visualization:

Exampleofon-demand-loadeddatatablewithmanualupdate1.png

The refresh button of the visualization based on the on-demand-loaded data table appears in the title bar. (If nothing was marked from the beginning, the on-demand visualization will be empty until the first refresh.) Click on refresh.

Exampleofon-demand-loadeddatatablewithmanualupdate2.png

The visualization is updated to show details about the marked item:

 

Clicking on a different item in the master visualization once again displays the refresh button.

Click refresh to update the on-demand visualization to use the new marking:

See also:

On-Demand Overview

Example of Property Controlled On-Demand Data

What is a Details Visualization?

Example of Property Controlled On-Demand Data

The data that is to be loaded on demand can be controlled in a number of ways. See Loading Data on Demand for more information. The example below uses a property control in a text area to select which data to display in a bar chart based on an on-demand data table.

In this example, we first assume that a data table containing a string column called "Type", listing a number of different product types, is loaded in the analysis. We also assume that there is an information link with some additional data available, which also contains a "Type" column. See Creating an Information Link if you need information about how to set up information links.

  • To add an on-demand data table using input from a document property value:

  1. Select File > Add On-Demand Data Table....

    Response: The Select Information Link dialog is displayed.

  2. Browse to the information link holding the desired data and select it in the list.

  3. Click OK.

    Response: The Add On-Demand Data Table dialog is displayed.

  4. If desired, change the Data table name for the new on-demand data table.

  5. In the Define input for parameters that should control loading list, click to select the column containing the product types.

  6. Click Define Input....

    Response: The Define Input dialog is displayed.

  7. In the Input for the selected parameter list, select Values (fixed/properties/expression).

  8. Click on the Property radio button.

  9. Click Select....

    Response: The Select Property dialog is displayed.

  10. If no suitable property is available, click New... in the Document Properties tab.

    Response: The New Property dialog is displayed.

  11. Define a string property using one of the available product types as default value. For example, create a string property called "Type" with the value "Apples".

  12. Click OK in all dialogs.

    Response: The on-demand data table is loaded using the limiting default value and a visualization is displayed. In the example below, the visualization shown is a bar chart displaying the sum of sales for Apples in four different regions.

ExampleofPropertyControlledOn-DemandData1.png

  • To add a property control to a text area for changing the document property:

  1. Create or activate a text area.

  2. Click on the Toggle Edit Mode button, ToggleEditModeButton.png, in the title bar of the text area.

  3. Type some descriptive text to help other users understand what the control will do.

  4. Click on the Insert Property Control button, InsertPropertyControlButton.png, and select which type of control to add. In this example we will add a drop-down list.

    Response: The Property Control dialog is displayed.

  5. Select the previously specified document property.

  6. Select Set property value through: Unique values in column.

  7. As the Data table, select the first data table in the analysis (not the on-demand data table).

  8. Select the "Type" Column.

  9. If desired, limit the values to be displayed in the drop-down list using a search expression.

    Comment: Only those values matching the search expression will be shown in the control. See Searching in TIBCO Spotfire for more information about valid search expressions.

  10. Click OK.

    Response: The property control is added to the text area.

  11. Click on the Toggle Edit Mode button again to exit the edit mode.

You can now use the property control to change which product type to look at in the visualization. If Load automatically has been selected in the on-demand settings, the visualization will be updated each time the property is changed via the control. If Load automatically has not been selected, a refresh button will be displayed in the title bar of the visualization each time the input is changed.

ExampleofPropertyControlledOn-DemandData2.png
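
The Load automatically behavior described above can be pictured with a small Python sketch; the class and method names below are hypothetical and only illustrate the update logic:

  # Illustrative sketch of automatic vs. manual reload of on-demand data.
  class OnDemandTable:
      def __init__(self, load_automatically):
          self.load_automatically = load_automatically
          self.needs_refresh = False

      def on_input_changed(self, value):
          if self.load_automatically:
              self.reload(value)
          else:
              self.needs_refresh = True  # refresh button appears

      def reload(self, value):
          self.needs_refresh = False
          print("loading rows where Type =", value)

  table = OnDemandTable(load_automatically=False)
  table.on_input_changed("Pears")  # refresh button shown, nothing loaded yet
  table.reload("Pears")            # user clicks refresh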

See also:

Using Properties in the Analysis

Details
Details on Add On-Demand Data Table

This dialog is used when you want to add a data table where data is loaded only when requested. You can specify the input that controls the loading in a number of different ways.

  • To reach the Add On-Demand Data Table dialog:

  1. Select File > Add On-Demand Data Table....

  2. Browse to the information link holding the desired data and select it in the list.

  3. Click OK.

AddOn-DemandDataTable.png

Option

Description

Source

Lists the path and name of the selected information link.

Browse...

Allows you to select a different information link.

Data table name

Allows you to specify a name for the new on-demand data table.

Define input for parameters that should control loading

This is where you select what will affect the loading of data from the perspective of the information link. All columns and parameters available in the selected information link are listed. Click to select the parameter in the list and click Define Input... to specify a condition that must be fulfilled for any data to be loaded.

For example, this is where you specify that the marking based on a certain column from one data table will limit what is shown in the on-demand data table. If you would like to retrieve only data for a certain Region as shown in the picture above, you would select Region in this list and click Define Input to specify that only those rows corresponding to the set condition (e.g., marked rows in the data table "Sales Data") should be retrieved.

Any required prompts or parameters that were specified upon the creation of the information link will be listed as Required parameters in this field. This means that you must specify input handling of these parameters to be able to load any on-demand data at all.

Define Input...

Opens the Define Input dialog where you can tie the selected parameter to a value or a range.

Clear Input

Removes the previously added input definition from the selected column or parameter.

Load automatically

Select this check box if the on-demand data should be loaded automatically each time the specified input conditions are changed. If the check box is cleared, the visualization can be manually updated using the refresh icon in the visualization title bar.

A data table set to load automatically will switch to manual update if cyclic dependencies are detected in the analysis.

Allow caching

Select this check box to allow caching of data. This may speed up the process when loading new subsets of data. However, if the underlying information link data are updated during the current TIBCO Spotfire session, you may end up with different results for a specific set of input values, depending on whether or not the current selection is stored in the cache (see the sketch following this table). You should always clear the check box if you know that the underlying data may be updated during your current session.

Show transformations

Expands the dialog and allows you to apply transformations on the data table you want to add. For more information, see Details on Show Transformations.

Manage Relations...

Opens the Manage Relations dialog where you can define how the on-demand data table should be related to other data tables in your analysis.

If you want marked rows in one data table to also show up as marked in the other data table, then adding a relation is necessary.
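
The caching trade-off can be illustrated with a minimal Python sketch; the names and values are hypothetical, and this is not Spotfire's actual cache implementation:

  # Illustrative sketch: a cached on-demand load can return stale rows.
  cache = {}
  db = {"Apples": 100}  # stand-in for the underlying information link data

  def load(values):
      key = frozenset(values)
      if key not in cache:                      # fetch only on a cache miss
          cache[key] = {v: db[v] for v in values}
      return cache[key]

  print(load({"Apples"}))  # {'Apples': 100} - fetched and cached
  db["Apples"] = 999       # underlying data changes during the session
  print(load({"Apples"}))  # still {'Apples': 100} - served from the cache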

See also:

Loading Data on Demand

How to Insert Multiple Data Tables into the Analysis

Details on Add Data Tables

Details on Define Input

This dialog is used to tie the selected parameter to a specified value or a range from the perspective of the analysis.

  • To reach the Define Input dialog:

  1. Select File > Add On-Demand Data Table....

  2. Browse to the information link holding the desired data and select it in the list.

  3. Click OK.

  4. Select the parameter for which you want to create a condition and click Define Input....

DefineInput1.png

Option

Description

Selected parameter

Shows the name of the parameter that was selected in the Add On-Demand Data Table dialog in a previous step.

Input for the selected parameter

Allows you to select whether to retrieve the input for the parameter from values or a range. See a description of the various options below.

   Values from column

Use this option to set the conditions for the parameter from the values in a column already in the analysis. The data retrieved for the on-demand data table can be based on filtered or marked rows, or a combination of both. See below for details.

   Range from column

Use this option to set the conditions for the parameter from the range of a column already in the analysis. The resulting range will be the min and the max values from the selected column.

   Values (fixed/properties/expression)

Use this option if you want to specify fixed values or connect the parameter to a property. You can also calculate the values with an expression.

   Range (fixed/properties/expression)

Use this option if you want to specify a fixed range or connect the parameter range to properties. You can also calculate the values with an expression.

Note: This option can also be used to set a single limit for a range, either an upper or a lower limit, such as loading only "Sales < 100".

 

Values from column/Range from column settings

DefineInput2.png

Option

Description

Data table

Select the data table where the column of interest is located.

Column

Select the column from which the input values should be picked.

Limit by

Use a combination of filtering and markings to limit the loaded data to rows matching the specified settings only. If more than one option is selected, then data will be retrieved for rows matching the intersection of the selected filtering and markings only.

Leave both the Filtered rows and the Marked rows check boxes blank to retrieve data for all rows.

   Filtered rows

Select this check box to retrieve data for values remaining after filtering with the specified filtering scheme.

   Marked rows

Select this check box to retrieve data for values marked by the selected markings.

If more than one marking is available in your analysis, you need to determine which marking or markings should control the loading. If more than one marking is selected, then data will be retrieved for rows matching the intersection of the markings.

   Number of identifiers to limit data loading to

Available for the Values from column option only.

Use this check box to determine whether the on-demand data should be loaded regardless of how many identifiers have been marked, or whether there should be a limit on the number of identifiers that data can be retrieved for.

The purpose of this option is to limit the amount of data loaded from the server. If the check box is selected and more identifiers than the specified number are marked, the data table will be empty (see the sketch following this table).
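
A minimal Python sketch of the identifier-limit guard described above; the limit and identifiers are hypothetical:

  # Illustrative sketch of "Number of identifiers to limit data loading to".
  def load_on_demand(ids):
      # Stand-in for the actual database retrieval.
      return [{"id": i} for i in sorted(ids)]

  marked_identifiers = {"A", "B", "C", "D"}
  limit = 3  # hypothetical limit entered in the dialog

  if len(marked_identifiers) > limit:
      loaded_rows = []  # limit exceeded: the on-demand table stays empty
  else:
      loaded_rows = load_on_demand(marked_identifiers)

  print(loaded_rows)  # [] because 4 identifiers > limit of 3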

 

Values (fixed/properties/expression) settings

DefineInput3.png

Option

Description

Fixed value

Select this option to type a value to use as input for the parameter.

   Select...

Opens the Select Value dialog where you can pick a value available in the selected column element to use as a fixed value. This may be helpful when you are uncertain of which values are valid for the specified information link.

The button will not be available if your information link is parameterized.

If the selected parameter is a date or a datetime column, you will be able to select a date by clicking on the calendar icon instead, CalendarIcon.png.

Property

Select this option to tie the parameter value to a property value. Property values can easily be changed if you add a property control to a text area. See Using Properties in the Analysis for more information.

   Select...

Opens the Select Property dialog where you can specify a property to tie to the selected parameter.

Expression

Select this option if you need to perform some calculation to obtain the desired input parameter value. For example, if the input is to be affected by multiple columns and properties, these can be defined in an expression.

   Edit...

Opens the Edit Expression dialog where you can specify the expression to control the input parameter.

Limit by

Use a combination of filtering and markings to limit the loaded data to rows matching the specified settings only. If more than one option is selected, then data will be retrieved for rows matching the intersection of the selected filtering and markings only.

Leave both the Filtered rows and the Marked rows check boxes blank to retrieve data for all rows.

   Filtered rows

Select this check box to retrieve data for values remaining after filtering with the specified filtering scheme.

   Marked rows

Select this check box to retrieve data for values marked by the selected markings.

If more than one marking is available in your analysis, you need to determine which marking or markings should control the loading. If more than one marking is selected, then data will be retrieved for rows matching the intersection of the markings.

   Number of identifiers to limit data loading to

Use this check box to determine whether the on-demand data should be loaded regardless of how many identifiers have been marked, or whether there should be a limit on the number of identifiers that data can be retrieved for.

The purpose of this option is to limit the amount of data loaded from the server. If the check box is selected and more identifiers than the specified number are marked, the visualization will be empty.

Include empty values

Select this check box to also include rows that do not contain any data for the specified column.

 

Range (fixed/properties/expression) settings

 DefineInput4.png

Option

Description

Fixed value

Select this option to type a value to use as input for the parameter.

   Select...

Opens the Select Value dialog where you can pick a value available in the selected column element to use as a fixed value. This may be helpful when you are uncertain of which values are valid for the specified information link.

The button will not be available if your information link is parameterized.

If the selected parameter is a date or a datetime column, you will be able to select a date by clicking on the calendar icon instead, CalendarIcon.png.

Property

Select this option to tie the parameter value to a property value. Property values can easily be changed if you add a property control to a text area. See Using Properties in the Analysis for more information.

   Select...

Opens the Select Property dialog where you can specify a property to tie to the selected parameter.

Expression

Select this option if you need to perform some calculation to obtain the desired input parameter value. For example, if the input is to be affected by multiple columns and properties, these can be defined in an expression.

   Edit...

Opens the Edit Expression dialog where you can specify the expression to control the input parameter.

   Base on

Displays whether the calculations will be based on All values, Filtered values or Marked values.

   Settings...

Opens a dialog where you can define whether to base the calculations on All values, Filtered values or Marked values.

Include empty values

Select this check box to also include rows that do not contain any data for the specified column.

It is not necessary to specify both a min and a max input value for a range; one is sufficient.

See also:

Loading Data on Demand

Details on Select Value

This dialog is used to select a fixed value to control the input of an on-demand loaded data table.

SelectValue.png

The Available values list shows all unique values in the selected information link column element. Click to select the value to use. Use the search field to limit the values shown to those matching the search expression. See Searching in TIBCO Spotfire for more information.

See also:

Details on Define Input

Details on Select Property

This dialog is used to specify a property that will contain a parameter value for an action control or an on-demand information link parameter.

Document Properties

SelectProperty1.png

Option

Description

Select property

Select the property you want to tie to the parameter value from the list. You can type an expression in the search field to limit the number of displayed properties. If no suitable properties are available, you can create a new one by clicking New....

New...

Opens a dialog where you can specify a new document property.

Edit...

Opens a dialog where you can edit the selected document property.

Delete

Deletes the selected document property.

Data Table Properties

SelectProperty2.png

Option

Description

Data table

Allows you to select the data table to work with.

Select property

From the list, select the property you want to tie to the parameter value. You can type an expression in the search field to limit the number of displayed properties. If no suitable properties are available, you can create a new one by clicking New....

New...

Opens a dialog where you can specify a new data table property.

Edit...

Opens a dialog where you can edit the selected data table property.

Delete

Deletes the selected data table property.

Column Properties

SelectProperty3.png

Option

Description

Data table

Allows you to select the data table to work with.

Column

Allows you to select which column to work with.

Select property

From the list, select the property you want to tie to the parameter value. You can type an expression in the search field to limit the number of displayed properties. If no suitable properties are available, you can create a new one by clicking New....

New...

Opens a dialog where you can specify a new column property.

Edit...

Opens a dialog where you can edit the selected column property.

Delete

Deletes the selected column property.

See also:

Details on Action Control

Details on Add On-Demand Data Table

Details on Define Input

Details on Settings

This dialog is used to define whether to base the calculations on all values, or to limit them by filtering and/or markings.

Settings.png

Option

Description

Limit by

Use a combination of filtering and markings to limit the loaded data to rows matching the specified settings only. If more than one option is selected, then data will be retrieved for rows matching the intersection of the selected filtering and markings only.

Leave both the Filtered rows and the Marked rows check boxes blank to retrieve data for all rows.

   Filtered rows

Select this check box to retrieve data for values remaining after filtering with the specified filtering scheme.

   Marked rows

Select this check box to retrieve data for values marked by the selected markings.

If more than one marking is available in your analysis, you need to determine which marking or markings should control the loading. If more than one marking is selected, then data will be retrieved for rows matching the intersection of the markings.

See also:

Marking in Visualizations

Filtering Schemes

Details on Select Information Link

This dialog is used to specify which information link to load when adding a data table to the analysis.

SelectInformationLink.png

Navigate through the folders, and select the information link you want to use. Information about the selected information link is displayed to the right of the list of folders and information links. Which library folders you have access to is controlled by group privileges. Contact your Spotfire administrator if you cannot reach all the necessary information links.

You can search for an information link in the library by entering a name, or part of a name, in the search field in the upper right corner of the dialog, and then pressing Enter. All information links and folders matching your search string will then be listed. See Searching the Library for more information about search expressions.

See also:

Details on Add Data Tables

Details on Add On-Demand Data Table

Add a Data Table Connection

Adding Data Connections

  • To add a connection to Microsoft SQL Server:

  1. Select File > Add Data Tables....

    Response: The Add Data Tables dialog is displayed.

  2. Click Add > Connection To > Microsoft SQL Server.

    Response: The Microsoft SQL Server Connection dialog is opened.

  3. Specify the Server you want to connect to.

  4. Select Authentication method.

  5. If you selected SQL Server authentication, specify Username and Password.

  6. Click Connect.

    Response: Spotfire will connect to the specified server, and the databases that are available on the server will be listed in the Database drop-down list.

  7. Select the Database of interest.

  8. Click OK.

    Response: If the database you connect to contains a large number of tables, you will reach the Select Database Tables dialog which lets you limit the number of tables to work with, see step 9. Otherwise you will reach the Data Tables in Connection dialog, see step 13.

  9. In the Available tables list, select one or more tables that you want to be able to work with in Spotfire.

  10. Click Add >.

    Response: The tables are moved from the Available tables list to the Selected tables list.

  11. Repeat steps 9 and 10 until all the tables of interest have been moved to the Selected tables list.

    Comment: Retrieving the tables and their schemas from the database may take some time if you add a large number of tables. Therefore, it is recommended that you add only the tables you need to work with.

    Tip: Let Spotfire locate related tables for you. Select one or more interesting tables in the Selected tables list and then click on the Add Related Tables button. All tables that have a relation in the database to the selected tables will then be added to the list.

  12. Click OK.

    Response: The Data Tables in Connection dialog is opened.

  13. In the Available tables in database list, double-click on the tables you want to work with in Spotfire.

    Response: The tables are moved to the Data tables in connection list. If you add a table with relations to other tables (indicated by an arrow to the left of the table name), all related tables will automatically be included, and the resulting data table in Spotfire will be a joined virtual table with columns from all the related tables.

    Comment: Click on a table in the Data tables in connection list to view the columns in the table.

  14. Enter a descriptive Data connection name.

  15. Click OK.

    Response: The connection with the selected tables is added to the Data tables list in the Add Data Tables dialog.

  16. Click OK.

    Response: A connection to Microsoft SQL Server has now been added to the analysis. A default visualization is opened in Spotfire, and the selected data tables are ready to be used.

  • To add a connection to Teradata:

  1. Select File > Add Data Tables....

    Response: The Add Data Tables dialog is displayed.

  2. Click Add > Connection To > Teradata.

    Response: The Teradata Connection dialog is opened.

  3. Specify the Server you want to connect to.

  4. If desired, select Use data encryption.

  5. Select Authentication method.

  6. If you selected Teradata authentication, specify Username and Password.

  7. Click Connect.

    Response: Spotfire will connect to the specified server, and the databases that are available on the server will be listed in the Database drop-down list.

  8. Select the Database of interest.

  9. Click OK.

    Response: If the database you connect to contains a large number of tables, you will reach the Select Database Tables dialog which lets you limit the number of tables to work with, see step 10. Otherwise you will reach the Data Tables in Connection dialog, see step 14.

  10. In the Available tables list, select one or more tables that you want to be able to work with in Spotfire.

  11. Click Add >.

    Response: The tables are moved from the Available tables list to the Selected tables list.

  12. Repeat steps 10 and 11 until all the tables of interest have been moved to the Selected tables list.

    Comment: Retrieving the tables and their schemas from the database may take some time if you add a large number of tables. Therefore, it is recommended that you add only the tables you need to work with.

    Tip: Let Spotfire locate related tables for you. Select one or more interesting tables in the Selected tables list and then click on the Add Related Tables button. All tables that have a relation in the database to the selected tables will then be added to the list.

  13. Click OK.

    Response: The Data Tables in Connection dialog is opened.

  14. In the Available tables in database list, double-click on the tables you want to work with in Spotfire.

    Response: The tables are moved to the Data tables in connection list. If you add a table with relations to other tables (indicated by an arrow to the left of the table name), all related tables will automatically be included, and the resulting data table in Spotfire will be a joined virtual table with columns from all the related tables.

    Comment: Click on a table in the Data tables in connection list to view the columns in the table.

  15. Enter a descriptive Data connection name.

  16. Click OK.

    Response: The connection with the selected tables is added to the Data tables list in the Add Data Tables dialog.

  17. Click OK.

    Response: A connection to Teradata has now been added to the analysis. A default visualization is opened in Spotfire, and the selected data tables are ready to be used.

  • To add a connection to Oracle:

  1. Select File > Add Data Tables....

    Response: The Add Data Tables dialog is displayed.

  2. Click Add > Connection To > Oracle.

    Response: The Oracle Connection dialog is opened.

  3. Specify the Server you want to connect to.

  4. Select whether to Connect using: SID or Service name.

  5. Select Authentication method.

  6. If you selected Oracle authentication, specify Username and Password.

  7. Click OK.

    Response: If the database you connect to contains a large number of tables, you will reach the Select Database Tables dialog which lets you limit the number of tables to work with, see step 8. Otherwise you will reach the Data Tables in Connection dialog, see step 12.

  8. In the Available tables list, select one or more tables that you want to be able to work with in Spotfire.

  9. Click Add >.

    Response: The tables are moved from the Available tables list to the Selected tables list.

  10. Repeat steps 8 and 9 until all the tables of interest have been moved to the Selected tables list.

    Comment: Retrieving the tables and their schemas from the database may take some time if you add a large number of tables. Therefore, it is recommended that you add only the tables you need to work with.

    Tip: Let Spotfire locate related tables for you. Select one or more interesting tables in the Selected tables list and then click on the Add Related Tables button. All tables that have a relation in the database to the selected tables will then be added to the list.

  11. Click OK.

    Response: The Data Tables in Connection dialog is opened.

  12. In the Available tables in database list, double-click on the tables you want to work with in Spotfire.

    Response: The tables are moved to the Data tables in connection list. If you add a table with relations to other tables (indicated by an arrow to the left of the table name), all related tables will automatically be included, and the resulting data table in Spotfire will be a joined virtual table with columns from all the related tables.

    Comment: Click on a table in the Data tables in connection list to view the columns in the table.

  13. Enter a descriptive Data connection name.

  14. Click OK.

    Response: The connection with the selected tables is added to the Data tables list in the Add Data Tables dialog.

  15. Click OK.

    Response: A connection to Oracle has now been added to the analysis. A default visualization is opened in Spotfire, and the selected data tables are ready to be used.

  • To add a connection to Microsoft SQL Server Analysis Services:

  1. Select File > Add Data Tables....

    Response: The Add Data Tables dialog is displayed.

  2. Click Add > Connection To > Microsoft SQL Server Analysis Services.

    Response: The Microsoft SQL Server Analysis Services Connection dialog is opened.

  3. Specify the Server you want to connect to.

  4. Click Connect.

    Response: Spotfire will connect to the specified server, and the databases that are available on the server will be listed in the Database name drop-down list.

  5. Select the Database of interest.

  6. Select the Cube of interest.

  7. Click OK.

    Response: The connection with the selected cube is added to the Data tables list in the Add Data Tables dialog.

  8. Specify a descriptive Name for the connection.

  9. Click OK.

    Response: A connection to Microsoft SQL Server Analysis Services has now been added to the analysis, and a default visualization is opened in Spotfire.

  • To add a structural relation between database tables:

  1. Open the Data Tables in Connection dialog.

  2. Under Relations, click New....

  3. Select the two data tables you want to connect from the Foreign key table and Primary key table drop-down lists.

  4. Select the columns containing the identifiers from the Column drop-down lists.

  5. You can specify a second pair of identifiers by selecting the check box Second column pair, and a third pair of identifiers by selecting the check box Third column pair.

  6. Click OK when you have specified the necessary identifiers.

    Response: A relation is created between the two tables. If you add the Foreign key table to the Data tables in connection list, then the columns in the Primary key table will be included automatically (see the sketch below). To see which tables have a relation to a specific table, you can click on the arrow next to the table to expand the tree structure. Note that adding the Primary key table to the Data tables in connection list will not include the columns in the Foreign key table.
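
Conceptually, the resulting joined virtual table is equivalent to a join from the foreign key table to the primary key table. A minimal Python sketch, with hypothetical table and column names:

  # Illustrative sketch: foreign key table "Orders" relates to primary key
  # table "Customers"; adding Orders pulls in Customers' columns via a join.
  customers = {1: {"Name": "ACME"}, 2: {"Name": "Globex"}}  # key: CustomerID
  orders = [{"OrderID": 10, "CustomerID": 1, "Amount": 250},
            {"OrderID": 11, "CustomerID": 2, "Amount": 125}]

  joined_virtual_table = [
      {**order, **customers[order["CustomerID"]]} for order in orders
  ]
  print(joined_virtual_table[0])
  # {'OrderID': 10, 'CustomerID': 1, 'Amount': 250, 'Name': 'ACME'}
  # Adding Customers alone would not pull in Orders' columns.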

  • To delete a structural relation:

Note: Only structural relations defined in Spotfire can be deleted, not those set up by the database administrator. However, you can always add a database table to a connection without including a related table by clearing the check box for the related table in the Data tables in connection list.

  1. Open the Data Tables in Connection dialog.

  2. In the Available tables in database list, locate the table that is the foreign key table in the relation.

  3. Expand the tree view by clicking on the arrow next to it.

  4. Select the table in the expanded view.

    Comment: This is the table that was set up as the Primary key table when the relation was created.

    Response: The Delete button is enabled.

  5. Click on the Delete button.

    Response: The relation between the two tables is removed.

    Comment: Note that relations may include other relations, so deleting a relation for one table may also affect the resulting number of columns in other virtual tables in the Data tables in connection list.

  • To edit an existing structural relation:

Note: Only structural relations defined in Spotfire can be edited, not those set up by the database administrator.

  1. Open the Data Tables in Connection dialog.

  2. In the Available tables in database list, locate the table that is the foreign key table in the relation.

  3. Expand the tree view by clicking on the arrow next to it.

  4. Select the table in the expanded view.

    Comment: This is the table that was set up as the Primary key table when the relation was created.

    Response: The Edit... button is enabled.

  5. Click on the Edit... button.

  6. Make the desired changes in the Edit Relation dialog and click OK.

    Response: The relation is updated.

Note: You can also create relations between different data tables in Spotfire without actually joining them. This forms a looser connection between the tables, but it can be used if you want to set up a details visualization using one of the data tables, limited by selections in the other. See Details on Manage Relations for more information.

See also:

Data Overview

Details
Details on Microsoft SQL Server Connection

This dialog is used to set up a connection to a Microsoft SQL Server database, where you can analyze data from the database without bringing it into your analysis.

  • To reach the Microsoft SQL Server Connection dialog:

  1. Select File > Add Data Tables....

  2. Click Add.

  3. Select Connection To > Microsoft SQL Server.

    Comment: You can also set up new connections in the Data Connection Properties dialog.

MicrosoftSQLServerConnection.png

Option

Description

Server

The name of the server where your data is located. To include a port number, add it directly after the server name, preceded by a comma. To include an instance name, add it directly after the server name, preceded by a backslash.

Example with port number:
MyDatabaseServer,1234

Example with instance name:
MyDatabaseServer\InstanceName

Authentication method

The authentication method to use when logging into the database. Choose from Windows authentication and SQL Server authentication.

Windows authentication

When using Windows authentication, e.g., Kerberos, the access token of the logged-in user will be used. Users that have been given the appropriate access rights to SQL Server will be able to connect and read data.

Domain credentials are not stored in the analysis file.

SQL Server authentication

With database authentication, the authentication is done using a database user. Database credentials can be stored, unencrypted, as part of the analysis file, using a setting in the Data Connection Properties dialog. If credentials are found in the analysis file, they will be used to automatically authenticate against the database.

If no credentials or credentials profiles are found in the analysis file, everyone who opens the file will be prompted for database credentials.

Note that there will be no prompting for credentials if the credentials embedded in the analysis file fail.

Username

The username you wish to use when logging into the SQL Server database.

Password

The password for the specified username.

Connect

Connects you to the specified server and populates the list of available databases below.

Database

Select the database of interest from the drop-down list.

See also:

Data Overview

Adding Data Connections

Details on Oracle Connection

This dialog is used to set up a connection to an Oracle database, where you can analyze data from the database without bringing it into your analysis. Note that the Oracle Data Access Component (ODAC) driver (Oracle Data Provider for .NET4 (ODP.NET4)) must be installed on the machine running the Oracle connector. See the system requirements at http://support.spotfire.com/sr.asp for details.

  • To reach the Oracle Connection dialog:

  1. Select File > Add Data Tables....

  2. Click Add.

  3. Select Connection To > Oracle.

    Comment: You can also set up new connections in the Data Connection Properties dialog.

OracleConnection.png

Option

Description

Server

The name of the server where your data is located. To include a port number, add it directly after the server name, preceded by a colon.

Example with port number:
MyOracleDatabaseServer:1234

Connect using

 

   SID

Select this option to specify an Oracle System Identifier (SID) to use when connecting to the database. SID is used to uniquely identify a particular database on a system.

   Service name

Select this option to specify a service name to use when connecting to the database. The service name is the TNS alias that you give when you remotely connect to your database.

Authentication method

The authentication method to use when logging into the database. Choose from Windows authentication and Oracle authentication.

Windows authentication

When using Windows authentication, e.g., Kerberos, the access token of the currently logged-in user will be used. Users that have been given the appropriate access rights to the Oracle database will be able to connect and read data.

Domain credentials are not stored in the analysis file.

Oracle authentication

With database authentication, the authentication is done using a database user. Database credentials can be stored, unencrypted, as part of the analysis file, using a setting in the Data Connection Properties dialog. If credentials are found in the analysis file, they will be used to automatically authenticate against the database.

If no credentials or credentials profiles are found in the analysis file, everyone who opens the file will be prompted for database credentials.

Note that there will be no prompting for credentials if the credentials embedded in the analysis file fail.

Username

The username you wish to use when logging into the Oracle database.

Password

The password for the specified username.

See also:

Data Overview

Adding Data Connections

Details on Teradata Connection

This dialog is used to set up a connection to a Teradata database, where you can analyze data from the database without bringing it into your analysis. Note that the Teradata .NET Data Provider must be installed on the machine running the Teradata connector. See the system requirements at http://support.spotfire.com/sr.asp for details.

  • To reach the Teradata Connection dialog:

  1. Select File > Add Data Tables....

  2. Click Add.

  3. Select Connection To > Teradata.

    Comment: You can also set up new connections in the Data Connection Properties dialog.

TeradataConnection.png

Option

Description

Server

The name of the server where your data is located. To include a port number, add it directly after the server name, preceded by a colon.

Example with port number:
MyTeradataDatabaseServer:1234

Use data encryption

Select this check box to increase security by using data encryption.

Authentication method

The authentication method to use when logging into the database. Choose from Windows authentication and Teradata authentication.

Windows authentication

When using Windows Authentication, e.g., Kerberos, the access token of the logged in user will be used. Users that have been given the appropriate access rights to Teradata will be able to connect and read data.

Domain credentials are not stored in the analysis file.

Teradata authentication

With database authentication, authentication is performed using a database user. Database credentials can be stored, unencrypted, as part of the analysis file, using a setting in the Data Connection Properties dialog. If credentials are found in the analysis file, they will be used to automatically authenticate against the database.

If no credentials or credentials profiles are found in the analysis file, everyone who opens the file will be prompted for database credentials.

Note that you will not be prompted for credentials if the credentials embedded in the analysis file fail.

Username

The username you wish to use when logging into the Teradata database.

Password

The password for the specified username.

Connect

Connects you to the specified server and populates the list of available databases below.

Database

Select the database of interest from the drop-down list.

See also:

Data Overview

Adding Data Connections

Details on Microsoft SQL Server Analysis Services Connection

This dialog is used to set up a connection to a Microsoft SQL Server Analysis Services cube, where you can analyze data from the cube without bringing it into your analysis. Note that the Microsoft Adomd.NET driver must be installed on the machine running the Microsoft SQL Server Analysis Services connector. See the system requirements at http://support.spotfire.com/sr.asp for details.

  • To reach the Microsoft SQL Server Analysis Services Connection dialog:

  1. Select File > Add Data Tables....

  2. Click Add.

  3. Select Connection To > Microsoft SQL Server Analysis Services.

    Comment: You can also set up new connections in the Data Connection Properties dialog.

MicrosoftSQLServerConnection.png

Option

Description

Server

The name of the server where your data is located. To include a port number, add it directly after the name, preceded by a colon. To include an instance name, add it directly after the server name, preceded by a backslash.

Example with port number:
myDatabaseServer:1234

Example with instance name:
MyDatabaseServer\InstanceName

Connect

Connects you to the specified server and populates the lists of available databases and cubes below. Microsoft SQL Server Analysis Services only supports Windows authentication.

Database

Select the database of interest from the drop-down list.

Cube

Select the cube of interest from the drop-down list.

See also:

Data Overview

Adding Data Connections

Working With Cubes

Details on Select Database Tables

This dialog will appear if the database you connect to contains a large number of tables. It is used to make an initial selection of the tables to show, since retrieving a large number of tables from the database may take some time. The permissions set in the database decide whether or not you are allowed to view the tables in the database in this dialog. If the dialog is empty, you do not have sufficient permissions.

  • To reach the Select Database Tables dialog when opening data:

  1. Select File > Add Data Tables....

  2. Click Add.

  3. Select Connection To > either Microsoft SQL Server, Oracle, or Teradata.

  4. Specify which server to log into, and enter the needed credentials.

  5. Click Connect.

  6. Select the database of interest, then click OK.

    Comment: This dialog will appear if the database you connect to contains a large number of tables. Otherwise you will reach the Data Tables in Connection dialog directly. You can still reach the Select Database Tables dialog by clicking the Edit Tables... button in the Data Tables in Connection dialog.

  • To reach the Select Database Tables dialog when data has already been loaded:

  1. Select Edit > Data Connection Properties....

  2. In the list of Connections, select the connection with the data tables of interest.

  3. Click on the Data Tables tab.

  4. Click Edit... to open the Data Tables in Connection dialog.

  5. Click on the Edit Tables... button.

SelectDatabaseTables.png

Option

Description

Available tables

Lists the tables that are available in the database. If the database has a hierarchical structure, this will be reflected in the list by showing the table name preceded by the database schema it is included in.

Example:
Sales.Customer
The table Customer resides in the database schema named Sales.

Tip: Use the search field to find the relevant tables if the list of tables is long. It is possible to use the wildcard character * in the search. See Searching in TIBCO Spotfire for more information.

Selected tables

Lists the tables that you have added from the Available tables list, and that you want to be able to use in Spotfire.

Note: It is recommended to select only the tables that you need to work with, since retrieving the tables and schemas from the database may take some time.

Add >

Adds the tables selected in the Available tables list to the Selected tables list.

< Remove

Removes the selected tables from the Selected tables list and sends them back to the Available tables list.

Add Related Tables

Select one or more tables in the Selected tables list, and click Add Related Tables to include all the tables that have a relation to the selected tables in the database.

See also:

Details on Data Tables in Connection

Details on Data Tables in Connection

This dialog is used to select which tables should be included in the connection. If tables are related, they can be joined into a single virtual table.

  • To reach the Data Tables in Connection dialog when opening data:

  1. Select File > Add Data Tables....

  2. Click Add.

  3. Select Connection To > either Microsoft SQL Server, Oracle, or Teradata.

  4. Specify which server to log into, and enter the needed credentials.

  5. Click Connect.

  6. If applicable, select the database of interest, then click OK.

    Comment: If the database you connect to contains a large number of tables you will reach the Select Database Tables dialog first.

  • To reach the Data Tables in Connection dialog when data has already been loaded:

  1. Select Edit > Data Connection Properties....

  2. In the list of Connections, select the connection with the data tables of interest.

  3. Click on the Data Tables tab.

  4. Click Edit....

DataTablesinConnection.png

Option

Description

Available tables in database

Lists the tables from the database for which the schema is saved in the analysis. The tables shown here may be all the existing tables in the database, but they could also be only a selection of tables. See Edit Tables... below. An arrow next to a table indicates that the table has been set up with one or more structural relations to other tables in the database.

Click on the arrow to expand the view and see the structure of the relation. For example, an expanded view might show that the table Sales and Cost is related to the tables Customer Information and Region, that Customer Information is in turn related to the table Buyer, and so on.

You can use the relations that have been set up in the database to join database tables into a single virtual table in Spotfire by adding a table at the top level of a relation.

Note: If the database contains a large number of tables, then this list shows only the tables that have been selected in the Select Database Tables dialog.

[Type to search tables]

Type a search string to limit the number of items in the Available tables in database list. It is possible to use the wildcard character * in the search. See Searching in TIBCO Spotfire for more information.

Edit Tables...

Opens the Select Database Tables dialog where you can specify the schema to save in the analysis. This determines which tables should be available in the Available tables in database list.

Note: The permissions set in the database decide whether or not you are allowed to view the tables in the database when opening the Select Database Tables dialog.

Relations

 

   New...

Opens the New Relation dialog where you can set up a structural relation between two tables.

   Edit...

Opens the Edit Relation dialog where you can edit a structural relation that already exists between the selected table and another table.
Note: Only structural relations defined in Spotfire can be edited, not those set up by the database administrator.

   Delete

Removes the relation.

Note: Only structural relations defined in Spotfire can be deleted, not those set up by the database administrator.

Add >

Adds the tables selected in the Available tables in database list to the Data tables in connection list.

< Remove

Removes the selected tables from the Data tables in connection list and sends them back to the Available tables in database list.

Data connection name

The name of the data connection.

Data tables in connection

Lists the tables that you have added from the Available tables in database list. The tables listed here are those that will become data tables in Spotfire. If a table with structural relations to other tables is selected, then all related tables will be included in the list, so that a joined, virtual table is produced.

If you do not want to include related tables, you can clear the check box next to the table you want to exclude.

Columns in selected data table

Lists the columns that the selected table in the Data tables in connection list contains.

A name in parentheses after a column name indicates that the column is included in a table that is the primary key table in the relation with the selected table. The name in parentheses is the name of the column that was used as the foreign key column when joining the tables together.

Tip: Hover the mouse-pointer over a column to see which data type it contains.

See also:

Details on Select Database Tables

Details on New/Edit Relation

  • To reach the New Relation dialog when opening data:

  1. Select File > Add Data Tables....

  2. Click Add.

  3. Select Connection To > either Microsoft SQL Server, Oracle, or Teradata.

  4. Specify which server to log into, and enter the needed credentials.

  5. Click Connect.

  6. If applicable, select the database of interest, then click OK.

    Response: The Data Tables in Connection dialog is opened.

    Comment: If the selected database contains a large number of tables, the Select Database Tables dialog is opened first. Select the tables you want to be able to work with in Spotfire and click OK to reach the Data Tables in Connection dialog.

  7. Under Relations, click New....

  • To reach the New Relation dialog when data has already been loaded:

  1. Select Edit > Data Connection Properties....

  2. In the list of Connections, select the connection with the data tables of interest.

  3. Click on the Data Tables tab.

  4. Click Edit... to open the Data Tables in Connection dialog.

  5. Under Relations, click New....

  • To reach the Edit Relation dialog:

  1. Open the Data Tables in Connection dialog. See above.

  2. In the Available tables in database list, locate the table with the relation you want to edit, and select it in the list.

    Response: The Edit... button is enabled.

  3. Click Edit....

NewRelation.png

Option

Description

Foreign key table

Lists all tables currently available. Select one of the tables for which you wish to define a relation.

Column

Lists all columns available in the foreign key table. Select the column to be used in the matching of rows.

Primary key table

Lists all tables currently available. Select the table you wish to relate to the previously selected foreign key table.

Column

Lists all columns available in the primary key table. Select the column to be used in the matching of rows.

Second column pair

Select this check box if you want to use a second pair of columns to match the tables.

Third column pair

Select this check box if you want to use a third pair of columns to match the tables.
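A relation is essentially a join condition between the two tables. As a hypothetical sketch (the table and column names are illustrative only), relating the foreign key column Orders.CustomerID to the primary key column Customer.ID corresponds to a join such as:

SELECT *
FROM Orders
JOIN Customer ON Orders.CustomerID = Customer.ID

A second or third column pair simply extends the join condition with additional equalities, combined with AND.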

See also:

Details on Data Tables in Connection

Details on Data Connection Login

  • To reach the Data Connection Login dialog:

  1. Open an analysis which has a data connection that requires you to log into the data source.

DataConnectionLogin.png

Option

Description

Data connection

Shows the name of the data connection.

Data source

Shows information about the data source: the type of data source, and which server and database it uses.

Username

The username for the specified data source.

Password

The password for the specified data source.

Connect

Opens the analysis and connects to the specified data source.

Skip

Opens the analysis without connecting to the specified data source.

Cancel

Cancels opening the analysis.

See also:

Data Overview

Mapping External Data Table
Oracle Data Types

When you are setting up a connection against an external data source, Spotfire has to map the data types in the data source against data types in Spotfire. See below for a list of the different data type mappings applicable when working against an Oracle database.

Oracle Database Type

Spotfire Data Type

Supported

BFILE

Binary

Yes

BINARY_DOUBLE / REAL

Real

Yes

BINARY_FLOAT / FLOAT

SingleReal

Yes

BLOB

Binary

Yes

CHAR

String

Yes

CLOB

Binary

Yes

DATE

Date

Yes

INTERVAL DAY TO SECOND

TimeSpan

Yes

INTERVAL YEAR TO MONTH

LongInteger

No

LONG

Binary

Yes

LONG RAW

Binary

Yes

NCHAR

String

Yes

NCLOB

Binary

Yes

NUMBER (NUMBER(x, s))

 

 

   Boolean (NUMBER(1))

Boolean

Yes

   Integer (NUMBER(p), 2 <= p <= 9)

Integer

Yes

   Long (NUMBER(p), 10 <= p <= 18)

LongInteger

Yes

   Double (NUMBER(x, s), 16 > x > s > 0)

Real

Yes

   Other (NUMBER(x, s))

Real

Yes

NVARCHAR / NVARCHAR2

String

Yes

PLS_INTEGER

 

No

RAW

Binary

Yes

RAW(16)

String

Yes

REF

 

No

REF CURSOR

 

No

ROWID

String

Yes

TIMESTAMP

DateTime

Yes

TIMESTAMP WITH LOCAL TIME ZONE

DateTime

No

TIMESTAMP WITH TIME ZONE

DateTime

No

UROWID

String

Yes

VARCHAR / VARCHAR2

String

Yes

XMLType

String

No
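As an illustration of the NUMBER mappings above, a hypothetical Oracle table could be declared as follows (the table and column names are illustrative only):

CREATE TABLE parts (
  is_active  NUMBER(1),     -- maps to the Spotfire Boolean type
  quantity   NUMBER(5),     -- precision 2 to 9: maps to Integer
  part_id    NUMBER(12),    -- precision 10 to 18: maps to LongInteger
  width_mm   NUMBER(10, 2)  -- maps to Real
);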

See also:

Adding Data Connections

SQL Server Data Types

When you are setting up a connection against an external data source, Spotfire has to map the data types in the data source against data types in Spotfire. See below for a list of the different data type mappings applicable when working against a SQL Server database.

SQL Server Database Type

Spotfire Data Type

Supported

BIGINT

LongInteger

Yes

BINARY

Binary

Yes

IMAGE

Binary

Yes

ROWVERSION

Binary

Yes

TIMESTAMP

Binary

Yes

VARBINARY

Binary

Yes

BIT

Boolean

Yes

CHAR

String

Yes

NCHAR

String

Yes

NTEXT

String

Yes

NVARCHAR

String

Yes

TEXT

String

Yes

UNIQUEIDENTIFIER

String

Yes

VARCHAR

String

Yes

XML

String

No

DATE

Date

Yes

DATETIME

DateTime

Yes

DATETIME2

DateTime

Yes

DATETIMEOFFSET

DateTime

No

SMALLDATETIME

DateTime

Yes

DECIMAL

Currency

Yes

MONEY

Currency

Yes

NUMERIC

Currency

Yes

SMALLMONEY

Currency

Yes

FLOAT

Real

Yes

SQL_VARIANT

Real

No

INT

Integer

Yes

SMALLINT

Integer

Yes

TINYINT

Integer

Yes

REAL

SingleReal

Yes

TIME

Time

Yes

See also:

Adding Data Connections

Teradata Data Types

When you are setting up a connection against an external data source, Spotfire has to map the data types in the data source against data types in Spotfire. See below for a list of the different data type mappings applicable when working against a Teradata database.

Teradata Database Type

Spotfire Data Type

Supported

BIGINT

LongInteger

Yes

BLOB

Binary

Yes

BYTE

Binary

Yes

VARBYTE

Binary

Yes

BYTEINT

Integer

Yes

INTEGER

Integer

Yes

SMALLINT

Integer

Yes

FLOAT

Real

Yes

DOUBLE

Real

Yes

DOUBLE PRECISION

Real

Yes

DECIMAL

Currency

Yes

REAL

SingleReal

Yes

CHAR

String

Yes

CLOB

Binary

Yes

GRAPHIC

String

No

INTERVAL MONTH

String

No

INTERVAL YEAR

String

No

INTERVAL YEAR TO MONTH

String

No

PERIOD DATE

String

No

PERIOD TIME

String

No

PERIOD TIMESTAMP

String

No

PERIOD TIMESTAMP WITH TIME ZONE

String

No

PERIOD TIME WITH TIME ZONE

String

No

VARCHAR

String

Yes

VARGRAPHIC

String

No

TIME WITH TIME ZONE

String

No

DATE

Date

Yes

TIMESTAMP

DateTime

Yes

TIMESTAMP WITH TIMEZONE

DateTime

No

TIME

Time

Yes

INTERVAL DAY

TimeSpan

Yes

INTERVAL DAY TO HOUR

TimeSpan

Yes

INTERVAL DAY TO MINUTE

TimeSpan

Yes

INTERVAL DAY TO SECOND

TimeSpan

Yes

INTERVAL HOUR

TimeSpan

Yes

INTERVAL HOUR TO MINUTE

TimeSpan

Yes

INTERVAL MINUTE

TimeSpan

Yes

INTERVAL MINUTE TO SECOND

TimeSpan

Yes

INTERVAL SECOND

TimeSpan

Yes

See also:

Adding Data Connections

Load Data From Active Spaces

Loading Data from ActiveSpaces

TIBCO ActiveSpaces® is a peer-to-peer distributed in-memory data grid, a form of virtual shared memory that is replicated on distributed devices and applications. If you have installed ActiveSpaces on your computer, you can load data from ActiveSpaces into Spotfire.

For information about installing ActiveSpaces, see the TIBCO ActiveSpaces® Installation manual. To learn more about ActiveSpaces in general, refer to the TIBCO ActiveSpaces® Developer’s Guide, the TIBCO ActiveSpaces® Administration manual, and the TIBCO ActiveSpaces® C Reference manual.

  • To load data from ActiveSpaces:

  1. Select File > Add Data Tables....

    Response: The Add Data Tables dialog is displayed.

  2. Click Add > Other > ActiveSpaces.

    Response: The ActiveSpaces dialog is opened.

  3. In the Discovery URL field, specify the URL to use when connecting to the metaspace.

  4. Specify the name of the Metaspace you want to connect to.

  5. Click on the Discover Spaces button.

    Response: Spotfire connects to the specified metaspace, and the spaces available in that metaspace are displayed in the Available spaces drop-down list.

  6. Select the space of interest from the Available spaces drop-down list.

  7. Optionally, enter a query in the Limit by query field.

  8. Optionally, specify a Timeout value.

  9. Click Connect.

    Response: Data is loaded into Spotfire.

  • To add an on-demand data table from ActiveSpaces:

  1. Load the data table that you want to use as the master data table into Spotfire.

  2. Select File > Open From > ActiveSpaces On-Demand....

    Response: The ActiveSpaces dialog is opened.

  3. In the Discovery URL field, specify the URL to use when connecting to the metaspace.

  4. Specify the name of the Metaspace you want to connect to.

  5. Click on the Discover Spaces button.

    Response: Spotfire connects to the specified metaspace, and the spaces available in that metaspace are displayed in the Available spaces drop-down list.

  6. In the Available spaces drop-down list, select the space containing the detail data that you want to load on demand.

  7. Optionally, enter a query in the Limit by query field.

  8. Optionally, specify a Timeout value.

  9. Click Connect.

    Response: The Data functions – Parameters dialog is opened.

  10. Specify a descriptive Name for the data function.

  11. If you want the on-demand data to be reloaded automatically, select the check box Refresh function automatically.

  12. The Input parameters list shows the available data in the selected space. Select the column that should be used as the identifier in the on-demand data table.

  13. Select whether the Input handler should be Column or Expression.

    Response: Settings for specifying what should control the on-demand data become available on the right-hand side of the dialog.

  14. Select the master Data table that should control the on-demand data.

    Comment: This is the data table that you loaded into Spotfire in step 1.

  15. If you selected Column in step 13, select the Column (in the master data table) that matches the column you selected under Input parameters. If you selected Expression in step 13, click on the Edit... button to open the Edit Expression dialog, and specify the expression to use. The expression will be displayed in the Expression text field after you click OK.

  16. Under Limit by, select Filtered rows if you want filtering in the master data table to control what data is shown in the on-demand data table. Select Marked rows if you want to control what data is shown in the on-demand data table by marking rows in the master data table. You should also select which Marking to use.

    Comment: If you select both Filtered rows and Marked rows, then only rows matching the intersection of the selected filtering and markings will be shown in the on-demand data table.

  17. Click OK.

  18. If you are asked whether you want to replace the data table, click Yes to continue.

    Response: The on-demand data table is loaded into Spotfire, and a table visualization based on the new data table is created.

See also:

Details on ActiveSpaces

Details on Data Functions – Parameters

Details on ActiveSpaces

  • To reach the ActiveSpaces dialog:

  1. Select File > Add Data Tables....

  2. Click Add.

  3. Select Other > ActiveSpaces.

ActiveSpaces.png

Option

Description

Discovery URL

Specify the URL to use when connecting to the metaspace.

Example:
tcp://myActiveSpacesServer:portNumber

Metaspace

Specify the name of the metaspace you want to connect to. A metaspace is a virtual entity that contains spaces, which are containers that store the actual data.

Discover Spaces

Click this button to find the spaces that are available in the specified metaspace.

Available spaces

Lists the spaces that are available in the specified metaspace. A space provides shared virtual storage for data. Select the space you want to load data from.

Limit by query (optional)

Optionally, specify a query to limit the amount of data returned from ActiveSpaces.

Example:
Columnname="value"

Timeout (optional)

Optionally, specify (in seconds) how long Spotfire should wait for data to be returned from ActiveSpaces. This can be useful for large data sets.

See also:

Loading Data from ActiveSpaces

Open Database

Open from Database Overview

By default, Spotfire can connect to several data source types using the following drivers: ODBC, OLE DB, OracleClient and SQLClient. OLE DB UDL files can also be opened directly using File > Open.... Other data sources may also be available depending on your installed data providers.

  • General data connection recommendations:

  1. Preferably, use Information Services and create an information link to retrieve your data.

  2. If you need to connect to Microsoft SQL Server, use the SqlClient Data Provider.

  3. If you need to use Oracle, install the Oracle Data Provider for .NET (ODP.NET) on all machines that need to reach the database. It is faster and more capable than the default data provider for Oracle.

  4. It is not recommended to use the OracleClient Data Provider, because it is at least twice as slow at retrieving data (compared to the other options) and even slower at retrieving metadata.

  5. Use OleDb rather than ODBC. An ODBC data source only refers to a connection string in the local registry (one on each machine), which makes it hard to administer, whereas the OleDb connection string is saved within the analysis file. One advantage of ODBC is that you can change the connection string in a single place on each computer. See the examples below.
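To illustrate the difference, typical connection strings might look as follows (the server, database, and data source names are hypothetical). An OleDb connection string, saved within the analysis file, spells out both the provider and the data source:

Provider=SQLOLEDB;Data Source=MyServer;Initial Catalog=MyDatabase;Integrated Security=SSPI

An ODBC connection string, by contrast, normally just references a data source name defined in the local registry:

DSN=MyDataSource;UID=myUsername;PWD=myPassword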

See also:

Opening Data from a Database

Opening Data from a Database

See Open from Database Overview for some tips on what connections to use.

  • To open data using SQLClient:

  1. Select File > Open From > Database....

    Response: The Open Database dialog is displayed.

  2. Click to select SqlClient Data Provider as Data source type.

  3. Click Configure....

    Response: The Configure Data Source Connection dialog is displayed.

  4. Enter the SQL server name.

  5. Specify whether to Use Windows Authentication or Use SQL Server Authentication.

  6. If you are using SQL Server Authentication, type a Username and Password in the fields provided.

  7. If you want to connect to a remote database, Select or enter a database name.

    Comment: Select a database from the drop-down list or type the name in the field.

  8. If you instead have a local database file you want to connect to, select Attach to database file, and Browse for the local file. Type a logical name to associate with the database file.

  9. Click OK.

    Response: The Specify Tables and Columns dialog is displayed.

  10. Select the Tables, views and columns you wish to import.

    Comment: If desired you can edit the SQL statement directly, or load a previously saved SQL file with a more complex SQL statement (a brief example follows this list).

  11. Click OK.

    Response: Data is loaded into Spotfire.
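For reference, a SQL statement of the kind you might edit or load in step 10 could look like the following (the table and column names are hypothetical):

SELECT CustomerID, Region, Sales
FROM Sales.Customer
WHERE Region = 'West'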

  • To open data using OLE DB:

Note: UDL files can be opened directly using File > Open....

  1. Select File > Open From > Database....

    Response: The Open Database dialog is displayed.

  2. Click to select OleDb Data Provider as Data source type.

  3. Click Configure....

    Response: The Configure Data Source Connection dialog is displayed.

  4. Type or paste a connection string.

    Comment: This should normally be provided by your database administrator.

  5. Click OK.

    Response: The Connection string field in the Open Database dialog is updated with the information entered in the previous step.

  6. Click OK.

    Response: The Specify Tables and Columns dialog is displayed.

  7. Select the Tables, views and columns you wish to import.

    Comment: If desired you can edit the SQL statement directly, or load a previously saved SQL file with a more complex SQL statement.

  8. Click OK.

    Response: Data is loaded into Spotfire.

  • To open data using ODBC:

Note: To learn how to set up a data source, please refer to the database vendor's documentation and the Windows documentation on ODBC. It might be necessary to install ODBC driver software particular to the database used before being able to utilize the ODBC option.

  1. Select File > Open From > Database....

    Response: The Open Database dialog is displayed.

  2. Click to select Odbc Data Provider as Data source type.

  3. Click Configure....

    Response: The Configure Data Source Connection dialog is displayed.

  4. Select a System or user data source name from the drop-down list.

    Comment: The data sources available here are the ones previously defined in Windows ODBC Data Source Administrator, found under Control Panel > Administrative Tools > Data Sources (ODBC). Contact your database administrator if you are missing any information.

  5. If the data source is password protected, type a Username and Password in the fields provided.

  6. Click OK.

    Response: The Connection string field in the Open Database dialog is updated with the information entered in the previous step.

  7. Click OK.

    Response: The Specify Tables and Columns dialog is displayed.

  8. Select the Tables, views and columns you wish to import.

    Comment: If desired you can edit the SQL statement directly, or load a previously saved SQL file with a more complex SQL statement.

  9. Click OK.

    Response: Data is loaded into Spotfire.

  • To open data using OracleClient:

Note: To be able to use the OracleClient data provider, you need to have Oracle Client installed on your computer.

  1. Select File > Open From > Database....

    Response: The Open Database dialog is displayed.

  2. Click to select OracleClient Data Provider as Data source type.

  3. Click Configure....

    Response: The Configure Data Source Connection dialog is displayed.

  4. Type or paste the Oracle server name.

  5. If the data source is password protected, type a Username and Password in the fields provided.

  6. Click OK.

    Response: The Connection string field in the Open Database dialog is updated with the information entered in the previous step.

  7. Click OK.

    Response: The Specify Tables and Columns dialog is displayed.

  8. Select the Tables, views and columns you wish to import.

    Comment: If desired you can edit the SQL statement directly, or load a previously saved SQL file with a more complex SQL statement.

  9. Click OK.

    Response: Data is loaded into Spotfire.

See also:

Opening an Analysis File

Opening a Text File

Opening an Excel File

Opening a SAS File

Opening Files from the Library

Opening an Information Link

Details
Details on Open Database

  • To reach the Open Database dialog:

  1. Select File > Open From > Database....

OpenDatabase.png

Option

Description

Data source type

Lists the available data source types.

Connection string

Shows the connection string specified for the selected data source type. If no connection has been defined yet, you can define one by clicking Configure....

Configure...

Opens the Configure Data Source Connection dialog for the respective data source type: ODBC, OLE DB, OracleClient, SQLClient or a custom provider.

See also:

Opening Data from a Database

Configure Data Source Connection – SQLClient

  • To reach the Configure Data Source Connection dialog:

  1. Select File > Open From > Database....

  2. In the Open Database dialog, click to select the SqlClient Data Provider.

  3. Click Configure....

ConfigureDataSourceConnection1.png

Option

Description

SQL server name

The name of the SQL server where your data is located.

Refresh

Refreshes the list of available SQL servers to include a recently added SQL server name.

Use Windows Authentication

Select this option if you can use your normal Windows username and password to log into the SQL Server.

Use SQL Server Authentication

Select this option if the SQL server requires you to log in using a different username and password.

   Username

The username you wish to use when logging into the SQL server.

   Password

The password for the specified username.

Allow saving credentials

Select this option to allow saving of your credentials.

Select or enter a database name

The name of the database where your data is located.

Attach to a database file

Select this option if you have a local database file you want to connect to.

Browse

Browse for the database file.

Logical name

Specify a logical name to be associated with the database file.

See also:

Opening Data from a Database

Details on Open Database

Configure Data Source Connection – OLE DB

An OLE DB data provider allows native access to data sources, such as a SQL Server or an Oracle database. Using an OLE DB data provider, Spotfire can retrieve data from a wide variety of data sources, not just relational databases. The connection string provided should specify the OLE DB driver that is designed to work with your data.

The following providers are included with the Microsoft data access components:

Microsoft Jet 3.51 OLE DB Provider

OLE DB Provider for Oracle

OLE DB Provider for SQL Server

OLE DB Provider for ODBC Drivers

Note: For more information about OLE DB providers, see the OLE DB Programmer's Reference. This documentation is available in the Microsoft Data Access SDK. For more information about advanced initialization properties, see the documentation provided with your OLE DB provider.

  • To reach the Configure Data Source Connection dialog:

  1. Select File > Open From > Database....

  2. In the Open Database dialog, click to select the OleDb Data Provider.

  3. Click Configure....

ConfigureDataSourceConnection2.png

Option

Description

Connection string

Should provide information about which OLE DB driver to use, which data source to connect to, etc. The connection string would normally be acquired from your database administrator.

Allow saving credentials

Select this option to allow saving of your credentials.

See also:

Opening Data from a Database

Details on Open Database

Configure Data Source Connection – ODBC

ODBC (Open Database Connectivity) allows you to import data from virtually any commercially available database.

To learn how to set up an ODBC data source, please refer to the database vendor's documentation and the Windows documentation on ODBC. It might be necessary to install ODBC driver software particular to the database used before being able to utilize the ODBC option.

  • To reach the Configure Data Source Connection dialog:

  1. Select File > Open From > Database....

  2. In the Open Database dialog, click to select the Odbc Data Provider.

  3. Click Configure....

ConfigureDataSourceConnection3.png

Option

Description

System or user data source

Select this option to connect to a system or user data source.

The data sources available here are the ones previously defined in Windows ODBC Data Source Administrator, found under Control Panel > Administrative Tools > Data Sources (ODBC). Contact your database administrator if you are missing any information.

Refresh

Refreshes the list of defined data sources to include a recently added system or user data source name.

File data source

Select this option to connect to a file data source.

Browse...

Opens a dialog where you can browse to locate the DSN file of interest.

Username

The username you wish to use when logging into the selected data source.

Password

The password for the specified username.

Allow saving credentials

Select this option to allow saving of your credentials.

See also:

Opening Data from a Database

Details on Open Database

Configure Data Source Connection – OracleClient

  • To reach the Configure Data Source Connection dialog:

  1. Select File > Open From > Database....

  2. In the Open Database dialog, click to select the OracleClient Data Provider.

  3. Click Configure....

ConfigureDataSourceConnection4.png

Option

Description

Oracle server name

The net service name for the Oracle instance where your data is located. The net service name can be found in the local tnsnames.ora file or on an Oracle Names server, or it may depend on the configuration of the Oracle Native Naming Adapters on your system.
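For reference, a net service name entry in a tnsnames.ora file typically has the following form (the host and service names are hypothetical); the entry name, MYDB below, is what you would enter as the Oracle server name:

MYDB =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = dbhost.example.com)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = mydb.example.com))
  )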

Username

The username you wish to use when logging into the Oracle server.

Password

The password for the specified username.

Allow saving credentials

Select this option to allow saving of your credentials.

See also:

Opening Data from a Database

Details on Open Database

Configure Data Source Connection – Custom .NET Provider

You can also open other types of databases if other .NET providers are installed on your system. How these connections are configured is highly dependent on the provider, and the examples shown here might not look like the providers installed on your system.

  • To reach the Configure Data Source Connection dialog:

  1. Select File > Open From > Database....

  2. In the Open Database dialog, click to select the Data Provider of interest.

  3. Click Configure....

ConfigureDataSourceConnection5.png

Option

Description

Misc

Lists properties for your connection. (What properties are visible depends on the provider you are using.) Edit the properties by typing in the right-hand column.

Connection string

Displays the connection string that is the result of what you have entered in the various fields above.

Allow saving credentials

Select this option to allow saving of your credentials.

Note that which properties are shown depends on your connection provider. Some providers might, for instance, have no visible properties at all, and instead present a login window when you open the data source:

DatabaseLogin.png

See also:

Opening Data from a Database

Details on Open Database

Replace Data

Replacing Data

In Spotfire it is possible to reuse the visualizations, calculations and setup from a previously created document with new data, as long as the new data is reasonably similar to the old data. This is useful when creating an analysis for, say, sales figures for a certain month. You create a full analysis using the data from January, set up visualizations, calculations, etc., and save the file. When the sales figures for February are available, you can open the same file again, and replace the data from January with the data from February, and the visualizations will be updated. This of course requires that the data table for February is structured in the same way as for January, using the same column names and format.

  • To replace with new data table:

  1. Select File > Replace Data Table....

    Response: The Replace Data Table – Select Data Table dialog is displayed.

  2. Select the data table you wish to replace.

  3. Select to replace with a New data table.

  4. Click OK.

    Response: The Replace Data Table – Select Source dialog is opened.

  5. Select source type for the new data table.

  6. If the selected source type is anything other than the clipboard, click Browse... to specify the source.

    Response: Depending on which option you have selected, you are provided with some means to choose what data to open. See Opening a Text File, Opening an Excel File, Opening a SAS File, Opening an Information Link, or Opening Data from a Database for more information about each alternative.

  7. Apply transformations (optional). See Transforming Data to learn more.

  8. Click OK.

    Response: If the new data table contains columns that match the columns in the old data table completely, the analysis is immediately updated to use the new data. However, if some columns used in the analysis could not be replaced automatically by columns in the new data table, you will be presented with the Replace Data – Match Columns dialog. Here, you can match columns from the current data table with columns from the new data table. If there are missing columns remaining after you have matched columns, the Replace Data – Missing Columns dialog is shown. The dialog will state all mismatches that still occur. Make a note of these and click Close. The data is replaced, but you may need to make some manual fixes to make sure all visualizations are displayed as you intended.

  9. If necessary, update any visualizations, calculations, or hierarchies that were broken when the data was replaced.

  • To replace with new data table loaded on demand:

  1. Select File > Replace Data Table....

    Response: The Replace Data Table – Select Data Table dialog is displayed.

  2. Select the data table you wish to replace.

  3. Select to replace with a New data table loaded on demand.

  4. Click OK.

    Response: The Select Information Link dialog is opened.

  5. Locate and select the information link of interest.

  6. Click OK.

    Response: The Replace Data Table – On Demand Configuration dialog is opened.

  7. Specify how the on-demand data should be loaded. See Loading Data on Demand to learn more.

  8. Click OK.

    Response: If the new data table contains columns that match the columns in the old data table completely, the analysis is immediately updated to use the new data. However, if some columns used in the analysis could not be replaced automatically by columns in the new data table, you will be presented with the Replace Data – Match Columns dialog. Here, you can match columns from the current data table with columns from the new data table. If there are missing columns remaining after you have matched columns, the Replace Data – Missing Columns dialog is shown. The dialog will state all mismatches that still occur. Make a note of these and click Close. The data is replaced, but you may need to make some manual fixes to make sure all visualizations are displayed as you intended.

  9. If necessary, update any visualizations, calculations, or hierarchies that were broken when the data was replaced.

  • To replace with data table from a new data connection:

This option is used when you want to create an entirely new data connection and use one of the data tables within it instead of an old data table.

Note: Cube data cannot be replaced.

  1. Select File > Replace Data Table....

    Response: The Replace Data Table – Select Data Table dialog is displayed.

  2. Select the data table you wish to replace.

  3. Select to replace with Data table from data connection.

  4. Click OK.

    Response: The Replace Data Table – Select External Source dialog is opened.

  5. Click the radio button A new table in a new data connection.

  6. Select the type of data source you want to connect to in the drop-down list.

  7. Click OK.

    Response: The connection dialog for the selected data source type is opened.

  8. Set up the data connection and select the table to replace the old data table with.

    Comment: See To add a connection to Microsoft SQL Server, To add a connection to Oracle, or To add a connection to Teradata for detailed descriptions of how to load data from the different data sources.

    Response: If the new data table contains columns that match the columns in the old data table completely, the analysis is immediately updated to use the new data. However, if some columns used in the analysis could not be replaced automatically by columns in the new data table, you will be presented with the Replace Data – Match Columns dialog. Here, you can match columns from the current data table with columns from the new data table. If there are missing columns remaining after you have matched columns, the Replace Data – Missing Columns dialog is shown. The dialog will state all mismatches that still occur. Make a note of these and click Close. The data is replaced, but you may need to make some manual fixes to make sure all visualizations are displayed as you intended.

  9. If necessary, update any visualizations, calculations, or hierarchies that were broken when the data was replaced.

  • To replace with a new data table in an existing data connection:

This option is used when you want to add a new data table to an existing data connection and use the new data table instead of an old data table.

Note: Cube data cannot be replaced.

  1. Select File > Replace Data Table....

    Response: The Replace Data Table – Select Data Table dialog is displayed.

  2. Select the data table you wish to replace.

  3. Select to replace with Data table from data connection.

  4. Click OK.

    Response: The Replace Data Table – Select External Source dialog is opened.

  5. Click the radio button A new table in an existing connection.

  6. In the drop-down list, select the connection to which you want to add a new table.

  7. Click OK.

    Response: The Data Tables in Connection dialog is opened.

  8. In the Available tables in database list, select the new table to use, and click Add >.

    Tip: See To add relations or To edit an existing relation if you want to learn more about adding or editing relations between tables.

    Comment: If the database contains a large number of tables, not all tables may have been added to the connection when it was initially created. If the table you want to replace with is not shown in the Available tables in database list, click Edit Tables... to open the Select Database Tables dialog where you can add the table of interest. However, you need sufficient permissions in the database to be able to do this.

  9. Click OK.

    Response: If the new data table contains columns that match the columns in the old data table completely, the analysis is immediately updated to use the new data. However, if some columns used in the analysis could not be replaced automatically by columns in the new data table, you will be presented with the Replace Data – Match Columns dialog. Here, you can match columns from the current data table with columns from the new data table. If there are missing columns remaining after you have matched columns, the Replace Data – Missing Columns dialog is shown. The dialog will state all mismatches that still occur. Make a note of these and click Close. The data is replaced, but you may need to make some manual fixes to make sure all visualizations are displayed as you intended. The new table has also been added to the selected connection. To view the current setup of the data connection, open the Data Connection Properties dialog.

  10. If necessary, update any visualizations, calculations, or hierarchies that were broken when the data was replaced.

  • To replace with an existing external table in the analysis:

Note: Cube data cannot be replaced.

  1. Select File > Replace Data Table....

    Response: The Replace Data Table – Select Data Table dialog is displayed.

  2. Select the data table you wish to replace.

  3. Select to replace with Data table from data connection.

  4. Click OK.

    Response: The Replace Data Table – Select External Source dialog is opened.

  5. Click the radio button An existing external table in my analysis.

  6. In the drop-down list, select the table you want to replace the current table with.

  7. Click OK.

    Response: If the new data table contains columns that match the columns in the old data table completely, the analysis is immediately updated to use the new data. However, if some columns used in the analysis could not be replaced automatically by columns in the new data table, you will be presented with the Replace Data – Match Columns dialog. Here, you can match columns from the current data table with columns from the new data table. If there are missing columns remaining after you have matched columns, the Replace Data – Missing Columns dialog is shown. The dialog will state all mismatches that still occur. Make a note of these and click Close. The data is replaced, but you may need to make some manual fixes to make sure all visualizations are displayed as you intended.

  8. If necessary, update any visualizations, calculations, or hierarchies that were broken when the data was replaced.

See also:

Transforming Data

Details
Details on Replace Data Table – Select Data Table

  • To reach the Replace Data Table – Select Data Table dialog:

  1. Select File > Replace Data Table....

ReplaceDataTable-SelectDataTable.png

Option

Description

Select data table to replace

Specifies which data table to replace.

Note: Data tables containing cube data will not be listed since cube data cannot be replaced.

Replace with

 

   New data table

Allows you to select a file, an information link, a database, the clipboard, or an existing data table in your analysis as the source for your new data table.

   New data table loaded on demand

Allows you to replace your data table with an information link which is loaded on demand. See Loading Data on Demand for more information.

   Data table from data connection

Allows you to replace the selected data table with a table in a new or an existing data connection.

See also:

Details on Replace Data Table - Select Source

Details on Replace Data – Select External Source

Replacing Data

Details on Replace Data Table – Select Source

  • To reach the Replace Data Table – Select Source dialog:

  1. Select File > Replace Data Table....

    Response: The Replace Data Table – Select Data Table dialog is opened.

  2. Select New data table.

  3. Click OK.

ReplaceDataTable-SelectSource.png

Option

Description

Select source type

 

  File

Allows you to add a data table from a file.

  Information Link

Allows you to add a data table from an information link.

  Database

Allows you to add a data table from any supported database.

  Clipboard

Allows you to add a data table from the clipboard.

  Existing data table in my analysis

Allows you to add a data table from the current analysis.

Location

Shows the path and file name of the selected file.

Browse...

Opens a dialog where you can select which file, information link, database, etc., to open.

Show transformations

Expands the dialog and allows you to apply transformations on the data table you want to add. For more information, see the Show transformations dialog.

See also:

Details on Replace Data Table - Select Data Table

Replacing Data

Details on Replace Data – Select External Source

  • To reach the Replace Data Table – Select External Source dialog:

  1. Select File > Replace Data Table....

    Response: The Replace Data Table – Select Data Table dialog is opened.

  2. Select Data table from data connection.

  3. Click OK.

ReplaceDataTable-SelectExternalSource.png

Option

Description

Replace with

 

   A new table in a new data connection

Allows you to replace the selected data table with a table from a new connection. In the drop-down list, select the type of data source you want to add a connection to. Clicking OK will open the connection dialog for the selected data source type, where you can specify which server and database to connect to.

   A new table in an existing connection

Allows you to replace the selected data table with a new data table from a connection that already exists in the analysis. Select the connection of interest from the drop-down list. Clicking OK will open the Data Tables in Connection dialog, where you can add a new table to the connection and use it to replace data with.

   An existing external table in my analysis

Allows you to replace the selected data table with an already existing data table in a connection in the analysis. Select the data table of interest from the drop-down list.

Note: Data tables or connections containing cube data will not be listed in the drop-down lists since cube data cannot be replaced.

See also:

Details on Replace Data Table – Select Data Table

Replacing Data

Details on Replace Data – Match Columns

This dialog is displayed when data has been replaced in your current document, but not all columns used in the document could be replaced automatically by columns in the new data table. It allows you to match columns from the current data table with columns from the new data table.

ReplaceDataTable-SelectExternalSource.png

Option

Description

From current data

Lists the columns in the current data table that could not be replaced automatically. Click here to select the column you wish to match with a column from the new data, then click Match Selected.

From new data

Lists the columns in the new data table that have not been matched to columns from the current data table. Click here to select the column you wish to match with a column from the current data, then click Match Selected.

Match Selected

Matches the selected columns from the current data table and the new data table.

Matched columns

Lists all column pairs that have been selected for matching.

Unmatch All

Unmatches all the matched columns, including the automatically matched columns.

Unmatch Selected

Unmatches selected columns from the Matched columns list.

Cancel

Cancels the replace data table operation.

See also:

Replacing Data

Details on Replace Data – Missing Columns

This dialog is displayed when you have replaced or reloaded the data in your current document, but some columns are missing in the new data table. The data is still replaced, but some visualizations and hierarchies in the document may need to be manually adjusted.

ReplaceData-MissingColumns.png

Option

Description

Missing columns

Lists columns that were available in the old data table, but are missing in the new data table.

Invalid calculated columns (manual updates required)

Lists columns that were calculated using a column that was available in the old data table, but is missing in the new data table. This means that the calculation of the column fails.

You can edit the calculated column in Spotfire by selecting Edit > Column Properties; then click to select the column in the Columns and hierarchies list and click Edit... in the lower part of the General tab. Or, you can simply remove the erroneous column from the document (Delete in Column Properties).

Invalid hierarchies (manual updates required)

Lists any hierarchies that were created using a column that was available in the old data table, but is missing in the new data table. This means that the hierarchy can no longer be used, until it is manually updated to use a different column. Hierarchies can be edited by right-clicking on the filter and selecting Edit Hierarchy....

See also:

Replacing Data

Transform Data

Transforming Data

Sometimes the data you want to analyze in Spotfire is not in the most appropriate format and may contain errors. It may therefore be necessary to perform modifications on the data before importing it in order to get the best results from the analysis.

There are several methods that can be used to transform your data before importing it into the analysis file.

Calculate and replace column allows you to replace a column in the data table with a calculated column.

Calculate new column allows you to add a calculated column to the data table.

Change column names allows you to change the name of one or more of the columns in the data table.

Change data types allows you to change the data type for one or more of the columns in the data table.

Data function allows you to use a previously registered data function as a transformation step.

Exclude columns allows you to exclude one or more of the columns from the data table.

Normalization allows you to normalize the data prior to addition of the data table.

Pivot allows you to pivot the data - to change the data table from a tall/skinny to a short/wide format.

Unpivot allows you to unpivot the data - to change the data table from a short/wide to a tall/skinny format.

Note: Additional transformations may be available to you if these have been added locally.

Note: If you are adding data tables from an external data connection, then the external data source will determine whether or not any transformation methods will be available.

  • To transform data:

  1. Select File > Add Data Tables... or File > Add On-Demand Data Table....
    If you already have an analysis open, you can also choose:
    File > Replace Data Table... 
    Insert > Columns... 
    Insert > Rows...

  2. Select source type.

  3. Browse for the file if needed.

  4. Click Show transformations.

  5. Add the transformations you want to perform on your data.

  6. Click OK to import the transformed data to the analysis.

See also:

Details on Show Transformation

Details on Add Data Tables

Details on Add On-Demand Data Table

Details on Replace Data Table - Select Source

Details on Insert Columns - Select Source

Details on Insert Rows - Select Source

Pivoting Data

A pivot transformation is one way to transform data from a tall/skinny format to a short/wide format. The data is distributed into columns, usually aggregating the values. This means that multiple values from the original data end up in the same place in the new data table.

Example:

The example below shows a pivot transformation on a very simple data set. In the original data table, there are three columns and four rows. Each row contains one of two department stores, A or B; a product, TV or DVD; and a numerical value for the number of sales. The data table might look like this if a new row is added after each day.

However, perhaps we are more interested in knowing how many units of each product are sold in each store on an average day.

After pivoting the data table, using the aggregation method "average" on the numerical values for the two products, we get a new data table. This data table has just two rows, one for each store. The layout of the table has gone from tall/skinny to short/wide. Had there been more products in the data table, the difference would be even more pronounced. In the new data table, it is easy to see the number of products being sold in each store on an average day. The first row tells us that on any given day in department store A, 3 TVs are sold, but no DVDs. In department store B, however, an average day might see 6 TVs and 8 DVDs sold.

PivotingData1.png
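The same tall-to-wide reshaping can be sketched in SQL for readers who think in queries. This is only an illustration with hypothetical table and column names, not the mechanism Spotfire uses internally:

SELECT Store,
       -- Each CASE expression picks out the sales values for one product;
       -- AVG then aggregates them per store, ignoring the NULLs.
       AVG(CASE WHEN Product = 'TV' THEN Sales END) AS AvgTVSales,
       AVG(CASE WHEN Product = 'DVD' THEN Sales END) AS AvgDVDSales
FROM StoreSales
GROUP BY Store;

Each row of the result corresponds to one store, with one column per product, just like the pivoted table described above.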

Example:

In this example, we have a larger data set, with data from an imagined company that produces small machinery parts. These parts have measurements for width, height and thickness. The parts have three different holes in them. There are also measurements for the diameter of these holes, and a measurement for a possible small offset from where they are supposed to be.

In the original data table, which contains measurements for samples of all parts, we can see which of the company's three factories—A, B or C—have produced the parts, and we can see on which date the parts were shipped, which batch they belong to as well as all the measurements for the parts.

PivotingData2.png

 

What we are really interested in knowing is how good the three different factories are at producing these parts. If we deliver the parts to different customers who have different demands for the accuracy of the holes in the part, we want to know which factory should supply which customer with parts. We then pivot the data to get one row for each factory, and to get minimum, maximum and average values for the different measurements of the parts.

PivotingData3.png

After importing the data into Spotfire, we can start analyzing it. By filtering the data, we can set the minimum and maximum allowed measurements for the diameter and offset of the holes in the part.

PivotingData4.png

In the analysis, we can see that if the most important criterion is that the diameter is not too small, A is the factory that should supply parts to the most demanding customers.

See also:

Details on Pivot Data

Transforming Data

Unpivoting Data

An unpivot transformation is one way to transform data from a short/wide to a tall/skinny format. When the data types of source columns differ, the varying data is converted to a common data type so the source data can be part of one single column in the new data set.

Example:

The example below shows an unpivot transformation on a very simple data set. In the original data table, there are three columns and four rows. Each row contains a city, a morning temperature, and an evening temperature.

While this is certainly useful, we want to determine the average temperature of all the cities for all times of day.

After unpivoting the data, we have one row for each measurement and can easily get an average value for the Temperature column in the analysis after the data has been imported.

Note: Observe that the morning temperatures were given as integers and the evening temperatures as real numbers. In the unpivoted data table, these values must have the same data type to be used in the same column. Integers are therefore changed to real numbers (changing the real number temperatures to integers, while still somewhat compatible in this case, would have resulted in a loss of information).

UnpivotData1.png
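As a plain-text sketch of the same idea (the city names and temperatures are hypothetical):

City    Morning  Evening          City    Time of Day  Temperature
Lund    10       14.5      -->    Lund    Morning      10.0
Oslo    12       16.2             Lund    Evening      14.5
                                  Oslo    Morning      12.0
                                  Oslo    Evening      16.2

Note how the integer morning temperatures appear as real numbers (10.0, 12.0) in the combined Temperature column, as described in the note above.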

Example:

In this example, we have a larger data set containing data on the sales of entrance tickets for a museum. The original data table shows data for each of the five ticket counters (desks) and the number of tickets they have sold to adults, children and senior citizens each day.

UnpivotData2.png

 

However, at this point, rather than needing to know which counter sold how many tickets to whom, we are more interested in analyzing our ticket sales in general. Therefore, we unpivot the data, combining the Desk columns into one, which we name "Desk", and merging all ticket sales into another column, which we name "Tickets".

UnpivotData3.png

After importing the data into Spotfire, we can start analyzing it.

UnpivotData4.png

Looking at the analysis, we can now see that Thursdays are the days when we sell the fewest tickets, and that the second and third quarters are the times of year when the museum sells the fewest tickets.

See also:

Details on Unpivot Data

Transforming Data

Normalizing Data
Normalizing Columns

A number of normalization methods can be written as expressions or used as a transformation step when adding data tables. See the links at the end of this topic for a description of the theory behind each method.

In the expression examples below, the following values are used:

Columns: E and A, where E is the column to normalize and A is a baseline column.

Percentile value: P

Normalize by mean

[E] / Avg([E])

[E] * Avg([A]) / Avg([E])

Normalize by trimmed mean

[E] / TrimmedMean([E], P)

[E] * TrimmedMean([A], P) / TrimmedMean([E], P)

Normalize by percentile

[E] / Percentile([E], P)

[E] * Percentile([A], P) / Percentile([E], P)

Scale between 0 and 1

If( Max([E]) = Min([E]), 0.5, ([E] - Min([E])) / (Max([E]) - Min([E])) )

Subtract the mean

[E] - Avg([E])

Subtract the median

[E] - Median([E])

Normalization by signed ratio

If( [E] > [A], [E] / [A], -[A] / [E])

Normalization by log ratio

Log10( [E] / [A] )

Normalization by log ratio in standard deviation units

Log10( [E] / [A] ) / StdDev(Log10( [E] / [A] ))

Z-score calculation

([E] - Avg([E])) / StdDev([E])

Normalize by standard deviation

[E] / StdDev([E])

See also:

Normalization by Mean

Normalization by Trimmed Mean

Normalization by Percentile

Scale between 0 and 1

Subtract the Mean

Subtract the Median

Normalization by Signed Ratio

Normalization by Log Ratio

Normalization by Log Ratio in Standard Deviation Units

Z-score Calculation

Normalization by Standard Deviation

Details
Normalization by Mean 

Assume that there are n rows with seven variables (columns), A, B, C, D, E, F and G, in the data. We use variable E as an example in the calculations below. The remaining variables in the rows are normalized in the same way.

Without rescaling (Baseline variable = None)

The normalized value of ei for variable E in the ith row is calculated as:

images/n_mean_ekv_without.gif

where

p = the number of rows used to calculate the mean

Rescaling by a baseline variable

If we select variable A as baseline variable, the normalized value of ei for variable E in the ith row is calculated as:

images/n_mean_ekv_with.gif

where

p = the number of rows used to calculate the mean

aj = the value for variable A in the jth row
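The referenced equation images are not reproduced in this text version. As a LaTeX sketch, consistent with the expressions [E] / Avg([E]) and [E] * Avg([A]) / Avg([E]) listed under Normalizing Columns:

$$\mathrm{Normalized}(e_i) = \frac{e_i}{\frac{1}{p}\sum_{j=1}^{p} e_j} \qquad \text{and, with baseline } A: \qquad \mathrm{Normalized}(e_i) = e_i \cdot \frac{\frac{1}{p}\sum_{j=1}^{p} a_j}{\frac{1}{p}\sum_{j=1}^{p} e_j}$$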

See also:

Normalizing Columns

Average

Normalization by Trimmed Mean 

The trimmed mean for a variable is based on all values except a certain percentage of the lowest and highest values for that variable. This removes the effect of outliers during the normalization. If the trim value is set to 10% then the highest 5% of the values and the lowest 5% of the values are excluded from the calculated mean.

Assume that there are n rows with seven variables, A, B, C, D, E, F and G, in the data. We use variable E as an example in the calculations below. The remaining variables in the rows are normalized in the same way.

Without rescaling (Baseline variable = None)

The normalized value of ei for variable E in the ith row is calculated as:

Normalized3.png

where

T = the set of rows left after trimming

p = the number of rows in T.

Rescaling by a baseline variable

If we select variable A as baseline variable, the normalized value of ei for variable E in the ith row is calculated as:

images/n_trimmed_with.gif

where

T = the set of rows left after trimming

p = the number of rows in T

aj = the value for variable A in the jth row.
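A LaTeX sketch of both variants (assuming, for simplicity, the same trimmed set T for both columns):

$$\mathrm{Normalized}(e_i) = \frac{e_i}{\frac{1}{p}\sum_{j \in T} e_j} \qquad \text{and} \qquad \mathrm{Normalized}(e_i) = e_i \cdot \frac{\frac{1}{p}\sum_{j \in T} a_j}{\frac{1}{p}\sum_{j \in T} e_j}$$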

See also:

Normalizing Columns

Normalization by Percentile 

Assume that there are n rows with seven variables, A, B, C, D, E, F and G, in the data. We use variable E as an example in the calculations below. The remaining variables in the rows are normalized in the same way.

Without rescaling (Baseline variable = None)

The normalized value of ei for variable E in the ith row is calculated as:

images/n_percentile_without.gif

where

qE,P% = the value that P% of the values for variable E—among the selected rows—are less than or equal to

P = the percentile value that you specify when you normalize the data.

Rescaling by a baseline variable

If we select variable A as baseline variable, the normalized value of ei for variable E in the ith row is calculated as:

images/n_percentile_with.gif

where

qA,P% = the value that P% of the values for variable A—among the selected rows—are less than or equal to

qE,P% = the value that P% of the values for variable E—among the selected rows—are less than or equal to

P = the percentile value that you specify when you normalize the data.
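In LaTeX form, matching the expressions [E] / Percentile([E], P) and [E] * Percentile([A], P) / Percentile([E], P):

$$\mathrm{Normalized}(e_i) = \frac{e_i}{q_{E,P\%}} \qquad \text{and} \qquad \mathrm{Normalized}(e_i) = e_i \cdot \frac{q_{A,P\%}}{q_{E,P\%}}$$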

See also:

Normalizing Columns

Normalization by Scaling Between 0 and 1 

Assume that there are n rows with seven variables, A, B, C, D, E, F and G, in the data. We use variable E as an example in the calculations below. The remaining variables in the rows are normalized in the same way.

The normalized value of ei for variable E in the ith row is calculated as:

images/n_scale_between_0_1_ekv.gif

where

Emin = the minimum value for variable E

Emax = the maximum value for variable E

If Emax is equal to Emin then Normalized (ei) is set to 0.5.
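A LaTeX sketch of the formula, matching the expression ([E] - Min([E])) / (Max([E]) - Min([E])):

$$\mathrm{Normalized}(e_i) = \frac{e_i - E_{min}}{E_{max} - E_{min}}$$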

See also:

Normalizing Columns

Normalization by Subtracting the Mean 

Assume that there are n rows with seven variables, A, B, C, D, E, F and G, in the data. We use variable E as an example in the calculations below. The remaining variables in the rows are normalized in the same way.

The normalized value of ei for variable E in the ith row is calculated as:

images/n_subtract_mean_ekv.gif

where

n = the total number of rows in the data.
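In LaTeX form:

$$\mathrm{Normalized}(e_i) = e_i - \frac{1}{n}\sum_{j=1}^{n} e_j$$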

See also:

Normalizing Columns

Average

Normalization by Subtracting the Median 

Assume that there are n rows with seven variables, A, B, C, D, E, F and G, in the data. We use variable E as an example in the calculations below. The remaining variables in the rows are normalized in the same way.

The normalized value of ei for variable E in the ith row is calculated as:

images/n_subtract_median_ekv.gif

where

Emedian = the median of variable E.

The median of a set of values is the middle value when the values are sorted from lowest to highest. If the number of values is even, the median is the average of the two middle values.
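In LaTeX form:

$$\mathrm{Normalized}(e_i) = e_i - E_{median}$$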

See also:

Normalizing Columns

Median

Normalization by Signed Ratio 

Assume that there are n rows with seven variables, A, B, C, D, E, F and G, in the data. We use variable E as an example in the calculations below. All target variables are normalized in the same way.

If we select A as baseline variable, the normalized value of ei for variable E in the ith row is calculated as:

images/n_signed_ratio_ekv_1.gif

 

where

ai = the value for variable A in the ith row.
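A LaTeX sketch of the case analysis, matching the expression If( [E] > [A], [E] / [A], -[A] / [E]):

$$\mathrm{Normalized}(e_i) = \begin{cases} e_i / a_i & \text{if } e_i > a_i \\ -a_i / e_i & \text{otherwise} \end{cases}$$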

See also:

Normalizing Columns

Normalization by Log Ratio 

Assume that there are n rows with seven variables, A, B, C, D, E, F and G, in the data. We use variable E as an example in the calculations below. All target variables are normalized in the same way.

If we select A as baseline variable the normalized value of ei for variable E in the ith row is calculated as:

images/n_log_ratio_ekv.gif

where

ai = the value for variable A in the ith row.
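In LaTeX form:

$$\mathrm{Normalized}(e_i) = \log_{10}\left(\frac{e_i}{a_i}\right)$$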

See also:

Normalizing Columns

Normalization by Log Ratio in Standard Deviation Units 

Assume that there are n rows with seven variables, A, B, C, D, E, F and G, in the data. We use variable E as an example in the calculations below. All target variables are normalized in the same way.

If we select A as baseline variable the normalized value of ei for variable E in the ith row is calculated as:

images/n_log_ratio_std_dev_ekv.gif

where

std = the standard deviation

ai = the value for variable A in the ith row.
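A LaTeX sketch, with the standard deviation taken over the n rows:

$$\mathrm{Normalized}(e_i) = \frac{\log_{10}(e_i / a_i)}{\mathrm{std}\left(\log_{10}(E / A)\right)}$$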

See also:

Normalizing Columns

Normalization by Z-score 

Assume that there are five rows with the IDs A, B, C, D and E, each row containing n different variables (columns). We use row E as an example in the calculations below. The remaining rows are normalized in the same way.

The normalized value of ei for row E in the ith column is calculated as:

images/n_z_score_ekv_1.gif

If all values for row E are identical—so the standard deviation of E (std(E)) is equal to zero—then all values for row E are set to zero.
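A LaTeX sketch, with the mean and standard deviation taken across the n columns of row E:

$$\mathrm{Normalized}(e_i) = \frac{e_i - \overline{E}}{\mathrm{std}(E)}, \qquad \overline{E} = \frac{1}{n}\sum_{j=1}^{n} e_j$$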

See also:

Normalizing Columns

Normalization by Standard Deviation

Assume that there are five rows with the IDs A, B, C, D and E, each row containing n different variables (columns). We use row E as an example in the calculations below. The remaining rows are normalized in the same way.

The normalized value of ei for row E in the ith column is calculated as:

Normalized14.png

If all values for row E are identical—so the standard deviation of E (std(E)) is equal to zero—then all values for row E are set to zero.
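In LaTeX form:

$$\mathrm{Normalized}(e_i) = \frac{e_i}{\mathrm{std}(E)}$$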

See also:

Normalizing Columns

Standard Deviation

Details
Details on Show Transformations

Transformations can be applied in dialogs that add data, such as the Add Data Tables dialog, the Add On-Demand Data Table dialog, or the Insert Columns and Insert Rows tools. Click Show transformations to display the controls described below.

SalesData.png

Option

Description

Hide transformations

Hides the transformation part of the dialog. Can be shown again by clicking the "Show transformations" button displayed when the lower part of the dialog is hidden.

Transformations

Lists the available transformations. Select one by clicking on it in the list.

Add...

Opens a new dialog for the chosen transformation where you specify all settings needed for the transformation before it is performed. When the transformation has been completed, it will appear in the list on the left-hand side.

Preview

Opens a new dialog with a preview of the data with the added transformations.

Edit...

Opens a new dialog that allows you to edit the settings for the latest transformation added.

Note: If an earlier transformation is marked in the list, this button is grayed out.

Remove

Removes the latest transformation added.

Note: If an earlier transformation is marked in the list, this button is grayed out.

See also:

Transforming Data

Details on Preview

Preview.png

Option

Description

Preview after step

Lists the transformations you have added so far. Select one of these steps in the transformation chain, or the original data table, to see in the preview how your data has changed with each transformation.

Data Table tab

Shows the Data Table preview tab. This tab shows how the data table looks after each transformation.

Data Table Properties tab

Shows the Data Table Properties preview tab. This tab shows a list of all data table properties that have been defined for the data table after each transformation, and the values of these data table properties. There are two columns in the list, one with all properties and one with the corresponding values.

Column Properties tab

Shows the Column Properties preview tab. This tab shows a list of the values of all column properties for all columns in the data table after each transformation. In the list, there is a column for each property, and all columns in the data table are represented by a row.

See also:

Transforming Data

Details on Data Table Properties - General

Details on Column Properties - General

Details on Pivot Data

Pivot Data can be used to transform data from a tall/skinny to a short/wide format when adding or replacing data tables. Tip: You can replace a data table with a transformed version of itself.

  • To reach the Pivot Data dialog:

  1. Select File > Add Data Tables... or File > Add On-Demand Data Table... and add the data of interest.
    If you already have an analysis open, you can also choose:
    File > Replace Data Table... 
    Insert > Columns...
    or Insert > Rows...

  2. Click Show transformations.

  3. Select Pivot from the drop-down list and click Add....

PivotData.png

Option

Description

Row identifiers

Each unique value in the chosen identity column or hierarchy produces a row in the generated table.

If you choose more than one column, the new table will have a separate row for each unique combination of values in the chosen columns.

Column titles (%C)

Each unique value in the chosen category column or hierarchy produces a new column for each aggregation method in the generated data table.

Selecting more than one column means that the new data table will have a separate column for each unique combination of values in the chosen columns.

The column titles are used in the column naming pattern, see below.

Values (%V) and aggregation methods (%M)

The column from which the data values are calculated. The values in the generated data table are computed according to the method selected under Aggregation in the column selector menu (for example, Average). A list of aggregation methods can be found on the Statistical Functions page.

Note: In addition to the methods found on the Statistical Functions page, the method Count() can be used. It includes all values, including empty values, and therefore returns the total number of rows in the column.

Note: If you are certain that each combination of Identity and Category has a unique value, you can select Aggregation: None, which will not apply any aggregation to the data. However, if you select None and a combination of Identity and Category is not unique, the pivot will fail.

Column naming pattern

You can select how the pivoted columns should be named. By default the predefined option is:

Method(Value) for Column

You can also create a custom naming scheme for your pivoted columns.

By clicking the drop-down list, you can choose from the recently used names.

Transfer columns (%T) and aggregation methods (%A)

This option allows you to include an overall average, or any other aggregation method listed on the Statistical Functions page, of a particular measurement, for each row in the generated table.

Note: In addition to the methods found on the Statistical Functions page, the method Count() can be used. It includes all values, including empty values, and therefore returns the total number of rows in the column.

Transfer column naming pattern

You can select how the transfer columns should be named. By default the predefined option is:

Aggregation(TransferValue)

By clicking the drop-down list, you can choose from the recently used names.

Sample

Gives you a sample of what the resulting data table will look like. Note: Uses the first 100 rows from the data table, so there might be some differences between the sample and the resulting data table.

See also:

Pivoting Data

Transforming Data

Details on Unpivot Data

Unpivot Data can be used to transform data from a short/wide to a tall/skinny format when adding or replacing data tables. Tip: You can replace a data table with a transformed version of itself.

  • To reach the Unpivot Data dialog:

  1. Select File > Add Data Tables... or File > Add On-Demand Data Table... and add the data of interest.
    If you already have an analysis open, you can also choose:
    File > Replace Data Table... 
    Insert > Columns...
    or Insert > Rows...

  2. Click Show transformations.

  3. Select Unpivot from the drop-down list and click Add....

UnpivotData.png

Option

Description

Available columns

The columns available for use in the unpivot operation.

Click a column name in the list to select it. To select more than one column, press Ctrl and click the column names in the list. Use one of the Add > buttons to send the selected column to either the Columns to transform or Columns to pass through field, see below. You can narrow down the list of available columns by typing a part of a name in the "Type to search" field. You can also enter an expression in the field, using the rules described on the Searching in TIBCO Spotfire page.

Add >

Moves the selected columns from the Available columns field to the field next to the button.

< Remove

Removes a column and brings it back to the Available columns field.

Remove All

Removes all columns from the selected columns fields.

Columns to pass through

The selected columns containing information that should be transferred to the unpivoted data set without any transformation. This could be ID columns, or categorical information such as Region, Priority, etc.

Columns to transform

The selected columns containing the values that you wish to combine into a single column. The column names of these columns will be used as category values in the resulting new category column. Typically, this might be a number of columns containing the same type of data, such as sales figures for different years.

Category column name (contains transformed column names)

Type a column name that summarizes the information provided in the columns that you have selected to transform. For instance, "Year" could be used when sales figures for several different years are to be combined into a single column.

Data type

Allows you to set the data type of the category column, if several are possible.

Value column name

Type a column name that shows what type of information is included in the new value column. For instance, "Sales" would be a good name for a column containing sales figures for several different years.

Data type

Allows you to set the data type of the value column, if several are possible.

Include empty values

Select this check box to transfer empty values to the unpivoted data view. If the check box is cleared, all records containing empty values will be discarded.

Sample

Gives you a preview of what the resulting data table will look like. Note: Uses the first 100 rows from the data table, so there might be some differences between the sample and the resulting data table.

See also:

Unpivoting Data

Transforming Data

Details on Calculate and Replace Column

This dialog is used to calculate a new column and replace a column in a data table with the new, calculated column. It is reached as a transformation step when adding or replacing data tables. Tip: You can replace a data table with a transformed version of itself.

  • To reach the Calculate and Replace Column dialog:

  1. Select File > Add Data Tables... or File > Add On-Demand Data Table... and add the data of interest.
    If you already have an analysis open, you can also choose:
    File > Replace Data Table... 
    Insert > Columns...
    or Insert > Rows...

  2. Click Show transformations.

  3. Select Calculate and replace column from the drop-down list and click Add....

CalculateandReplaceColumn.png

Option

Description

Column to replace

Lists all available columns in the selected data table. Select the column you want to replace by clicking on it.

Available columns

Shows all available columns. Select a column by clicking on it in the list and then click on the Insert Columns button, or double-click on the column to send it to the Expression field. You can narrow down the list of available columns by typing a part of a name in the "Type to search" field. You can also enter an expression in the field, using the rules described on the Searching in TIBCO Spotfire page. Press Ctrl or Shift to select multiple columns.

Insert Columns

Inserts the selected columns in the Expression field.

Available properties for column

Shows all properties that you can use in the calculation of a new column. The column properties are specific to the column selected in the Available columns list to the left.

Select a property by clicking on it in the list and then click on the Insert Properties button, or double-click on the property to send it to the Expression field. The property will automatically be inserted as a value in this context.

See Properties in Expressions for more information.

You can narrow down the list of available properties by using the search field.

Insert Properties

Inserts the properties selected in the Available properties for column list at the current cursor position in the Expression field.

Category

Select a category of functions to limit the choices in the Function list:

All functions

Binning functions

Conversion functions

Date and Time functions

Logical functions

Math functions

Operators

Property functions

Ranking functions

Spatial functions

Statistical functions

Text functions

Function

Select a function by clicking on it in the list and then click on the Insert Function button, or double-click on the function to send it to the Expression field. You can narrow down the list by typing a part of a name in the field where it says "Type to search".

Description

Shows a brief description of the selected function. For more detailed descriptions, see the Functions chapter.

Insert Function

Inserts the selected function in the Expression field.

Expression

This is the text field in which you build your expression. You can insert columns and functions from the lists or enter text as in any standard text editor.

Recent expressions

Displays the ten expressions you have most recently created. You can select one of these and click the Insert button to insert the expression into the Expression field.

Insert

Inserts the selected Recent expression into the Expression field. This will replace the entire content of the Expression field.

Resulting expression

Only of interest when preprocessor functions (such as ${PropertyName}) are used in the expression. Displays the expression after all occurrences of the property have been replaced with its current value or values.

Column name

Type a name for the calculated column.

Sample result

Displays the result of applying the current expression to the first row of the data table. Note that if aggregating functions (such as "Count") are used in the expression, only the first 100 rows will be used in the sample calculation.

If this field shows an error there is a problem with the expression. Moving the mouse pointer over the red exclamation mark next to the Expression field will display an explanation of what is wrong.

Type

Shows the data type of the calculated column.

Formatting...

Opens the Formatting dialog, where you can change the formatting of the calculated column.

See also:

Transforming Data

Details on Calculate New Column

This dialog is used to calculate a new column in a transformation step when adding or replacing data tables.

  • To reach the Calculate New Column dialog:

  1. Select File > Add Data Tables... or File > Add On-Demand Data Table... and add the data of interest.

  2. Click Show transformations.

  3. Select Calculate new column from the drop-down list and click Add....

CalculateNewColumn.png

Option

Description

Available columns

Shows all available columns. Select a column by clicking on it in the list and then click on the Insert Columns button, or double-click on the column to send it to the Expression field. You can narrow down the list of available columns by typing a part of a name in the "Type to search" field. You can also enter an expression in the field, using the rules described on the Searching in TIBCO Spotfire page. Press Ctrl or Shift to select multiple columns.

Insert Columns

Inserts the selected columns in the Expression field.

Available properties for column

Shows all properties that you can use in the calculation of a new column. The column properties are specific to the column selected in the Available columns list to the left.

Select a property by clicking on it in the list and then click on the Insert Properties button, or double-click on the property to send it to the Expression field. The property will automatically be inserted as a value in this context.

See Properties in Expressions for more information.

You can narrow down the list of available properties by using the search field.

Insert Properties

Inserts the properties selected in the Available properties for column list at the current cursor position in the Expression field.

Category

Select a category of functions to limit the choices in the Function list:

All functions

Binning functions

Conversion functions

Date and Time functions

Logical functions

Math functions

Operators

Property functions

Ranking functions

Spatial functions

Statistical functions

Text functions

Function

Select a function by clicking on it in the list and then click on the Insert Function button, or double-click on the function to send it to the Expression field. You can narrow down the list by typing a part of a name in the field where it says "Type to search".

Description

Shows a brief description of the selected function. For more detailed descriptions, see the Functions chapter.

Insert Function

Inserts the selected function in the Expression field.

Expression

This is the text field in which you build your expression. You can insert columns and functions from the lists or enter text as in any standard text editor.

Recent expressions

Displays the ten expressions you have most recently created. You can select one of these and click the Insert button to insert the expression into the Expression field.

Insert

Inserts the selected Recent expression into the Expression field. This will replace the entire content of the Expression field.

Resulting expression

Only of interest when preprocessor functions (such as ${PropertyName}) are used in the expression. Displays the expression after all occurrences of the property have been replaced with its current value or values.

Column name

Type a name for the calculated column you want to add.

Sample result

Displays the result of applying the current expression to the first row of the data table. Note that if aggregating functions (such as "Count") are used in the expression, only the first 100 rows will be used in the sample calculation.

If this field shows an error there is a problem with the expression. Moving the mouse pointer over the red exclamation mark next to the Expression field will display an explanation of what is wrong.

Type

Shows the data type of the new calculated column.

Formatting...

Opens the Formatting dialog where you can change the formatting of the new, calculated column.

See also:

Transforming Data

Details on Data Function – Transformation

This dialog is used to select which function in the library to use as a transformation. Only those data functions that use a data table as input and output parameters will be available for selection.

  • To reach the Data Function - Transformation dialog:

  1. Select File > Add Data Tables... or File > Add On-Demand Data Table... and add the data of interest.

  2. Click Show transformations.

  3. Select Data function from the drop-down list and click Add....

DataFunctions-Transformation.png

Click to select a keyword in the Keywords list that matches the type of data function you are looking for. You can further limit the number of data functions shown by typing some text in the search field. This limits the data functions visible to the ones matching the current search expression. For more information about valid search expressions, see Searching in TIBCO Spotfire.

See also:

Transforming Data

What are Data Functions?

Details on Normalization

Normalization can be used as a transformation step when adding or replacing data tables. Tip: You can replace a data table with a transformed version of itself.

  • To reach the Normalization dialog:

  1. Select File > Add Data Tables... or File > Add On-Demand Data Table... and add the data of interest.
    If you already have an analysis open, you can also choose:
    File > Replace Data Table... 
    Insert > Columns...
    or Insert > Rows...

  2. Click Show transformations.

  3. Select Normalization from the drop-down list and click Add....

Normalization.png

Option

Description

Result options

 

   Add columns

Click this radio button to add new normalized columns to the resulting data table. The old columns will also be kept.

   Replace selected columns

Click this radio button to replace the old columns with the new, normalized ones.

Available columns

Lists the columns available in the selected data source.

Add >

Moves the columns selected in the Available columns list to the Selected columns list.

< Remove

Removes the selected columns from the Selected columns list.

Remove All

Removes all columns from the Selected columns list.

Move Up

Moves the selected column in the Selected columns list up one step.

Move Down

Moves the selected column in the Selected columns list down one step.

Selected columns

Lists the columns that are selected to be normalized.

Method

Specifies the normalization method to use. See Normalizing Columns and the theory section for each method for further information about the various methods.

Baseline column

Specifies the baseline column to use (in some normalization methods only).

Percentage

Specifies the percentage value (P) to use when normalizing by percentile or by trimmed mean.

Description

Shows a brief description of the currently selected normalization method.

Column names setting

Specifies how the naming of the normalized columns should be handled. You can either add the word "Normalized:" to the column name of the original columns or use the expression (normalization equation) as a column name.

If you have selected to replace columns, you will also get a third option that lets you keep the current column names.

See also:

Transforming Data

Details on Exclude Columns

This dialog is reached as a transformation step when adding or replacing data tables.

  • To reach the Exclude Columns dialog:

  1. Select File > Add Data Tables... or File > Add On-Demand Data Table... and add the data of interest.

  2. Click Show transformations.

  3. Select Exclude columns from the drop-down list and click Add....

ExcludeColumns.png

Option

Description

Include

Shows all included columns. You can narrow down the list of available columns by typing a part of a name in the "Type to search" field. You can also enter an expression in the field, using the rules described on the Searching in TIBCO Spotfire page. Press Ctrl or Shift to select multiple columns.

Add >

Adds the selected columns to the Exclude list.

< Remove

Removes the selected columns from the Exclude list.

Remove All

Removes all columns from the Exclude list.

Exclude

Lists all columns you have chosen to exclude.

Preview

Shows how many columns you have chosen to include and gives you a preview of what the data will look like after the completion of this transformation.

See also:

Transforming Data

Details on Change Column Names

Change Column Names can be used as a transformation step when adding or replacing data tables. Tip: You can replace a data table with a transformed version of itself.

  • To reach the Change Column Names dialog:

  1. Select File > Add Data Tables... or File > Add On-Demand Data Table... and add the data of interest.
    If you already have an analysis open, you can also choose:
    File > Replace Data Table... 
    Insert > Columns...
    or Insert > Rows...

  2. Click Show transformations.

  3. Select Change column names from the drop-down list and click Add....

ChangeColumnNames.png

Option

Description

Available columns

Shows all available columns. You can narrow down the list of available columns by typing a part of a name in the "Type to search" field. You can also enter an expression in the field, using the rules described on the Searching in TIBCO Spotfire page. Press Ctrl or Shift to select multiple columns.

Add >

Adds the selected columns to the Columns to rename list.

< Remove

Removes the selected columns from the Columns to rename list.

Remove All

Removes all columns from the Columns to rename list.

Columns to rename

Lists the columns you have selected to rename, and shows the placeholder name ([%C]) that can be used in the Expression field to apply the same function to all columns.

Function

Select a function by clicking on it in the list and then click on the Insert button, or double-click on the function to send it to the Expression field.

Description

Shows a brief description of the selected function. For more detailed descriptions, see the Text functions page.

Insert >

Inserts the selected function in the Expression field.

Expression

This is the text field in which you build the expression that renames the columns. You can insert functions from the lists or enter text as in any standard text editor. Many of the functions require you to type either the name of the column you want to rename or the common placeholder given by the Columns to rename list.

New column names

Shows the renamed columns.

See also:

Transforming Data

Details on Change Data Types

Change Data Types can be used as a transformation step when adding or replacing data tables. Tip: You can replace a data table with a transformed version of itself.

  • To reach the Change Data Types dialog:

  1. Select File > Add Data Tables... or File > Add On-Demand Data Table... and add the data of interest.
    If you already have an analysis open, you can also choose:
    File > Replace Data Table... 
    Insert > Columns...
    or Insert > Rows...

  2. Click Show transformations.

  3. Select Change data types from the drop-down list and click Add....

ChangeDataTypes.png

Option

Description

Available columns

Shows all available columns. If the data type has been changed for a column, the new data type will appear under "New Data Type". If not, that field will be empty for that column. You can narrow down the list of available columns by typing a part of a name in the "Type to search" field. You can also enter an expression in the field, using the rules described on the Searching in TIBCO Spotfire page. Press Ctrl or Shift to select multiple columns.

New data type

Allows you to choose which data type you want a selected column to have.

Sample value

Shows one sample value from the chosen column with the new data type applied.

Formatting

Opens the Formatting dialog where you can change the formatting of the column with the new data type.

Reset All

Resets the data types of all columns.

Preview

Shows a preview of what the data will look like after the completion of this transformation.

See also:

Transforming Data

Missing File

Details on Missing File

This dialog is shown when you open a linked analysis file in which the file path to one or more of the source files is no longer correct.

MissingFile.png

Option

Description

The following file could not be found

Shows the name and path to the file that the linked analysis file is trying to open.

What do you want to do?

 

   Search for the missing file

Select this option to open the Search for Missing File dialog and automatically search for the file on your local computer or on the network.

   Browse for the missing file

Select this option to manually browse for the missing file. Use this option if the source file used by the linked analysis file has been renamed.

   Use the file found in the same directory as the analysis

Spotfire has found a file in the same directory as the analysis file, with the same name as the linked source file. Use this option if you know that this is the correct file.

Details on Search for Missing File

SearchforMissingFile.png

Option

Description

Search for the file named

Shows the file name of the missing file. This cannot be changed; hence, you cannot search for a file that has been renamed or has changed file type.

Look in

Displays the path to the folder in which the search will be performed.

Browse...

Opens the Browse for Folder dialog, where you can select a different folder, on your local computer or on a network, in which to perform the search.

Search Now

Starts the search, in the specified folder.

Stop Search

Stops a search.

Name

Lists the names of all files that match the search. This is always the same as the file name at the top of the dialog.

In Folder

Lists the path to the file.

Size

Lists the size of the file.

Modified

Lists the date when the file was last modified.

See also:

Details on Missing File

Inserting More Data

Insert Calculated Columns

What is a Calculated Column?

Occasionally, the columns included in a data table do not allow you to perform all necessary operations, or to create the visualizations needed to fully explore the data table. However, in many cases the necessary information can be computed from existing columns by using the mathematical and logical expressions provided by the Insert Calculated Column tool.

Note: A calculated column is treated like any other column and its contents are static during all further analysis. If you want to use expressions that change during filtering of your data table, you should instead use custom expressions that are defined where you need them (for example, select Custom Expression... from the right-click menu on the axis selector).

See also:

How to Insert a Calculated Column

Details on Insert Calculated Column

Embedded or Linked Data?

How to Insert a Calculated Column

TIBCO Spotfire supports two different types of expressions: Insert Calculated Column, which creates a new column in the data table, and Custom Expression, which is used to dynamically modify the expression used on an axis or to define a setting. Both types of expressions are created with a similar user interface.

  • To insert a calculated column in the data table:

  1. Select Insert > Calculated Column....

  2. If you have more than one data table in the document, select the Data table to work on.

  3. Specify a suitable expression by either typing it directly into the Expression text field, or by selecting columns, properties and functions from the list.
    An example of an expression could be: [Exports m$]/[Population].

    Comment: You can always modify the expression by editing the text in the Expression field, using cut-and-paste, or by typing text. For a detailed description of the expression language, see General Syntax and other topics under Expression Language. For details on the syntax to use when adding properties, see Properties in Expressions.

  4. Verify that the result seems reasonable by looking at the Sample result field.

    Comment: If an error message is shown, there is a problem with the expression. Go back and modify the expression until the desired result is achieved.

  5. If desired, you can change the Formatting of the new column.

  6. Type a Column name for the new column.

  7. Click OK.

    Response: The expression is now evaluated for each row in the data table and a new column is created. A filter will appear with the name of the new column you created.

Tip: If you have previously created a suitable expression, you may select it from the Recent expressions list and click the Insert button.

  • To reach the Custom Expression dialog:

  1. Right-click on a column selector on an axis, in a Visualization Properties dialog, or in the Legend, to display the menu.

  2. Select Custom Expression....

    Comment: See How to Insert a Custom Expression for more information.

See also:

What is a Calculated Column?

Details on Insert Calculated Column

Details on Custom Expression

Details on Insert Calculated Column

TIBCO Spotfire supports two different types of expressions: Insert Calculated Column, which creates a new column in the data table, and Custom Expression, which is used to dynamically modify the expression used on an axis or to define a setting. Both types of expressions are created with a similar user interface.

  • To reach the Insert Calculated Column dialog:

Select Insert > Calculated Column....

InsertCalculatedColumn.png

Option

Description

Data table

Only available when more than one data table is present in the analysis and the dialog has been opened via the main menu.

Specifies the data table where the calculated column will be inserted.

Available columns

Shows all columns that you can use in the calculation of a new column.

Select a column by clicking on it in the list and then click on the Insert Columns button, or double-click on the column to send it to the Expression field. Press Ctrl or Shift to select multiple columns.

You can narrow down the list of available columns by typing a part of a name in the "Type to search" field. You can also enter an expression in the field using the rules described on the Searching in TIBCO Spotfire page.

Insert Columns

Inserts the columns selected in the Available columns list at the current cursor position in the Expression field.

Available properties for column

Shows all properties that you can use in the calculation of a new column. You can narrow down the list of available properties by using the search field. The column properties shown are specific to the column selected in the Available columns list to the left.

Select a property by clicking on it in the list and then click on the Insert Properties button; or double-click on the property to send it to the Expression field. The property will automatically be inserted as text. However, there may be occasions where you need to insert the property as a function to receive the desired result. See Properties in Expressions for more information. Use the pop-up menu in this field to select how to insert the property or type the correct syntax manually.

If you want to define a new property to use in the expression, right-click in the Available properties field and select New > [Property Type] Property... from the pop-up menu. You can also edit or delete custom properties by using the pop-up menu.

Insert Properties

Inserts the properties selected in the Available properties for column list at the current cursor position in the Expression field.

Category

Select a category of functions to limit the choices in the Function list:

All functions

Binning functions

Conversion functions

Date and Time functions

Logical functions

Math functions

Operators

Property functions

Ranking functions

Spatial functions

Statistical functions

Text functions

Function

Select a function by clicking on it in the list and then click on the Insert Function button, or double-click on the function to send it to the Expression field.

Type a search string in the text field to limit the number of items in the Functions list.

You can also click on any function and type the first letter of the desired function name to jump to a specific location in the list.

Description

Shows a brief description of the selected function. For more detailed descriptions, see the Expression Language chapter.

Insert Function

Inserts the selected function at the current cursor position in the Expression field.

Expression

This is the text field in which you build your expression. You can insert columns and functions from the lists, or enter text as in any standard text editor.

Cut/Copy/Paste works in the field using standard Ctrl+X/Ctrl+C/Ctrl+V.

Also, it is possible to undo/redo the last action by pressing Ctrl+Z.

Recent expressions

Displays the ten expressions you have most recently created. You can select one of these and click the Insert button to insert the expression into the Expression field.

Insert

Inserts the selected Recent expression into the Expression field. This will replace the entire content of the Expression field.

Resulting expression

Only of interest when preprocessor functions (such as ${PropertyName}) are used in the expression. Displays the expression after all occurrences of the property have been replaced with its current value or values.

Column name

The name of the new calculated column.

Sample result

Displays the result of applying the current expression to the first row of the data table. Note that if aggregating functions (such as "Count") are used in the expression, only the first 100 rows will be used in the sample calculation.

If this field shows an error there is a problem with the expression. Moving the mouse pointer over the red exclamation mark next to the Expression field will display an explanation of what is wrong.

Type

The type of the new calculated column.

Formatting...

Opens the Formatting dialog, where you can change the formatting of the new calculated column.

Expression Language
General Syntax

Column references

Accessing columns is done by enclosing the column name in "[" and "]" characters (square brackets). The brackets are needed when the column name contains characters other than letters, digits or _ (underscore). They are also required when the column name is the same as a function name, or when the column name begins with a digit. If a column name contains the closing bracket character "]", it must be escaped by doubling it: "]]". For example, if the column name is "Name]", it would be written as [Name]]] in an expression.

If the column name has no special character or whitespace, or is not also a function name, it can be entered without brackets.

Examples:

Column1
[Column1]
[Binned Column1]
[1]
[!@#$%^&*()[\]]\\]

Constants are converted to columns, so even if a method says that the argument has to be a column, it is acceptable to use a constant.

Case sensitivity

  • Variables, functions and keywords are case insensitive: SUM(C1) = Sum(C1) = sum(C1)

  • Column name references are case sensitive.

  • Method call names are case insensitive. All methods which are defined in the add-in framework can be used. See later sections for information about the different methods supported.

Expression results

An expression describes how a new column should be calculated. The newly created column will have the same number of rows as all the other columns in the data table. The default null handling behavior is that operations on null return null. This means that if a new column is calculated as [Column A]*2 and there are empty values on some rows in Column A, then the new column will have empty values on those rows as well.

Multiple columns are normally separated with a comma. If multiple expressions are used, the AS keyword can be used to rename the expressions in the custom expression dialog. See examples below.

Categorical expressions, NEST and CROSS

In custom expressions, categorical and hierarchical columns and expressions are written between angle brackets, "<" and ">". When more than one category is available within the expression, which combinations of categories to show must also be specified. This is done using the keywords NEST (which shows all actual combinations of values in the data) or CROSS (which shows all possible combinations of the values, including combinations that currently hold no data). All columns in the expression must be separated by "nest" or "cross" instead of a comma, and mixing the two combination options is not permitted.

For example, if we have a data table containing some sales data for each month during two years, but the data for February is missing for one year, the different options will give the following results:

Nest:

GeneralSyntax.png

Since there are no data available for February 2001, there will not be a bar (nor a placeholder for a bar) there. This visualization is set up using the All values in data (nest) option in the Advanced Settings dialog, reached from the category axis property page for the visualization. It corresponds to the custom expression: <[Year] NEST [Month]>

Cross:

When the CROSS option is selected, all possible combinations of the categories are displayed. This means that there will be a placeholder for the February column for 2001, even though there is no data available for February. The All possible values (cross) option has been selected in the Advanced Settings dialog and the corresponding custom expression would be: <CROSS [Year] CROSS [Month]>

(The first CROSS is optional.)

Examples of expressions:

[Column1]

[Column1], [Column2]

[Column1] AS [My first column], [Column2] AS [My second column]

<[Category column 1]>

<[Category column 1] nest [Category column 2]>

<[Category column 1] cross [Category column 2] cross [Category column 3]>

123.23

39+12*3

-(1-2)

cast (power(2,2) as integer)

null

case Column1 when 10 then 'ten' else 'not ten' end

case when 1 < 3 or 3 < 2 then 10 else 32 end

case when Column1 is not null then Column1 else Column2 end

See also:

Data Types

Functions Overview

Invalid Values

Custom Expressions Overview

Operators
Data Types

The available data types are:

  • Integer

  • LongInteger

  • Real

  • SingleReal

  • Currency

  • Date

  • DateTime

  • Time

  • TimeSpan

  • Boolean

  • String

  • Binary

All real-number data formats except Currency (Decimal) use a binary floating-point representation of the values. This means that some calculations which should result in an exact number may be displayed as a number that needs to be rounded off, due to the nature of the base-two calculation. When more calculations are performed after one another, errors can accumulate and may become a problem.

Data Type

Description

Integer

Integer values are written as a sequence of digits, possibly prefixed by a + or - sign. The integer values that can be specified range from -2147483648 to 2147483647. If used where a decimal value was expected, the integer values are automatically converted to decimal values.

Note: Hexadecimal values can be used in custom expressions and in calculated columns. They cannot be used when opening data. Hexadecimal-formatted values have a size limitation of 8 characters.

Examples:

0
101
-32768
+55
0xff            = 255
0x7fffffff            = 2147483647
0x80000000       = -2147483648

LongInteger

LongInteger can be used if the range for the standard Integer is not enough for your needs. It ranges from -9223372036854775808 to 9223372036854775807. LongInteger cannot be converted to Real without precision loss, but it can be converted to Currency without precision loss.

Note: Hexadecimal values can be used in custom expressions and in calculated columns. They cannot be used when opening data.

Examples:

2147483648

0x7FFFFFFFFFFFFFFF = 9223372036854775807

0x8000000000000000 = -9223372036854775808

Real

Real values are written as standard floating point numbers with a period for a decimal point and no thousands separator. The real values that can be specified range from -8.98846567431157E+307 to 8.98846567431157E+307.

The number of significant digits that can be shown is limited to 15, even though 16 can be used in calculations.

Math operations on real values which produce results that cannot be represented by the real data type generate numeric errors. In the resulting data table, these special cases will be filtered out and replaced by null.

Examples:

0.0
0.1
10000.0
-1.23e-22
+1.23e+22
1E6

SingleReal

SingleReal values are written as standard floating point numbers with lower precision and range than Real. SingleReal occupies 50% less memory than Real. The SingleReal values that can be specified range from -1.7014117E+38 to 1.7014117E+38.

The number of significant digits that can be shown is limited to 7, even though 8 can be used in calculations.

SingleReal can be converted to Real with minor precision loss.

Currency

Currency constants are written as integer or real constants with an 'm' suffix.

The data format behind the currency type is decimal. The decimal data format uses base 10 in its calculations, which means that the round-off errors that may occur when doing binary calculations can be avoided with this format. However, this also means that heavy calculations take longer.

The number of significant digits that can be shown for a currency value is 28 (29 can be used in calculations). Currency values that can be specified range from -39614081257132168796771975168 to 39614081257132168796771975168.

Currency columns cannot be used in data functions.

Date

A date and time format depending on the locale on your computer. Dates from January 1, 1583 and forward are supported.

Examples:

6/12/2006

June 12

June, 2006

Note that the Date format is not directly supported by Spotfire Statistics Services. See also How to Use Data Functions.

DateTime

A date and time format depending on the locale on your computer. Dates from January 1, 1583 and forward are supported.

Examples:

6/12/2006

Monday, June 12, 2006 1:05 PM

6/12/2006 10:14:35 AM

Time

A date and time format depending on the locale on your computer.

Examples:

2006-06-12 10:14:35

10:14

10:14:35

Note that the Time format is not directly supported by Spotfire Statistics Services. See also How to Use Data Functions.

TimeSpan

TimeSpan is a value describing the difference between two dates.

It has 5 possible fields:

Days: min -10675199, max 10675199

Hours: min 0, max 23

Minutes: min 0, max 59

Seconds: min 0, max 59

Fractions (decimals of seconds): up to three decimals, i.e., the precision is 1 ms.

TimeSpan values can be displayed on a compact form: [-]d.h:m:s.f ([-]days.hours:minutes:seconds.fractions) or written out with words or abbreviations for each available field. Some of the descriptive forms can be localized.

Total min: -10675199.02:48:05.477

Total max: 10675199.02:48:05.477

Boolean

True and false. Booleans are used to represent true and false values returned by comparison operators and logical functions.

The display values can be localized.

Examples:

true
false
1 < 5

String

String values are surrounded by double quotes or single quotes. A string value can contain any sequence of UNICODE characters, but the delimiter symbol cannot be used within the string unless it is escaped, which is done by entering the delimiter symbol twice in a row (i.e., '' or ""). Backslash is used to escape special characters, so a literal backslash must also be escaped.

The basic escaping rules are that only the characters defined below can be used after a \; anything else will generate an error.

Examples:

"Hello world"
"25""23"
"1\n2\n"
"C:\\TEMP\\image.png"

Binary

May contain any type of data, encoded in binary form.

Examples:

Images

Chemical structure information

 

Escape sequence and result:

\uHHHH - Any Unicode character expressed as four hexadecimal characters, 0-F.

\DDD - A character in the range 0-255 expressed as three octal digits, 0-7.

\b - \u0008: backspace (BS)

\t - \u0009: horizontal tab (HT)

\n - \u000a: linefeed (LF)

\f - \u000c: form feed (FF)

\r - \u000d: carriage return (CR)

\\ - \u005c: backslash \

Conversion to other data types

The data types supported in expressions are the same types as are supported in the data model. Converting a value from one data type to another is called casting.

Implicit casting to real is performed when integer columns are used in calculations and the result is a non-integer. If the result is an integer but larger than the limit for the Integer data type, it will be implicitly cast to a LongInteger. Integers can also be implicitly cast to Currency; for example, if an Integer and a Currency column are added, the result will be a Currency column.

You may also end up with a Currency when the result of a LongInteger operation exceeds the LongInteger limit, since a LongInteger cannot be cast to Real without the risk of losing precision. All operations using TimeSpan (except a simple TimeSpan cast) return a DateTime. For any other conversions, use the Conversion Functions when calculating new columns or writing custom expressions. Binary objects cannot be cast to any other data type.
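
For example, the following expressions illustrate the implicit casts described above (the column names are hypothetical):

[IntegerColumn] / 2   -> Real, when the result is a non-integer

2147483647 + 1   -> LongInteger, since the result exceeds the Integer limit

[IntegerColumn] + [CurrencyColumn]   -> Currency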

Conversion of the data types for several columns simultaneously can be done using the Change Data Types transformation tool. Transformations can be done on existing data tables via the Add Data Tables dialog or the Replace Data Table dialog.

See also:

General Syntax

Formatting Overview

Format String

Operators

Operator

Description

 - Arg1

Negates the argument. The argument and the result are of type real.

Arg1 + Arg2

Adds the two arguments. See operator & for string concatenation.

Arg1 - Arg2

Subtracts Arg2 from Arg1.

Arg1 * Arg2

Multiplies the two arguments. The arguments and the result are of type real or decimal.

Arg1 / Arg2

Divides Arg1 by Arg2. The arguments and the result are of type real or decimal. Division by zero results in an invalid value.

 

Examples:  
7/2   -> 3.5
0/0   -> (Empty)
-1/0  -> (Empty)

Arg1 & Arg2

Appends Arg2 to the end of Arg1. The arguments can be of any type, but are converted to strings. The result is of type string. See also function Concatenate.

 

Examples:
"April " & (20+1) & "st"    -> "April 21st"
null & "Ape"                      -> (Empty)

Arg1 % Arg2

Returns the remainder of the division of Arg1 by Arg2. The arguments and the result are of type real or decimal. Invalid values are propagated to the result column.

 

Example:

3.5 % 2.5 -> 1.00

Arg1^Arg2

Returns Arg1 raised to the Arg2 power.

 

Example:

2.5^3

[Value Column]^2

Arg1 < Arg2

Operator which can be a part of an IF or a CASE statement. Returns true if Arg1 is less than Arg2. The arguments can be of any type, but must both be of the same type. The result is of type boolean. If any argument is invalid, the result is invalid. The operator is defined for comparing normal numbers to each other; other combinations result in invalid values.

 

Examples:

If( 1 < 2, "true", "false" )         -> true

Case when 2 < 1 then "true" else "false" end  -> false
If(1<null, "true", "false")         -> (Empty)
If(1 < 1/0, "true", "false")        -> (Empty)

Arg1 > Arg2

Operator which can be a part of an IF or a CASE statement. Returns true if Arg1 is greater than Arg2. The arguments are of type real and the result is of type boolean. See operator < for the definition of valid arguments.

Arg1 <= Arg2

Operator which can be a part of an IF or a CASE statement. Returns true if Arg1 is less than or equal to Arg2. The arguments are of type real and the result is of type boolean. See operator < for the definition of valid arguments.

Arg1 >= Arg2

Operator which can be a part of an IF or a CASE statement. Returns true if Arg1 is greater than or equal to Arg2. The arguments are of type real and the result is of type boolean. See operator < for the definition of valid arguments.

Arg1 = Arg2

Operator which can be a part of an IF or a CASE statement. Returns true if Arg1 is equal to Arg2. The arguments can be of any type, but must both be of the same type. The result is of type boolean. If any argument is null, the result is null. For arguments of type real, see operator < for the definition of valid arguments.

 

Examples:

If(1 = 2, "true", "false" )                                            -> false

Case when 2 = 2 then "true" else "false" end     -> true

If("Hello" = "hello", "true", "false" )       -> false
If("" = null, "true", "false" )                      -> (Empty)
If(null = null, "true", "false" )                  -> (Empty)

Arg1 <> Arg2

Operator which can be part of an 'IF' or a 'CASE' statement. Returns true if Arg1 is not equal to Arg2. The arguments can be of any type, but must both be of the same type. The result is of type boolean. If any argument is invalid, the result is invalid. For arguments of type real, see operator < for the definition of valid arguments.

Arg1 ~= Arg2

 

Operator which can be part of an 'IF' or a 'CASE' statement. The arguments can be of any type, but will be treated as string columns. Returns true if the Arg2 regular expression string matches the Arg1 string.

 

Some characters, like for instance the backslash character "\", need to be escaped to work when using calculated columns. See literature about regular expression language elements, e.g., on MSDN, for more information.

 

Examples:

If( "aab" ~= "a+" , "true", "false" )           -> true

Case when "aba" ~= ".a+$" then "true" else "false" end   -> true

And(Arg1, ...)

Operator which can be part of an 'If' or 'Case' statement. It takes two or more boolean expressions as arguments and returns true only if all of the expressions are true.

 

Examples:

If( 1 < 2 and 2 < 3, "true", "false" )

Case when false and true then "true" else "false" end

Not(Arg1)

Operator which can be part of an 'If' or 'Case' statement. It negates the boolean expression given as argument.

 

Examples:

If( not 1 < 2, "true", "false" )

Case when not true then "true" else "false" end

Or(Arg1, ...)

Operator which can be part of an 'If' or 'Case' statement. It takes two or more boolean expressions as arguments and returns true if at least one of the expressions is true.

 

Examples:

If( 1 < 2 or 2 < 3, "true", "false" )

Case when false or true then "true" else "false" end

See also:

Operator Precedence

Operator Precedence

Below is a table showing the hierarchy of operators, with the highest precedence operators listed first.

Expressions inside parentheses are evaluated first; nested parentheses are evaluated from the innermost parentheses outward.

Operators in the same row of the table have equal precedence.

Operators       Type                    Order of Evaluation

( )             Parentheses             left to right

- +             Unary minus and plus    right to left

* / %           Multiplicative          left to right

+ -             Additive                left to right

&               Concatenation           left to right

< > <= >=       Relational              left to right

= <>            Equality                left to right
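
A few illustrative expressions following these rules:

1 + 2 * 3   -> 7 (multiplication before addition)

(1 + 2) * 3   -> 9 (parentheses first)

"Sum: " & 1 + 2   -> "Sum: 3" (addition before concatenation)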

See also:

Operators

Functions
Functions Overview

Select which types of Functions you are interested in:

Binning functions

Conversion functions

Date and Time functions

Logical functions

Math functions

OVER functions

Property functions

Ranking functions

Spatial functions

Statistical functions

Text functions

Note that when you are using in-db (external) data tables the functions available depend on which functions are available in the external data source. See the documentation of your data source for more information about data source specific functions.

Binning Functions

Function

Description

BinByDateTime

Creates a binned column based on a natural date, datetime or time hierarchy.

The first argument is the Date, Time or DateTime column to bin. The second argument is the definition of the levels in the hierarchy. The hierarchy levels should be written in the form of a string containing the desired date parts, separated by dots, for example "Year.Quarter.Month". The third argument is the pruning level which specifies the level of the hierarchy to display.

If you write a custom expression based on a BinByDateTime expression you will see a column selector with all levels of the specified hierarchy available, but with the hierarchy slider handle positioned at the specified pruning level.

 

Valid arguments for Arg2 are combinations of:

'year' or 'yy' - The year.

'quarter' or 'qq' - The quarter.

'month' or 'mm' - The month.

'day of year' or 'dy' - The day of year.

'day' or 'dd' - The day.

'week' or 'wk' - The week.

'day of week' or 'dw' - The weekday.

'hour' or 'hh' - The hour.

'minute' or 'mi' - The minute.

'second' or 'ss' - The second.

'millisecond' or 'ms' - The millisecond.

 

Example:

BinByDateTime([Column],"Year.Quarter.Month.Day",2)

 

For a date column called Order Date, the expression above would result in a column selector with a hierarchy slider covering the levels Year, Quarter, Month and Day, with the handle positioned at the Month level.

Pruning level 0 would set the slider handle to the year position, 1 would mean the quarter, 2 the month, and 3 the day.

BinByEvenDistribution

Creates a binned column where each bin contains approximately the same number of unique values; the last bin may contain more unique values than the others. The first argument is the column to bin and the second argument is the number of bins. Invalid values will give an invalid result.

 

Example:

BinByEvenDistribution([Column], 5)

BinByEvenDistribution(Rank([Column])*Count() + RowId(), 3)

BinByEvenIntervals

Creates a binned column where the value range is divided into equal intervals. The first argument is the column to bin and the second argument is the number of bins.

 

Example:

BinByEvenIntervals([Column], 5)

BinBySpecificLimits

Creates a binned column with specific limits for the bins. The first argument is the column to bin and the following arguments are the limits for the bins. All rows which have values larger than the largest limit will have the same bin value. Invalid values will give an invalid result.

 

Example:

BinBySpecificLimits([Column], 1, 2, 3, 10)

BinByStdDev

Creates a binned column where the values are divided into bins depending on their distance from the mean, measured in standard deviations. The first argument is the column to bin and the following arguments are the numbers of standard deviations from the mean where bin limits are placed. The standard deviation arguments should be given in ascending order and all values should be positive.

 

Example:

BinByStdDev([Column], 0.5, 1)

This will create a binning for:

<= -1 standard deviation

-1 standard deviation

-0.5 standard deviation

0.5 standard deviation

1 standard deviation

> 1 standard deviation

BinBySubstring

Creates a binned column based on beginning or end of value. The first argument is the string column to bin and the following is the number of characters in the substring. If the second argument is negative the substring starts from the end of the value.

 

Examples:

BinBySubstring([Column], -4)

BinBySubstring(String([Integer Column]), 1)

See also:

Conversion functions

Date and Time functions

Logical functions

Math functions

OVER functions

Property functions

Ranking functions

Spatial functions

Statistical functions

Text functions

Conversion Functions

The data types available for conversion are further described in the Data Types section.

Function

Description

Cast(Arg1 as type)

Casts any expression to any type (except Null/Undefined).

Invalid values are propagated. Casting performed for different types of input and output types results in different outputs. See Cast Method for more information.

 

Example:

Cast([IntegerColumn] as Currency)

Boolean(Arg1)

Converts the column or value to a Boolean.

 

Example:

Boolean([Column])

Currency(Arg1)

Converts the column or value to a Currency.

 

Example:

Currency([Column])

Date(Arg1, ..., Arg3)

Converts the column or values to a Date. If a single argument is used, Arg1 can be of type String or DateTime. If a String is specified, the date must be written in a format that Spotfire can recognize. Additionally, all parts of the date (year, month and day) must be present. See examples below. If a DateTime is specified, the time part is removed.

If three integer arguments are given, then the first argument is the year, the second is the month and the third is the day of the month.

See also Date and Time functions.

 

Examples:
Date("2003-03-21")                        -> 3/21/2003

Date("3/21/03")                               -> 3/21/2003
Date("10")                                        -> (Empty)
Date(null)                                         -> (Empty)
Date("2003-03-21 11:37:00")       -> 3/21/2003

Date(2003,03, 21)                          -> 3/21/2003

(The output formats available are dependent on your current locale.)

DateTime(Arg1, Arg2,..., Arg7)

Converts the column or values to a DateTime. If a single argument is used, Arg1 can be of type String or Date. If a String is specified, the date must be written in a format that Spotfire can recognize. Additionally, at least all parts of the date (year, month and day) must be present. If a Date is specified, the time part is set to 00:00:00 (12:00:00 AM).

If seven integer arguments are given, then the first argument is the year, the second is the month, the third is the day of the month, the fourth is the hour, the fifth is the minute, the sixth is the second and the seventh argument is the millisecond.

See also Date and Time functions.

 

Examples:
DateTime("2003-03-21 11:37:00")    -> 3/21/2003 11:37:00 AM
DateTime("10")                                      -> (Empty)  
DateTime(null)                                       -> (Empty)
DateTime("2003-03-21")                      -> 2003-03-21 00:00:00

DateTime(2003, 03, 21, 11, 37, 00)   -> 2003-03-21 11:37:00

(The output formats available depend on your current locale.)

Integer(Arg1)

Converts the column or value to an integer number. If the conversion fails, an error is returned. Arg1 can be of type integer, real or string, and the result is of type integer. Real numbers are truncated, i.e., only the integer part is used.

 

Examples:
Integer("-123")       -> -123
Integer(-2.99)        -> -2
Integer("0%")         -> (Empty)
Integer(1e20)        -> (Empty)

Integer(null)           -> (Empty)

LongInteger(Arg1)

Converts the column or value to a LongInteger.

 

Example:

LongInteger([Column])

Real(Arg1)

Converts the column or value to a real number. If the conversion fails, an error is returned. Arg1 can be of type integer, real or string, and the result is of type real.

 

Examples:
Real(1.23)      -> 1.23
Real(2)            -> 2
Real("0%")      -> (Empty)
Real(null)        -> (Empty)

SingleReal(Arg1)

Converts the column or value to a SingleReal.

 

Example:

SingleReal([Column])

SN(Arg1, Arg2)

Substitutes null values. Returns Arg1 if it is not null, Arg2 otherwise. Arg1 and Arg2 can be of any type, but both must be of the same type or null. The result is of the same type as the arguments.

 

A common usage is to substitute null values in a column. If Arg1 is a column, Arg2 can be either a value of the same type as the contents of the column or a column with the same content type. If Arg2 is also a column, the null values in Arg1 will be replaced with the values from the same rows in Arg2.

 

Examples:
SN(1, 2)        -> 1
SN(null, 2)    -> 2
SN(0/0, 2)     -> #NA

SN([Column], 1) -> 1 (if null value in column)

SN([Column1], [Column2]) -> (value from Column2 if null value in Column1)

String(Arg1)

Converts the column or value to a string. This conversion never fails except if Arg1 is null. Arg1 can be of any type and the result is of type string.

 

Examples:
String(1.23)     -> "1.23"
String(null)      -> (Empty)

Time(Arg1, Arg2,..., Arg4)

Converts the column or values to a time. If the conversion fails, an error is returned. If a single argument is used, Arg1 can be of type String or DateTime.  If a String is specified, the time must be written in a format that Spotfire can recognize. Additionally, both the hour and the minute must be specified. See examples below. If a DateTime is specified, the date part is removed.

If four integer arguments are given, then the first argument is the hour, the second is the minute, the third is the second and the fourth is the millisecond.

See also Date and Time functions.

 

Examples:
Time("11:37:00")                         -> 11:37:00
Time("10")                                    -> (Empty)
Time (null)                                    -> (Empty)
Time(11, 30, 20, 4)                       ->11:30:20

(The output formats available depend on your current locale.)

TimeSpan(Arg1, Arg2, ..., Arg5)

Creates a TimeSpan from a column or values. If a single argument is given, the input column can be of type String or TimeSpan. If a String is specified, TimeSpan must be written in the format "[-]d.h:m:s.ms".

If five arguments are given, then the first argument is the days, the second is the hours, the third is the minutes, the fourth is the seconds and the fifth is the milliseconds. The first four arguments are integers, the last is a real number.

 

Examples:

TimeSpan([Column])

TimeSpan("247.5:17:11.5002")

TimeSpan(247, 05, 17, 11, 500.2)

See also:

Binning functions

Date and Time functions

Logical functions

Math functions

OVER functions

Property functions

Ranking functions

Spatial functions

Statistical functions

Text functions

Cast Method

This is an overview of what will happen when a column is cast from one data type to another.

For each input data type, the result of casting to each output type is listed below.

Input: Integer

To Integer: the same value. To Real: the value cast to real. To Decimal: the value converted to decimal if it fits in the limit, null otherwise. To Date/DateTime/Time: a Date* value is created using the integer value as ticks. To String: formatted using the input formatter. To Binary: null.

Input: Real

To Integer: the integer part of the real value if it fits in the limit, null otherwise. To Real: the same value. To Decimal: the value converted to decimal if it fits in the limit, null otherwise. To Date/DateTime/Time: a Date* value is created using the integer part of the value as ticks. To String: formatted using the input formatter. To Binary: null.

Input: Decimal

To Integer: the integer part of the decimal value if it fits in the limit, null otherwise. To Real: the decimal value rounded to Real if it fits, null otherwise. To Decimal: the same value. To Date/DateTime/Time: a Date* value is created using the integer part of the value as ticks. To String: formatted using the input formatter. To Binary: null.

Input: Date/DateTime/Time

To Integer: the number of ticks if it fits in the limit, null otherwise. To Real: the number of ticks converted to real. To Decimal: the number of ticks converted to decimal if it fits in the limit, null otherwise. To Date/DateTime/Time: the same value. To String: formatted using the input formatter. To Binary: null.

Input: String

To Integer, Real, Decimal or Date/DateTime/Time: the string is parsed using the output formatter; null if it fails to parse. To String: the same value. To Binary: null.

Input: Binary

To Binary: the same value. To any other type: null.

* Date, DateTime or Time.
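
For example, consistent with the table above (the column name is hypothetical):

Cast(2.9 as Integer)   -> 2

Cast("1.5" as Real)   -> 1.5

Cast([BinaryColumn] as String)   -> (Empty)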

See also:

Conversion Functions

Date and Time Functions

Function

Description

DateAdd(Arg1, Arg2, (Arg3))

Adds an interval to a Date, Time or a DateTime. The method can add either a TimeSpan or an integer representing a specified date or time part (e.g., a number of days).

 

If a TimeSpan is to be added, two arguments are needed: a DateTime column and a TimeSpan column.

 

If an integer value is to be added to a date or time part, three arguments are used: Arg1 is a string describing which part to add. Arg2 is a number which contains the number of parts to add. Arg3 is the Date, Time or DateTime column.

 

Valid arguments for Arg1 are:

'year' or 'yy' - The year.

'quarter' or 'qq' - The quarter.

'month' or 'mm' - The month.

'day' or 'dd' - The day.

'week' or 'wk' - The week.

'hour' or 'hh' - The hour.

'minute' or 'mi' - The minute.

'second' or 'ss' - The second.

'millisecond' or 'ms' - The millisecond.

 

Examples:

DateAdd([Date Column], [TimeSpan Column])

DateAdd('year', 2, [Date Column])

DateAdd('month', 1, [Date Column])

DateDiff(Arg1, Arg2, (Arg3))

Calculates the difference between two Date, Time or DateTime columns. The result is presented either as a TimeSpan or as a real value representing a specified time part (e.g., number of days).

 

If two arguments are used (a start date column and a stop date column) the result will be a TimeSpan value displaying the total difference.

 

If three arguments are used, the first argument should be the part to compare. The second argument is the start date column and the third argument is the stop date column. The result of the operation is a real value.

 

Valid arguments for Arg1 are:

'year' or 'yy' - The year.

'quarter' or 'qq' - The quarter.

'month' or 'mm' - The month.

'day' or 'dd' - The day.

'week' or 'wk' - The week.

'hour' or 'hh' - The hour.

'minute' or 'mi' - The minute.

'second' or 'ss' - The second.

'millisecond' or 'ms' - The millisecond.

 

Examples:

DateDiff([Order Date], [Delivery Date])

DateDiff('day', [Order Date], [Delivery Date])

DatePart(Arg1, Arg2)

Returns a specified part of a Date, Time or DateTime. Arg1 is a string describing which part of the date to get and Arg2 is the Date, Time or DateTime column.

 

Valid arguments for Arg1 are:

'year' or 'yy' - The year.

'quarter' or 'qq' - The quarter.

'month' or 'mm' - The month.

'day of year' or 'dy' - The day of year.

'day' or 'dd' - The day.

'year and week' or 'yywk' - The year and week.

'week' or 'wk'  - The week.

'day of week' or 'dw' - The weekday.

'hour' or 'hh' - The hour.

'minute' or 'mi' - The minute.

'second' or 'ss' - The second.

'millisecond' or 'ms' - The millisecond.

 

Example:

DatePart('year', [Date Column])

DateTimeNow()

Returns the current system time.

 

Example:

DateTimeNow()

Day(Arg1)

Extracts the day of month from a Date or DateTime column. The result is an integer between 1 and 31.

 

Example:
Day([Date Column])      

DayOfMonth(Arg1)

Extracts the day of month from a Date or DateTime column. The result is an integer between 1 and 31.

 

Example:
DayOfMonth([Date Column])      

DayOfWeek(Arg1)

Extracts the day of week from a Date or DateTime column. The underlying data of the new column is an integer between 0 and 6, but regional settings determine the start of week as well as the formatted output.

 

Example:

DayOfWeek([Date Column])

DayOfYear(Arg1)

Extracts the day of year for a Date or DateTime column. Returns an integer between 1 and 366.

 

Example:

DayOfYear([Date Column])

Days(Arg1)

Returns the number of days for a TimeSpan as an integer between -10675199 and 10675199.

 

Example:

Days([TimeSpan Column])

Hour(Arg1)

Extracts the hour from a DateTime or Time column. Returns an integer between 0 and 23.

 

Example:

Hour([Time Column])

Hours(Arg1)

Returns the number of hours for a TimeSpan as an integer between 0 and 23.

 

Example:

Hours([TimeSpan Column])

Millisecond(Arg1)

Extracts the millisecond from a DateTime or Time column. Returns an integer between 0 and 999.

 

Example:

Millisecond([Time Column])

Milliseconds(Arg1)

Returns the number of milliseconds for a TimeSpan as a real value between 0.0 and 999.0.

Minute(Arg1)

Extracts the minute from a DateTime or Time column. Returns an integer between 0 and 59.

 

Example:
Minute([Time Column])

Minutes(Arg1)

Returns the number of minutes for a TimeSpan as an integer between 0 and 59.

 

Example:

Minutes([TimeSpan Column])

Month(Arg1)

Extracts the month from a Date or DateTime column. The underlying data of the new column is an integer between 1 and 12, but regional settings determine the formatted output.

 

Example:
Month([Date Column])

Quarter(Arg1)

Extracts the quarter from a Date or DateTime column. The underlying data of the new column is an integer between 1 and 4, but regional settings determine the formatted output.

 

Example:

Quarter([Date Column])

Second(Arg1)

Extracts the second from a DateTime or Time column. Returns an integer between 0 and 59.

 

Example:
Second([Time Column])

Seconds(Arg1)

Returns the number of seconds for a TimeSpan as an integer between 0 and 59.

 

Example:

Seconds([TimeSpan Column])

TotalDays(Arg1)

Returns the number of days for a TimeSpan as a real value expressed in whole days and fractional days.

 

Example:

TotalDays([TimeSpan Column])

TotalHours(Arg1)

Returns the number of hours for a TimeSpan as a real value expressed in whole and fractional hours.

 

Example:

TotalHours([TimeSpan Column])

TotalMilliseconds(Arg1)

Returns the number of milliseconds for a TimeSpan as a real value expressed in whole and fractional milliseconds.

 

Example:

TotalMilliseconds([TimeSpan Column])

TotalMinutes(Arg1)

Returns the number of minutes for a TimeSpan as a real value expressed in whole and fractional minutes.

 

Example:

TotalMinutes([TimeSpan Column])

TotalSeconds(Arg1)

Returns the number of seconds for a TimeSpan as a real value expressed in whole and fractional seconds.

 

Example:

TotalSeconds([TimeSpan Column])

Week(Arg1)

Extracts the week from a Date or DateTime column as an integer between 1 and 54, where the first week of year is dependent on the regional settings.

 

Example:

Week([Date Column])

Year(Arg1)

Extracts the year from a Date or DateTime column. The result is of type Integer.

 

Example:
Year([Date Column])

YearAndWeek(Arg1)

Extracts the year and week from a Date or DateTime column. Returns an integer (Year*100 + Week number), for example, the date 2005-10-13 will return 200541.

 

Example:

YearAndWeek([Date Column])

See also:

Binning functions

Conversion functions

Logical functions

Math functions

OVER functions

Property functions

Ranking functions

Spatial functions

Statistical functions

Text functions

Logical Functions

Function

Description

Case

The case statement has two different forms.

 

Simple:

case Arg1 when Arg2 then Arg3 else Arg4 end

The Arg1 expression is evaluated and when Arg1 is equal to Arg2 then Arg3 is returned. Multiple when/then expressions can be entered and are evaluated in left to right order.

 

Searched:

case when Arg1 then Arg2 else Arg3 end

Returns Arg2 if Arg1=true, and Arg3 if Arg1=false. Multiple when/then expressions can be entered and are evaluated in left to right order.

 

Examples:

case when 1 < 2 then "a" when 1 < 3 then "b" else "c" end

case [Column] when 3 then "a" when 2 then "b" else "c" end

If(Arg1,Arg2,Arg3)

Returns Arg2 if Arg1=true, and Arg3 if Arg1=false. Arg1 is of type boolean, usually the result of a comparison. Arg2 and Arg3 can be of any type, but must both be of the same type or null.

 

Examples:
If([Count] > 3, "many", "few")
If(true, null, null)                        -> (Empty)
If(true, 1, null)                            -> 1
If(false, null, 2)                          -> 2
If(null, 1, 2)                                 -> (Empty)

If(1 < 2, "Small", "Big")             -> Small

If([Column] Is Null,"0","has value")

Is Not Null

Used within an If- or Case- statement, to determine whether or not an expression yields an empty value (null value).

 

Example:

If([Column] Is Not Null, "value was not null", "value was null")

 

If an expression contains empty values (null values), you can use the SN function to substitute the null values with the specified value.

Is Null

Used within an If- or Case- statement, to determine whether or not an expression yields an empty value (null value).

 

Example:

If([Column] Is Null, "value was null", "value was not null")

 

If an expression contains empty values (null values), you can use the SN function to substitute the null values with the specified value.

See also:

Binning functions

Conversion functions

Date and Time functions

Math functions

OVER functions

Property functions

Ranking functions

Spatial functions

Statistical functions

Text functions

Math Functions

Function

Description

Abs(Arg1)

Returns the absolute value of Arg1. The argument and the result are of type real.

ACos(Arg1)

Returns the arccosine of Arg1 as an angle expressed in radians in the interval [0, π]. Arg1 must be in the interval [-1.0, 1.0], otherwise #NA is returned. The argument and the result are of type real.

ASin(Arg1)

Returns the arcsine of Arg1 as an angle expressed in radians in the interval [-π/2, π/2]. Arg1 must be in the interval [-1.0, 1.0], otherwise #NA is returned. The argument and the result are of type real.

ATan(Arg1)

Returns the arctangent of Arg1 as an angle expressed in radians in the interval [-π/2, π/2]. The argument and the result are of type real.

Ceiling(Arg1)

Rounds Arg1 up to the nearest natural number. The argument and the result are of type real.

 

Examples:  
Ceiling(1.01)      -> 2.0
Ceiling(-1.99)     -> -1.0

Cos(Arg1)

Returns the cosine of Arg1 where Arg1 is an angle expressed in radians. The argument and the result are of type real.

Exp(Arg1)

Returns e (2.718281...) raised to the Arg1 power. The argument and the result are of type real.

Floor(Arg1)

Rounds Arg1 down to the nearest natural number. The argument and the result are of type real.

 

Examples:
Floor(1.99)      -> 1.0
Floor(-1.01)     -> -2.0

Ln(Arg1)

Returns the natural logarithm of Arg1. The arguments and the result are of type real. If Arg1 is negative, the result is a #NA error. If Arg1 is zero, the result is also #NA.

Log(Arg1, Arg2)

Returns the logarithm of Arg1 expressed in the base specified by Arg2. Equivalent to Ln(Arg1)/Ln(Arg2). The arguments and the result are of type real. See function Ln for the definition of valid arguments.

Log10(Arg1)

Returns the 10-based logarithm of Arg1. Equivalent to Ln(Arg1)/Ln(10). The arguments and the result are of type real. See function Ln for the definition of valid arguments.
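
For example, these identities follow from the definitions above (illustrative values):

Log(8, 2)      -> 3

Log10(1000)    -> 3

Ln(Exp(2))     -> 2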

Mod(Arg1, Arg2)

Returns the remainder of the division of Arg1 by Arg2. The arguments and the result are of type real. If Arg2 is 0, the result is a #NA error.

Mod(Arg1, Arg2) is defined as:
Arg1 - Arg2*Floor(Arg1/Arg2)
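
Applying this definition gives, for example:

Mod(7, 3)    -> 1

Mod(-1, 3)   -> 2

Mod(7.5, 2)  -> 1.5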

PI()

Returns the numerical constant π.

The result is of type real.

Power(Arg1, Arg2)

Returns Arg1 raised to the Arg2 power. The arguments and the result are of type real.

 

Examples:
Power(10, 3)    -> 1000
Power(10, -3)   -> 0.001
Power(0, 0)       -> 1

Rand(Arg1)

Returns a random real number between 0.0 and 1.0.

The integer argument is a constant seed value that is used to initialize the random number generator. It also ensures that the same values are generated if the column is recalculated.

The seed value cannot be a column reference.

 

Example:

Rand(147)

RandBetween(Arg1, Arg2, Arg3)

Returns a random integer number within the specified range.

The first and the second arguments set the range for the random numbers. These arguments can be constant values or integer column references.

The third argument is a constant seed value that is used to initialize the random number generator. It also ensures that the same values are generated if the column is recalculated.

The seed value cannot be a column reference.

Examples:

RandBetween(100, -100, 147)

RandBetween(0, [Column 1], 147)

RandBetween([Column 1], [Column 2], 37)

Product(Arg1, ...)

Product is available under Statistical Functions.

Returns the product of the arguments. The arguments and the result are of type real. Null arguments are ignored and do not contribute to the product.

 

Examples:
Product(-1)                 -> -1
Product(1.5, -2, 3)     -> -9
Product(1, null, 3)      -> 3
Product(null)              -> (Empty)

Round(Arg1, Arg2)

Rounds Arg1 to the number of decimal places specified by Arg2. The arguments and the result are of type real, but for Arg2 only the integer part is used. Note that Arg2 can be negative, to round to whole tens, hundreds, etc. A value of exactly 0.5 is rounded away from zero, to the number with the larger magnitude.

 

Examples:
Round(PI(), 3)      -> 3.142
Round(-0.5, 0)     -> -1
Round(25, -1)      -> 30

Sin(Arg1)

Returns the sine of Arg1 where Arg1 is an angle expressed in radians. The argument and the result are of type real.

Sqrt(Arg1)

Returns the square root of Arg1. The argument and the result are of type real. If Arg1 is negative, the result is a #NA error.

Sum(Arg1, ...)

Sum is available under Statistical Functions.

Returns the sum of the arguments. Null arguments are ignored and do not contribute to the sum.

Examples:
Sum(-1)                -> -1
Sum (1.5, -2, 3)   -> 2.5
Sum (1, null, 3)   -> 4
Sum (null)            -> (Empty)

Tan(Arg1)

Returns the tangent of Arg1 where Arg1 is an angle expressed in radians. The argument and the result are of type real.

See also:

Binning functions

Conversion functions

Date and Time functions

Logical functions

OVER functions

Property functions

Ranking functions

Spatial functions

Statistical functions

Text functions

OVER Functions

The OVER functions are used to determine how data should be sliced, for example, relative to time periods. For more information, see OVER in Custom Expressions and Advanced Custom Expressions.
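
For example, the navigation methods described below can be combined in a single expression; a cumulative percentage of the total could be written as follows (a sketch, assuming a [Sales] column):

Sum([Sales]) OVER (AllPrevious([Axis.X])) / Sum([Sales]) OVER (All([Axis.X])) * 100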

Option

Description

All

Uses all the nodes in the referenced hierarchy. This can be useful when intersecting the current node with more than one hierarchy. For example, you can show the relative sales of different product categories for each month.

 

Examples:

Sum([Sales]) / Sum([Sales]) OVER (Intersect(All([Axis.Color]), [Axis.X]))

Sum([Sales]) / Sum([Sales]) OVER (All([Axis.X])) * 100

AllNext

Uses all nodes, including the current, to the end of the level.

 

Example:

Sum([Sales]) OVER (AllNext([Axis.X]))

AllPrevious

Uses all nodes, including the current, from the start of the level. This can be used to calculate the cumulative sum.

 

Examples:

Sum([Sales]) OVER (AllPrevious([Axis.X]))

Sum([Sales]) OVER (Intersect(Parent([Axis.X]), AllPrevious([Axis.X])))

Intersect

Returns the intersected rows from nodes in different hierarchies. See also AllPrevious and All.

 

Example:

Intersect(Parent([Axis.X]), All([Axis.Color]), Parent([Axis.Rows]), ...)

LastPeriods

Includes the current node and the n - 1 previous nodes. This can be used to calculate moving averages.

 

Example:

Sum([Sales]) OVER (LastPeriods(3, [Axis.X]))/3

Next

Compares the current node with the next node on the same level in the hierarchy. If there is no next node, that is, if the current node is the last node for the current level, the resulting subset will not contain any rows.

 

Example:

Sum([Sales]) - Sum([Sales]) OVER (Next([Axis.X]))

NextPeriod

Uses the next node which has the next value on the same level as the current node. If there is no next node, that is, if the current node is the last node for the current level, the resulting subset will not contain any rows.

 

Example:

Sum([Sales]) OVER (NextPeriod([Axis.X]))

ParallelPeriod

Uses the previous parallel node with the same value on the same level as the current node. For example, this can be used to compare sales results for each month with the corresponding months the previous year.

 

Example:

Sum([Sales])-Sum([Sales]) OVER (ParallelPeriod([Axis.X]))

Parent

Uses the parent subset of the current node. If the node does not have a parent, all rows are used as the subset.

 

Examples:

Sum([Sales]) / Sum([Sales]) OVER (Parent([Axis.Color]))

Sum([Sales]) / Sum([Sales]) OVER (Parent([Axis.X])) * 100

Previous

Uses the previous node on the same level as the current node to compare the result of the current node with the previous one. If there is no previous node, that is, if the current node is the first node for the current level, the resulting subset will not contain any rows.

 

Example:

Sum([Sales]) - Sum([Sales]) OVER (Previous([Axis.X]))

PreviousPeriod

Uses the previous node which has the previous value on the same level as the current node. If there is no previous node, that is, if the current node is the first node for the current level, the resulting subset will not contain any rows.

 

Example:

Sum([Sales]) OVER (PreviousPeriod([Axis.X]))

See also:

Binning functions

Conversion functions

Date and Time functions

Logical functions

Math functions

Property functions

Ranking functions

Spatial functions

Statistical functions

Text functions

Property Functions

Function

Description

$csearch

Selects a number of columns from a data table using a limiting search expression. The first argument is a data table and the second argument is a string that contains the search expression determining which column names should be returned. The function returns a list of the (unescaped) column names from the data table that match the search expression.

Examples:

$csearch([Data Table],"*")

-> Returns a list of all column names in the data table called Data Table.

$csearch([Data Table], "Col*")

-> Returns a list of the column names in the data table Data Table beginning with "Col", e.g., Column 1, Column 2, etc.

$esc

Replaces "]" in column names with "]]" and encloses the escaped column names in "[" and "]". The argument is a property value or a property function that starts with a dollar sign ($). See Properties in Expressions for more information.

Examples:

$esc(${PropertyName})

-> Returns the property value as a column name (within [ and ]).

$esc($csearch([Data Table], "Col*"))

-> Returns a list of all columns in the data table Data Table beginning with "Col", e.g., [Column 1], [Column 2], etc.

$map

Maps a list-valued property to a single string. The first argument is a template to use for each value in the list and the second argument is a specification of how the list values should be connected in the resulting expression. See Properties in Expressions for more information.

Examples:

$map("sum([${PropertyName}])", ",")

-> Returns a comma separated list of the sum of the columns included in the list-valued property, e.g., sum([Column 1]),sum([Column 2])

<$map("[${PropertyName}]", " NEST")>

-> Returns a nested categorical hierarchy using the columns included in the list-valued property, e.g., <[Column 1] NEST[Column 2]>

BaseRowID

Returns a unique identifier for each calculated row in the visualization. This identifier is selected from identifiers calculated on the Data Table.  This value may change when filtering or marking is performed.

Example:

BaseRowId()

ColumnProperty

The first argument is a column and the second argument is the column property name, presented as a string. Returns the value of the named column property from the column. The column property value cannot be a list and the column property has to exist before creating the expression.

Custom column properties can be specified using Edit > Column Properties, Properties tab, New-button.

Example:

ColumnProperty([Column], "Description")

DataTableProperty

Returns the value of the data table property. The argument to the method is the name of the data table property, presented as a string.

Example:

DataTableProperty("Table.CreationDate")

DocumentProperty

Returns the value of the document property. Custom document properties can be specified under Edit > Document Properties, Properties tab. Document properties can be used throughout the entire document.

The argument to the method is the name of the document property, presented as a string.

Example:

DocumentProperty("Extension.NumberOfBins")

RowID

Returns a unique identifier for each calculated row in the visualization.  This identifier will not change when filtering or marking is performed.

Example:

RowId()

See also:

Binning functions

Conversion functions

Date and Time functions

Logical functions

Math functions

OVER functions

Ranking functions

Spatial functions

Statistical functions

Text functions

Ranking Functions

Function

Description

DenseRank(Arg1, Arg2, Arg3...)

Returns an integer value ranking of the values in the selected column. The first argument is the column to be ranked.

An optional argument is a string determining whether to use an ascending (default) or a descending ranking. For the highest value to retrieve rank 1, use the argument "desc", for the lowest value to retrieve rank 1, use "asc".

Ties are given the same rank value and the highest ranking number equals the number of unique values in the column.

Additional column arguments (optional) can be used when the column should be split into separately ranked categories.

Examples:

DenseRank([Sales])

DenseRank([Sales], "desc", [Region])

Rank(Arg1, Arg2, Arg3...)

Returns an integer value ranking of the values in the selected column. The first argument is the column to be ranked.

An optional argument is a string determining whether to use an ascending (default) or a descending ranking. For the highest value to retrieve rank 1, use the argument "desc", for the lowest value to retrieve rank 1, use "asc".

Ties are given rank values depending on optional argument values:

"ties.method=minimum" (default),

"ties.method=maximum", or

"ties.method=first".

See More about ranking ties below for more information about the available arguments.

Additional column arguments (optional) can be used when the column should be split into separately ranked categories.

Examples:

Rank([Sales])

Rank([Sales], "desc", [Region])

Rank([Sales], "ties.method=first")

RankReal(Arg1, Arg2, Arg3...)

Returns a real value ranking of the values in the selected column. The first argument is the column to be ranked.

An optional argument is a string determining whether to use an ascending (default) or a descending ranking. For the highest value to retrieve rank 1, use the argument "desc", for the lowest value to retrieve rank 1, use "asc".

Ties are given rank values depending on optional argument values:

"ties.method=minimum" (default),

"ties.method=maximum",

"ties.method=first", or

"ties.method=average".

See More about ranking ties below for more information about the available arguments. The average ties method is used when calculating data relationships using Spearman R.

Additional column arguments (optional) can be used when the column should be split into separately ranked categories.

 

Examples:

RankReal([Sales])

RankReal([Sales], "desc", [Region])

RankReal([Sales], "ties.method=average")

More about ranking ties:

With the functions Rank and RankReal, you can add an optional ties method argument depending on how you want equal values to be ranked:

Argument

Description

"ties.method=minimum"

Gives all ties the smallest rank value of the tie values.

"ties.method=maximum"

Gives all ties the largest rank value of the tie values.

"ties.method=first"

Gives the first found tie value the lowest rank value, and continues with the following rank value for the next tie.

"ties.method=average"

Gives all ties the average of the rank values for all ties.

 

Example:

When a list is ranked, its values are first sorted. The sorted values are then assigned rank values according to their order in the sorted list. The rank given to a tie value depends on the ties method. Empty values are left empty and do not receive any rank.

Each value in the list below is shown together with the rank it receives with each ties method (minimum / maximum / first / average):

1          ->  1 / 1 / 1 / 1

2          ->  2 / 3 / 2 / 2.5

3          ->  4 / 4 / 4 / 4

2          ->  2 / 3 / 3 / 2.5

(Empty)    ->  (Empty)

5          ->  5 / 5 / 5 / 5

If DenseRank was used, the values in the example would receive the ranks 1, 2, 3, 2, (Empty) and 4.

See also:

Binning functions

Conversion functions

Date and Time functions

Logical functions

Math functions

OVER functions

Property functions

Spatial functions

Statistical functions

Text functions

Spatial Functions

The spatial functions are used to transform data so that it can be used to set up map charts in TIBCO Spotfire. If the map information is included in a Shape file, none of this is necessary. However, if you have geographic information in some other type of BLOB column containing WKB (Well-Known Binary) data, then this information needs to be extracted into seven different columns: Geometry, XMax, XMin, YMax, YMin, XCenter and YCenter. The Geometry column is the original, binary column.

The bounding box for a geometry is called the envelope. It is specified by the four coordinates XMax, XMin, YMax and YMin, while XCenter and YCenter specify the center of the geometry. These coordinate columns can be calculated from the binary WKB column using the spatial functions with the binary WKB column as an argument. In order for the map chart to identify these columns, they must also have the required property values (same as the column names listed above) set on the mapchart.columntypeid property. This is done automatically when the spatial functions below are applied.

See also Configuration of Geographical Data for Map Charts.
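
As a sketch, assuming the binary WKB column is named [Geometry], the six coordinate columns can be calculated as follows:

WKBEnvelopeXMin([Geometry])

WKBEnvelopeXMax([Geometry])

WKBEnvelopeXCenter([Geometry])

WKBEnvelopeYMin([Geometry])

WKBEnvelopeYMax([Geometry])

WKBEnvelopeYCenter([Geometry])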

Function

Description

WKBEnvelopeXCenter(Arg1)

Calculates the X center of the geometry envelope and sets the XCenter value on the mapchart.columntypeid property. The argument is a binary WKB column.

 

Example:

WKBEnvelopeXCenter([WKB])

WKBEnvelopeXMin(Arg1)

Calculates the X min of the geometry envelope and sets the XMin value on the mapchart.columntypeid property. The argument is a binary WKB column.

 

Example:

WKBEnvelopeXMin([WKB])

WKBEnvelopeXMax(Arg1)

Calculates the X max of the geometry envelope and sets the XMax value on the mapchart.columntypeid property. The argument is a binary WKB column.

 

Example:

WKBEnvelopeXMax([WKB])

WKBEnvelopeYCenter(Arg1)

Calculates the Y center of the geometry envelope and sets the YCenter value on the mapchart.columntypeid property. The argument is a binary WKB column.

 

Example:

WKBEnvelopeYCenter([WKB])

WKBEnvelopeYMin(Arg1)

Calculates the Y min of the geometry envelope and sets the YMin value on the mapchart.columntypeid property. The argument is a binary WKB column.

 

Example:

WKBEnvelopeYMin([WKB])

WKBEnvelopeYMax(Arg1)

Calculates the Y max of the geometry envelope and sets the YMax value on the mapchart.columntypeid property. The argument is a binary WKB column.

 

Example:

WKBEnvelopeYMax([WKB])

See also:

Binning functions

Conversion functions

Date and Time functions

Logical functions

Math functions

OVER functions

Property functions

Ranking functions

Statistical functions

Text functions

Statistical Functions

Function

Description

Avg(Arg1, ...)

Returns the average (arithmetic mean) of the arguments. The arguments and the result are of type real. If one argument is given, then the result is the average of all rows. If more than one argument is given, then the result is the average for each row. Null arguments are ignored and do not contribute to the average.

Examples:

Avg([Column])

Avg(2,-3,4)            -> 1

Avg(-1)                   -> -1
Avg(1.5, -2, 3.5)   -> 1
Avg(1, null, 3)       -> 2
Avg(null)               -> (Empty)

ChiDist(Arg1)

Returns the (upper tail) chi-square p-value of the argument.

Example:

ChiDist(x, deg_freedom)

ChiDist(7.377759, 2) =0.025

ChiInv(Arg1)

Returns the (upper tail) chi-square quantile value of the argument.

Example:

ChiInv(p, deg_freedom)

ChiInv(0.025, 2) =7.377759

Count(Arg1)

 

Calculates the number of non-empty values in the argument column, or, if no argument is specified, the total number of rows.

Example:

Count([Column])

Covariance(Arg1, Arg2)

Calculates the covariance of two columns given as arguments.

Example:

Covariance([Column1], [Column2])

FDist(Arg1)

Returns the upper tail F p-value of the argument.

Example:

FDist(x, deg_freedom1, deg_freedom2)

FDist(6.936728, 1, 10) =0.025

FInv(Arg1)

Returns the upper tail F quantile value of the argument.

Example:

FInv(p, deg_freedom1, deg_freedom2)

FInv(0.025, 1, 10) =6.936728

First(Arg1)

Returns the first valid value based on the physical order of the rows of data in the argument column.

Example:

First([Column])

GeometricMean()

Calculates the geometric mean value. If any input value is negative then the result will be "Empty". If any input value is equal to zero then the result will be zero.

Example:

GeometricMean([Sales])

IQR(Arg1)

Calculates the value difference Q3 - Q1, that is, the 75th percentile minus the 25th percentile. IQR is also referred to as the H-spread.

Example:

IQR([Column])

L95(Arg1)

Calculates the lower endpoint of the 95% confidence interval.

Example:

L95([Column])

Last(Arg1)

Returns the last valid value based on the physical order of the rows of data in the argument column.

Example:

Last([Column])

LAV(Arg1)

Calculates the lower adjacent value.

Example:

LAV([Column])

LIF(Arg1)

Calculates the lower inner fence. This is the threshold located at Q1 - (1.5*IQR).

Example:

LIF([Column])

LOF(Arg1)

Calculates the lower outer fence. This is the threshold located at Q1 - (3*IQR).

Example:

LOF([Column])

Max(Arg1, ...)

Calculates the maximum value. If one argument is given, then the result is the maximum for the entire column. If more than one argument is given, then the result is the maximum for each row. The argument and the result are of type real. Null arguments are ignored.

Examples:

Max([Column])

Max(-1)                 -> -1
Max (1.5, -2, 3)    -> 3
Max (1, null, 3)     -> 3
Max (null)              -> (Empty)

MeanDeviation(Arg1, ...)

Calculates the mean deviation value (average absolute deviation, AAD). If one argument is given, then the result is the mean deviation of all rows. If more than one argument is given, then the result is the mean deviation for each row.

Examples:

MeanDeviation([Column])

MeanDeviation(2,-3,4)         -> 2.67

Median(Arg1)

Calculates the median of the argument. If one argument is given, then the result is the median of all rows. If more than one argument is given, then the result is the median for each row.

Examples:

Median([Column])

Median(2,-3,4)

MedianAbsoluteDeviation(Arg1, ...)

Calculates the median absolute deviation value (MAD). If one argument is given, then the result is the median absolute deviation of all rows. If more than one argument is given, then the result is the median absolute deviation for each row.

Examples:

MedianAbsoluteDeviation([Sales])

MedianAbsoluteDeviation(2,-3,4)

Min(Arg1, ...)

Calculates the minimum value. If one argument is given, then the result is the minimum for the entire column. If more than one argument is given, then the result is the minimum for each row. The argument and the result are of type real. Null arguments are ignored.

Examples:

Min([Column])

Min(-1)                 -> -1
Min (1.5, -2, 3)    -> -2
Min (1, null, 3)    -> 1
Min (null)             -> (Empty)

NormDist(Arg1)

Returns the (upper tail) normal p-value of the argument. If you do not specify them yourself, the default is mean=0 and standard deviation=1.

Example:

NormDist(x, mean, standard_dev)

NormDist(1.96) =0.025

NormInv(Arg1)

Returns the (upper tail) normal quantile value of the argument. If you do not specify them yourself, the default is mean=0 and standard deviation=1.

Example:

NormInv(p, mean, standard_dev)

NormInv(0.025) =1.96

Outliers(Arg1)

Outer value count. Calculates the count of values that are greater than the upper adjacent value or lower than the lower adjacent value.

Example:

Outliers([Column])

P10(Arg1)

The 10th percentile is the value at which 10 percent of the data values are equal to or lower than the value.

Example:

P10([Column])

P90(Arg1)

The 90th percentile is the value at which 90 percent of the data values are equal to or lower than the value.

Example:

P90([Column])

PctOutliers(Arg1)

Outer value percentile. Calculates the percent of values that are greater than the upper adjacent value or lower than the lower adjacent value.

Example:

PctOutliers([Column])

Percentile(Arg1, Arg2)

The percentile is the value at which a certain percent of the data values are equal to or lower than the value. The first argument is the column to analyze and the second argument is the percent.

Example:

Percentile([Column], 15.0)

Product(Arg1, ...)

Calculates the product of the values. If one argument is given, then the result is the product of the entire column. If more than one column is given, then the result is the product of each row.

Example:

Product([Column])

Product(1,2,3)

Q1(Arg1)

Calculates the first quartile.

Example:

Q1([Column])

Q3(Arg1)

Calculates the third quartile.

Example:

Q3([Column])

Range(Arg1)

The range between the largest and the smallest value in the column.

The result will be presented as a real or a timespan, depending on the data type of the argument.

Example:

Range([Column])

StdDev(Arg1)

Calculates the standard deviation.

Example:

StdDev([Column])

StdErr(Arg1)

Calculates the standard error.

Example:

StdErr([Column])

Sum(Arg1, ...)

Calculates the sum of the values. If one argument is given, then the result is the sum of the entire column. If more than one column is given, then the result is the sum of each row.

Examples:
Sum(-1)                -> -1
Sum (1.5, -2, 3)   -> 2.5
Sum (1, null, 3)   -> 4
Sum (null)            -> (Empty)

TDist(Arg1)

Returns the (upper tail) t p-value of the argument.

Example:

TDist(x, deg_freedom)

TDist(4.302653, 2) =0.025

TInv(Arg1)

Returns the (upper tail) t quantile value of the argument.

Example:

TInv(p, deg_freedom)

TInv(0.025, 2) =4.302653

TrimmedMean(Arg1, Arg2)

Calculates the trimmed mean value (trimmed average). The first argument is the column to analyze and the second argument is, in percent, how many values to exclude from the calculation. If the trim value is set to 10%, then the highest 5%  and the lowest 5% of the values are excluded from the calculated mean.

Example:

TrimmedMean([Sales], 10)

U95(Arg1)

Calculates the upper endpoint of the 95% confidence interval.

Example:

U95([Column])

UAV(Arg1)

Calculates the upper adjacent value.

Example:

UAV([Column])

UIF(Arg1)

Calculates the upper, inner fence. This is the threshold located at Q3 + (1.5*IQR).

Example:

UIF([Column])

UniqueCount(Arg1)

Calculates the number of unique, non-empty values in the argument column.

Example:

UniqueCount([Column])

UOF(Arg1)

Calculates the upper, outer fence. This is the threshold located at Q3 + (3*IQR).

Example:

UOF([Column])

ValueForMax(Arg1, Arg2)

Returns the value of column 2 for the maximum value of column 1.

If the maximum value of column 1 occurs on more than one row, the result will be the value for the first of those rows.

Example:

ValueForMax([Column 1], [Column 2])

ValueForMin(Arg1, Arg2)

Returns the value of column 2 for the minimum value of column 1.

If the minimum value of column 1 occurs on more than one row, the result will be the value for the first of those rows.

Example:

ValueForMin([Column 1], [Column 2])

Var(Arg1)

Calculates the variance.

Example:

Var([Column])

WeightedAverage(Arg1, Arg2)

Calculates the weighted average of two columns. Arg1 is the weight column and Arg2 is the value column.

Example:

WeightedAverage([Column1],[Column2])
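
For instance, assuming the conventional definition Sum(weight * value) / Sum(weight), weights (1, 3) paired with values (10, 20) give (1*10 + 3*20) / (1 + 3) = 17.5.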

See also:

Binning functions

Conversion functions

Date and Time functions

Logical functions

Math functions

OVER functions

Property functions

Spatial functions

Ranking functions

Text functions

Text Functions

Function

Description

~=

Can be part of an 'If' or 'Case' statement. Returns true if the Arg2 regular expression string matches the Arg1 string.

 

Examples:

If( "aab" ~= "a+" , "true", "false" )

Case when "aba" ~= ".a+$" then "true" else "false" end

Concatenate(Arg1, ...)

Concatenates (appends) all the arguments into a string. If one argument is given, then the result is the concatenation of all rows. If more than one argument is given, then each row is concatenated. The arguments can be of any type, but are converted to strings. The result is of type string. Null arguments are ignored.

 

Examples:
Concatenate("April ", 20+1, "st")       -> "April 21st"
Concatenate(null, "Ape")                   -> "Ape"
Concatenate (null, null)                     -> (Empty)

Find(Arg1, Arg2)

Returns the 1-based index of the first occurrence of the string Arg1 in Arg2. If not found, 0 is returned. The search is case-sensitive. The arguments are of type string and the result is of type integer. If Arg1 is the empty string, 0 is returned.

 

Examples:
Find("lo", "Hello")   -> 4
Find("a", "Hello")    -> 0
Find("", "Hello")      -> 0
Find("", null)            -> (Empty)

If(Find("Pri 1", [Col1])>0, "Important", "Not important")

Left(Arg1, Arg2)

Returns the first Arg2 characters of the string Arg1. Arg1 and the result are of type string. Arg2 is of type real, but only the integer part is used. If Arg2 > the length of Arg1, the whole string is returned. If Arg2 is negative, an error is returned.

 

Examples:
Left("Daddy", 3.99)     -> "Dad"
Left("Daddy", 386)      -> "Daddy"
Left("Daddy", -1)         -> (Empty)

Len(Arg1)

Returns the length of Arg1. Arg1 is of type string and the result is of type integer.

 

Examples:
Len("Hello")      -> 5
Len(null)            -> (Empty)

Lower(Arg1)

Returns Arg1 converted to lowercase. Arg1 and the result are of type string.

Mid(Arg1, Arg2, Arg3)

Returns the substring of Arg1 starting at index Arg2 with a length of Arg3 characters. Arg1 and the result are of type string. Arg2 and Arg3 are of type real, but only the integer part is used. If Arg2 > Len(Arg1), an empty string is returned. Else, if Arg2+Arg3 > Len(Arg1), Arg3 is adjusted to 1+Len(Arg1)-Arg2. If either of Arg2 or Arg3 is negative or if Arg2 is zero, an error is returned.

 

Examples:
Mid("Daddy", 2, 3)            -> "add"
Mid("Daddy", 386, 4)       -> ""
Mid("Daddy", 4, 386)       -> "dy"
Mid("Daddy", -1, 2)          -> (Empty)

Mid("Daddy", 2, -1)          -> (Empty)

MostCommon(Arg1)

Returns the most common value of the specified column. If several values are equally common, the first one will be used.

 

Example:

MostCommon([Column])
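
As a worked illustration with hypothetical data: if [Column] contains the values "a", "b", "b", then MostCommon([Column]) returns "b".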

NameDecode(Arg1)

Replaces all substring codes with decoded characters.

Column names in TIBCO Spotfire are stored as UTF-16 encoded strings, while variable names in TIBCO Spotfire Statistics Services are built from 8-bit ASCII characters matching [.0-9a-zA-Z] or ASCII strings enclosed in grave accents. Thus, the column names that are sent to TIBCO Spotfire Statistics Services must be encoded. Column names received from TIBCO Spotfire Statistics Services are automatically decoded by the built-in data functions output handlers. This function can be used to decode results that have not been automatically decoded.

 

Example:

NameDecode("Column %02D")

NameEncode(Arg1)

Encodes characters in the string so that the string only contains characters matching the regular expression [.0-9a-zA-Z].

 

Column names in TIBCO Spotfire are stored as UTF-16 encoded strings, while variable names in TIBCO Spotfire Statistics Services are built from 8-bit ASCII characters matching [.0-9a-zA-Z]. Thus, the column names that are sent to TIBCO Spotfire Statistics Services must be encoded. This is done automatically when sending data to TIBCO Spotfire Statistics Services via the built-in data functions input handlers. If you need to provide column name input by some other means (e.g., via a document property) you may need to use this function to encode the column names before applying the data function.

 

Example:

NameEncode("Column £")

Repeat(Arg1, Arg2)

Repeats a string a specified number of times.

 

Example:

Repeat("Hello", 2)        -> "HelloHello"

Right(Arg1, Arg2)

Returns the last Arg2 characters of the string Arg1. Arg1 and the result are of type string. Arg2 is of type real, but only the integer part is used. If Arg2 > the length of Arg1, the whole string is returned. If Arg2 is negative, an error is returned.

 

Examples:
Right("Daddy", 3.99)     -> "ddy"
Right("Daddy", 386)      -> "Daddy"
Right("Daddy", -1)         ->  (Empty)

RXReplace(Arg1, Arg2, Arg3, Arg4)

 

Replaces a substring according to a regular expression. Searches for the regular expression Arg2 in Arg1 and replaces matches with Arg3.

Arg4 specifies the options for the replacement:

"g" specifies that if Arg2 matches more than once then all matches should be substituted.

"i" specifies that the comparison should be case insensitive.

"s", for single-line mode, specifies that the dot (.) matches every character (instead of every character except newline).

 

Some characters, such as the backslash character "\", need to be escaped to work when using calculated columns. See literature about regular expression language elements, for example on MSDN, for more information.

 

Example:

RXReplace("Hello", "L+", "LL", "i")            -> "HeLLo"

RXReplace("3 Minor", "(\\d).*", "$1", "")    -> 3

RXReplace("change\\slashdirection","\\\\","/","")                 -> change/slashdirection

 

(In the last example, the backslash needs to be escaped twice; once for the Spotfire string and once for the regular expression.)

Substitute(Arg1, Arg2, Arg3)

 

Replaces all occurrences of Arg2 in Arg1 with Arg3. The search is case sensitive.

 

Example:

Substitute("Test","t","ting")                -> "Testing"

Trim(Arg1)

Removes whitespace characters from the beginning and end of a string.

 

Example:

Trim(" Example ")                               ->"Example"

UniqueConcatenate(Arg1)

 

Concatenates the unique values converted to strings. The values are ordered according to the comparator.

 

Example:

UniqueConcatenate([Column])
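
As a worked illustration with hypothetical data: if [Column] contains the values "b", "a", "b", the result contains the two unique values in sorted order, for example "a, b" (assuming a comma-space separator).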

Upper(Arg1)

Returns Arg1 converted to uppercase. Arg1 and the result are of type string.

 

Example:

Upper("hello")      ->"HELLO"

See also:

Binning functions

Conversion functions

Date and Time functions

Logical functions

Math functions

OVER functions

Property functions

Ranking functions

Spatial functions

Statistical functions

Invalid Values

An expression is considered valid if it is syntactically correct and all function, operator and column references can be resolved. If an expression is not valid, it cannot be evaluated. This is indicated in the Sample result field of the Insert Calculated Column dialog as "#Error", (Empty), or similar. When generating a result data table from the expression, errors are converted to null. Wrap the expression in a call to the SN(Arg1, Arg2) function to override this behavior; SN(Arg1, Arg2) substitutes null with a specified value, for example, 0.
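
As a minimal sketch with a hypothetical column [Revenue]: the expression SN([Revenue], 0) returns the value of [Revenue] on rows where it exists, and 0 on rows where the value is null.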

Empty values are generated whenever a column value from the data table is missing, when a calculation involves an invalid value, or by explicitly writing null in the expression. Results that are null are displayed as "(Empty)" or simply left blank.

When aggregating within a column, invalid values are ignored, whereas row-wise calculations between columns result in invalid values whenever one of the involved columns contains an invalid value.

Details on Formatting

This dialog lets you format values on the column level. If you change settings for a specific column or hierarchy in this dialog, the new settings will be used for that column or hierarchy everywhere in the analysis from then on.

For general information about formatting, see Formatting Overview.

  • To reach the Formatting dialog:

  1. Right-click on a filter in the filters panel and select Format Values... from the pop-up menu, or, in the Insert Calculated Column dialog, click on the Formatting button.

    Comment: The same functionality is also available in the Formatting tab of the Column Properties dialog (Edit > Column Properties).

Formatting.png

Option

Description

Category

Lists the available formatting categories for the selected column or hierarchy. Each category in this list has separate settings. What categories are available depends on the data type of the selected column. See Formatting Settings for a full description of all possible options.

Apply formatting from column

 

   Data table

Specifies the data table containing the column from which you want to apply formatting.

   Column

Lists all columns of the same type as the selected column, from which it is possible to reuse the formatting.

Apply Formatting

Applies the formatting from the column selected in the drop-down list.

See also:

Column Properties - Formatting

Formatting Overview

Axis Formatting

Format String

If the format you want to use cannot be created with the given settings, the custom format string allows you to create your own formats using the codes explained in the examples below.

The special characters allow you to multiply or divide values, group digits, and so on. Any other characters are written out literally in the result.

Custom Numeric Format Strings

Special characters:

Character

Description

0

Always returns a value for the position it is written in. If there is no number in its place in the data, 0 (zero) will be used.

#

Returns values if there are numbers in its place in the data.

If used to the left of the decimal point, all digits are returned, even if there are fewer # characters in the format string than digits in the data.

If used to the right of the decimal point, the same number of digits is returned as there are # characters to the right of the decimal point, and the number is rounded.

See example below.

,

If used before the decimal point, divides the number in the data by 1000.

Note: A difference from Excel is that Excel allows for "," as divider after the decimal point as well.

%

Multiplies the number by 100 and inserts a "%" in the number in the location it is written in the format string.

.

Decimal point.

Note: If no decimal point is used and the value you apply the format string on has decimals, the value is rounded.

;

Used to divide a format string if different formats are to be used for positive numbers, negative numbers and 0 (zero).

 

If no semicolon is used, the format string is used for all numbers.

 

If one semicolon is used, it divides the format string like this:

String for positive numbers and zero;String for negative numbers

 

If two semicolons are used, they divide the format string like this:

String for positive numbers;String for negative numbers;String for zero

\

If a "\" is added before a special character that character will not modify the number, the character will only be added to the value.

Examples:

Note: All these examples use the number 12345.67 as the value from the data.

Format string

Result

# ####

1 2346

#.#

12345.7

#.000

12345.670

#,.#

12.3

#,,.##

.01

#%

1234567%

#\%

12345.67%

$#

$12346

#.##E+0

1.23E+4

#.#;(#.#)

12345.7

Note: Had the number been negative, the result would be:

(12345.7)

23

23

See literature about custom numeric format strings, for example, on MSDN, for more information.

Custom DateTime Format Strings

Below are some examples of custom format strings for datetime formats. See literature about custom datetime format strings, such as that on MSDN, for more information.

Character

Description

yy

Returns the year, measured as a number between 0 and 99.

yyyy

Returns the year as a four-digit number.

M

Returns the month, measured as a number between 1 and 12, with one or two digits depending on the value.

MM

Returns the month with two digits, measured as a number between 1 and 12. This means that June will be written as '06', when this format string is applied.

MMM

Returns the abbreviated name of the month. For example, 'Jun'.

MMMM

Returns the full name of the month. For example, 'June'.

d

Returns the day of the month, measured as a number between 1 and 31, with one or two digits depending on the value.

dd

Returns the day of the month with two digits, measured as a number between 1 and 31. This means that the 6th of a month will be written as '06', when this format string is applied.

ddd

Returns the abbreviated name of the day of the week. For example, 'Fri'.

dddd

Returns the full name of the day of the week. For example, 'Friday'.

h

Returns the hour using a 12-hour clock, with one or two digits depending on the value.

hh

Returns the hour using a 12-hour clock, with two digits. This means that 6 o'clock will be written as '06', when this format string is applied.

H

Returns the hour using a 24-hour clock, with one or two digits depending on the value.

HH

Returns the hour using a 24-hour clock, with two digits. This means that 6 o'clock in the morning will be written as '06' and 6 o'clock in the evening will be written as '18', when this format string is applied.

m

Returns the minute with one or two digits, depending on the value.

mm

Returns the minute with two digits. This means that six minutes will be written as '06', when this format string is applied.

s

Returns the second with one or two digits, depending on the value.

ss

Returns the second with two digits. This means that six seconds will be written as '06', when this format string is applied.

f

Returns the tenths of a second.

ff

Returns the hundredths of a second.

fff

Returns the milliseconds.

tt

Returns the AM/PM designator.

:

Returns the time separator.

/

Returns the date separator.

You can also add any custom string value, but if any of the specifier characters are included in the string, they need to be escaped by a backslash (\).

Examples:

Note: All the examples below use the following value from the data: Friday, October 16, 2009, at 25 minutes past three in the afternoon.

Format string

Result

dd\t\h o\f MMMM yyyy

16th of October 2009

MMM d yyyy, HH:mm

Oct 16 2009, 15:25

\year: yy, \mon\t\h: MM, \da\y: dd

year: 09, month: 10, day: 16

hh:mm tt

03:25 PM

m \minu\te\s pa\s\t h, MMM d

25 minutes past 3, Oct 16

Custom TimeSpan Format Strings

There are five different data values included in the TimeSpan format: day, hour, minute, second and fractions of seconds. These can be combined to a suitable format using a format string built by the following specifier characters:

Character

Description

d

Returns the number of days.

h

Returns the number of hours with one or two digits, depending on the value.

hh

Returns the number of hours with two digits. This means that six hours will be written as '06', when this format string is applied.

m

Returns the number of minutes with one or two digits, depending on the value.

mm

Returns the number of minutes with two digits. This means that six minutes will be written as '06', when this format string is applied.

s

Returns the number of seconds with one or two digits, depending on the value.

ss

Returns the number of seconds with two digits. This means that six seconds will be written as '06', when this format string is applied.

f

Returns the fractions of seconds. You can also add a number between 1 and 3 after the 'f', defining how many decimals will be shown. If no number is specified, three digits are shown, if available.

Between each specifier character, you need to supply some kind of separator. This could be a custom string value, but if any of the specifier characters are included in the string, they need to be escaped by a backslash (\). You can also include an initial and a conclusive string.

Examples:

Note: All the examples below use the following value from the data:  -5 days, 7 hours, 11 minutes 3.1234 seconds.

Format string

Result

d.h:m:s.f

-5.7:11:3.123

d.hh:mm:ss.f2

-5.07:11:03.12

Ti\me\span i\s d \day\s

Timespan is -5 days

d \day\s h \hour\s m \minute\s s \secon\d\s

-5 days 7 hours 11 minutes 3 seconds

f s m h d

123 3 11 7 -5

See also:

Formatting Overview

Formatting Settings

Details on Formatting

Properties
Properties in Expressions

There are three types of properties available in TIBCO Spotfire: document properties, data table properties and column properties. All of these can be used in expressions that control one or more settings. The properties can be interpreted in two different ways: either as actual values, or as text that can be used in place of column names or measures.

Insert as Text

When you want to use a string property in an expression where it should be interpreted as a column name or a measure, you must use a specific syntax. The name of the property should be enclosed in "${" and "}". So, for a document property this could look something like: ${MyProperty}. For a data table property, the first part should be the data table name enclosed as above, followed by a period and the property name, also within curly brackets: ${My Data Table}.{MyProperty}. For a column property, the column name is also required: ${My Data Table}.{Column Name}.{MyProperty}.

Property call in expression

Description

${MyProperty}

Denotes the document property named MyProperty.

${My Data Table}.{MyProperty}                            

Denotes the data table property named MyProperty defined for the data table "My Data Table".

${My Data Table}.{Column Name}.{MyProperty}

Denotes the column property named MyProperty defined for the column "Column Name" in the data table "My Data Table".

$map("template", "concatenation string")

The $map() function is used to map list-valued properties to a single string. The first argument is a template to use for each value in the list and the second argument is a specification of how the list values should be connected in the resulting expression.

See List-Valued Properties (Multiple Select) below for more information.

Examples:

$map("sum([${PropertyName}])", ",")

<$map("[${PropertyName}]", " NEST")>

$esc(expression)

Replaces "]" in column names with "]]" and encloses the escaped column names in "[" and "]". The argument is an expression which could be a property value or a property function that starts with a dollar sign ($).

Examples:

$esc(${MyProperty})

$esc("$csearch([Data Table], "Col*")")

$csearch([Data Table], "search string")

The $csearch() function is used to select a number of columns from a data table using a limiting search expression. The first argument is a data table and the second argument is a string that contains the search expression determining which column names should be returned. The function returns a list of the (unescaped) column names from the data table that match the search expression.

Examples:

$csearch([Data Table],"*")

$csearch([Data Table], "Col*")

When the $ sign is encountered in an expression, the property will be replaced by its corresponding value before the actual expression is evaluated (a preprocessing step). This means that if you type an expression like: sum(${Property1}) -${Property1}-${Property2}, and the value of Property1 is the column name Column1, and the value of Property2 is Column2, then the resulting expression will be: sum([Column1]) -[Column1]-[Column2].

If you do not want the property value to be interpreted as a column name but as a literal string value, you should put the property call within quotation marks. For example, if you have a property called "MyProperty" with the value "MyValue", the expression ${MyProperty} would return the resulting expression MyValue, where a column called "MyValue" is retrieved. However, the expression "${MyProperty}" would return the text string "MyValue" instead. If the string value consists of several words (e.g., "My Value"), then you should use $esc() or put brackets around the property expression in order to return a column: [${MyProperty}]. See General Syntax for more information about column names.

Note that it is important to always use the correct syntax for the expression; otherwise, you may end up with a different result than expected. If a property is inserted as a column where the expression expected a string, the first value of the specified column will be retrieved. For string columns, the result may be a valid expression, but for a column of another data type, you may receive an error.

If a property is inserted using the Insert Properties button in the Insert Calculated Column dialog or in the Custom Expression dialog, the property will automatically be added using the text (preprocessor) syntax.

AvailablePropertiesforColumns.png

You can also right-click on the property in the Available properties for column field and select Insert as Text from the pop-up menu.

If a property containing a column name is to be used on an axis, there is a shortcut called Set from Property available from the pop-up menu on the column selectors. If any changes to the automatically added syntax are required, you must use the Custom Expression dialog to modify the expression. Right-click on the column selector and select Remove All if you want to remove the property expression from the axis.

A property can also be used to specify an aggregation measure. For example, you can define a property called MyMeasure with the default value "sum". If such a property is to be used in an expression you need to manually add parentheses and arguments for the measure in the expression where it is used. You can then create a property selector in a text area displaying a list of fixed aggregation measures that you want to be available in an axis expression and let web player users change the axis expression using the property selector.

Since string properties inserted as text in most cases will be interpreted as column names, you may encounter situations where you do not achieve the expected result when creating a custom expression. If the string property is to be interpreted as a value (e.g., in conditional expressions) it needs to be surrounded by quotation marks. For example, if you want to replace the string "ProductA" used in an axis expression condition such as Sum(if(([Product])=("ProductA"),[Quantity],null)) with a document property, then the document property needs to be surrounded by quotation marks in order to make the expression work:

Sum(if ( [Product] = "${MyProduct}",[Quantity],null))

You can also insert the property as a function rather than as text in order to retrieve the same results (see also Insert as Value below):

Sum(if ( [Product] = DocumentProperty("MyProduct"),[Quantity],null))

Insert as Value

When the value of a property is to be part of an expression, the recommendation is to use the standard property functions ColumnProperty(), DataTableProperty() and DocumentProperty() to encapsulate the property name. For example, the following expression multiplies a value column, expressed in some currency to be converted, by a document property holding an exchange rate:

DocumentProperty("ExchangeRate")*[Value Column]

Note that the property name should always be written within quotation marks when property functions are applied. If you want a property to be interpreted as a value, you should right-click on the property in the Available properties for column field and select Insert as Value from the pop-up menu.

InsertasValue.png

You can of course also manually edit the syntax for the property.

List-Valued Properties (Multiple Select)

Multiple-select properties, or list-valued properties, are based on a list of values instead of a single value. They can be created when defining a list box (multiple select) property control. A list-valued property cannot be used in all places where a regular single-valued property can be applied, but it is an effective way to specify multiple columns on an axis.

TextArea.png

List-valued properties often require some additional manual work when used in expressions. However, you will in most cases get a hint of what the expression should look like in the current context when inserting the property using the Insert Properties button.

When a list-valued property is added as text to a custom expression (for example, on a continuous axis like the Y-axis in a Line Chart or the value axis in a Bar Chart) the syntax will be something like $map("sum([${PropertyName}])", ","). In this example, the default aggregation is set to "sum", so if you want to use a different aggregation you need to manually edit the expression. The concatenation string is set to a comma, which means that each list value should be interpreted as a separate column. If the PropertyName property contains a list with three columns: "ColumnA", "ColumnB" and "ColumnC", the resulting expression after expansion is sum([ColumnA]),sum([ColumnB]),sum([ColumnC]). If the concatenation string is changed to a "+", then the axis would show the sum of the values from all columns included in the list: sum([ColumnA])+sum([ColumnB])+sum([ColumnC]).

For a categorical axis, such as the X-axis in a Line Chart or the category axis in a Bar Chart, you want to display a hierarchy rather than some calculated values. The map expression must then use angle brackets, "<>", and either NEST or CROSS to define what combinations to show. For example:

<$map("[${PropertyName}]", " NEST")>

See General Syntax for more information about the NEST and CROSS alternatives.

Note: As always when setting up expressions with multiple columns, you need to make sure that the columns you use are of the same type and that they match the rest of the expression. For example, you cannot mix categorical and continuous columns on some axes, nor can you use categorical columns in an expression using any type of aggregation.

More Expression Examples

If nothing else is stated, the expression examples below assume that you have a data table called Data Table containing three integer columns called "Column 1", "Column 2", and "Sales".

Requested result

Expression example

Resulting expression

Sum all integer columns in the data table called Data Table.

$map("sum([$csearch([Data Table],"datatype:int")])","+")

sum([Column 1])+sum([Column 2])+sum([Sales])

Return a list of all columns in the data table called Data Table.

$csearch([Data Table],"*")

Column 1,Column 2,Sales

Return an escaped column name from a property (MyProperty) with the value "Column name with bracket (])".

$esc(${MyProperty})

[Column name with bracket (]])]

Use a list-valued property expression as input to a data function.

In the example, the property MyListProperty contains three column names: Column 1, Column 2, and Sales.

$map("[Data Table].[${MyListProperty}]", ",")

[Data Table].[Column 1],[Data Table].[Column 2],[Data Table].[Sales]

Use multiple columns on an axis where one of the columns is retrieved via a property control.

In the examples, the property MyProperty has the value Column 2.

[Column 1],[${MyProperty}]

or

<[Column 1] NEST [${MyProperty}]>

or

Sum([Column 1]), Sum([${MyProperty}])

etc.

[Column 1],[Column 2]

or

<[Column 1] NEST [Column 2]>

or

Sum([Column 1]), Sum([Column 2])

etc.

Change the display name of multiple columns on an axis using a list-valued property.

In the example, the property MyListProperty contains three column names: Column 1, Column 2 and Sales.

$map("Sum([${MyListProperty}]) as [${MyListProperty}]", ",")

 

 

 

All list-valued properties in the expression must be of the same size.

Sum([Column 1]) as [Column 1],Sum([Column 2]) as [Column 2],Sum([Sales]) as [Sales]

See also:

Using Properties in the Analysis

Troubleshooting Property Expressions

Searching in TIBCO Spotfire

Troubleshooting Property Expressions

Since properties can be inserted and interpreted in two different ways, there may be occasions where a seemingly correct expression does not work as expected. The following messages may be encountered when inserting properties in the Expression field of the Custom Expression dialog or the Insert Calculated Column dialog.

Some of the problems may also occur if you add a property to an axis using Set from Property and the expression does not match the axis. In that case, you need to right-click and go to the Custom Expression dialog to manually change your expression. Try to identify your problem using the table below.

Error text for the expression field

Expression example

Possible errors

Solution

The expression is not complete.

 

or

 

The expression is empty.

 

or

 

The expression cannot be evaluated.

${MyProperty}

When a property is inserted into an expression using double-click or by clicking on the Insert Properties button, it is inserted as text.

With this syntax, Spotfire will try to interpret a string property value as a column name or a part of an expression rather than as a value.

If you want to use the value of the string property, put quotation marks around the expression:

"${MyProperty}"

You can also right-click on the property in the Available properties for column list and select Insert as Value from the pop-up menu instead:

DocumentProperty("MyProperty")

 

If the property is supposed to hold a column name, but the column name contains space characters, you need to put "[" and "]" characters around the property expression. You can also use the $esc() function that both escapes any "]" characters and converts the property string to a column:

$esc(${MyProperty})

If the property is added using Set from Property, you will automatically get the escaped version of the expression.

The expression is not complete.

Concatenate("My first string", ${EmptyProperty})

If a string property inserted as text is empty, it cannot be interpreted as a column, and Spotfire will not recognize that the second argument in this example exists at all.

If you want to use the value of the string property, put quotation marks around the expression:

Concatenate("My first string", "${EmptyProperty}")

You can also right-click on the property in the Available properties for column list and select Insert as Value from the pop-up menu instead:

Concatenate("My first string", DocumentProperty("EmptyProperty"))

 

If you want to use the content of a column in the concatenation, you should put '[' and ']' characters around the property to make sure the property is interpreted as a column (or use $esc() as described above).

Concatenate("My first string", [${EmptyProperty}])

Invalid type for function call 'DocumentProperty'

DocumentProperty(MyProperty)

The property name should always be written within quotation marks when property functions are applied. Quotation marks are automatically added if you use the Insert as Value shortcut from the pop-up menu.

Put quotation marks around the property name:

DocumentProperty("MyProperty")

Expected 'End of expression' but found ',' on line 1 character 12

 

or

 

The expression is not valid

$map("sum([${MyListProperty}])", ",")

 

When list-valued properties are used on an axis you need to map the list-valued properties to a single string. The expression must contain a template to use for each value in the list (e.g., an aggregation measure) and also a specification of how the list values should be connected in the resulting expression.

The default expression obtained when first inserting the property is suitable for continuous axes where a simple listing of the columns included in the list-valued property is desired. In all other cases it must be manually modified.

 

Depending on what you want to display you need to modify the default expression somewhat differently.

If a simple listing of column names is desired (e.g., if you want to show multiple columns on a bar chart value axis), then the expression in the example works fine. If you want to use an aggregation measure other than "sum", simply replace "sum" in the expression.

If the list of columns is to be shown on a categorical axis, you need to modify the default expression to something like this:

<$map("[${MyListProperty}]", "NEST")>

Categorical expressions must be surrounded by angle brackets, "<>", and you must also specify how different combinations of categories should be handled.

Also, the current selection of columns in the list-valued property may be a mixture of continuous and categorical columns. Make sure that only columns of the same type are included in the property list.

There are also more cases where the expression needs to be modified. See the section List-Valued Properties (Multiple Select) under Properties in Expressions for more information.

Expected ':' but found '3' on line 1 character 5

${TimeSpanProperty}

If you try to use a TimeSpan, Date, Time or DateTime property in an expression, the expression language will not be able to interpret it correctly without some manual editing.

First, the property name needs quotation marks around it; the property is then interpreted as a string, which removes the error. Second, you need to use one of the conversion functions in order to actually interpret the value as a TimeSpan, Date, Time or DateTime.

Put quotation marks around the property name and use the corresponding conversion function:

TimeSpan("${TimeSpanProperty}")

You can also right-click on the property in the Available properties for column list and select Insert as Value from the pop-up menu instead:

TimeSpan(DocumentProperty("TimeSpanProperty"))

{Table is undefined in ${{Table}

$Table.{MyProperty}

If a name contains a right curly bracket (}), it needs to be escaped by a backslash character (\).

To access the data table property MyProperty in a data table named "{Table}", one must write ${{Table\}}.{MyProperty}.

More about $esc and $csearch:

$esc(expression)

The $esc() function is used to escape any "]" characters (which normally denote the end of a column name) in column names, and encloses the escaped column name in "[" and "]". The brackets are required for column names containing space characters to be interpreted as columns. $esc() can be used together with the $csearch() function. The argument is an expression, which could be a property value or a property function that starts with a dollar sign ($).

For example, let the data table "A Data Table" have three columns called "Column 1", "Column 2[example]", and "Sales". The expression $esc($csearch([A Data Table], "Col*")) returns a list with two elements: the strings "[Column 1]" and "[Column 2[example]]]".

$csearch([Data Table], "search string")

The $csearch() function is used to produce a "filtered" list of column names. It allows you to select a number of columns from a data table using a limiting search expression. This function is likely to be used together with the $map() function. The first argument is a data table and the second argument is a string containing the search expression that determines which column names should be returned. The function returns a list of the (unescaped) column names in the data table that fulfill the search expression.

For example, let the data table "A Data Table" have three columns called "Column 1", "Column 2", and "Sales". The expression $csearch([A Data Table], "Col*") returns a list with two elements: the strings "Column 1" and "Column 2". If the property MyTable contains the string [A Data Table] and the property MyA contains the string "Col*", then $csearch(${MyTable}, "${MyA}") will return the same result.

$csearch together with $map() and $esc() can produce column lists or calculations based on columns from list-valued properties. For example, $map("sum($esc($csearch(${MyTable}, "*")))", "+") is expanded to sum([Column 1])+sum([Column 2])+sum([Sales]) since the search expression * will return all columns in the table. The $esc() function is necessary if you want the strings in the list to be interpreted as columns and the column names contain space characters.

Note: $csearch() is primarily intended to be used in visualization axis expressions or included in data function argument expressions. While $csearch() is looking at all columns in a data table, including any calculated columns, it is less suitable for use in calculated columns. If used in a calculated column, cyclic dependencies may occur.

See also:

Properties in Expressions

Insert Binned Columns

What is Binning?

Binning is a way to group a number of more or less continuous values into a smaller number of "bins". For example, if you have data about a group of people, you might want to arrange their ages into a smaller number of age intervals. Numeric columns can also be temporarily grouped by right-clicking on a column selector and clicking Auto-bin Column.

Example:

The data table contains information about a number of persons.

WhatisBinning.png

By binning the age of the people into a new column, data can be visualized for the different age groups instead of for each individual.

 

See also:

How to Use Binning

Details on Insert Binned Column

How to Use Binning

  • To use the binning tool:

  1. Select Insert > Binned Column....

  2. If you have more than one data table in the document, select the Data table to work on.

  3. Select a Column to bin.

  4. Select a Bin method:

    Specific Limits

    Allows you to explicitly enter values, separated by semicolons, of the limits to use for each bin.

    Even Intervals

    Allows you to specify the desired number of bins and divides the value range into equal intervals.

    Even Distribution of Unique Values

    Allows you to specify the desired number of bins and divides the bins so that each one contains an equal number of unique values.

    Based on Standard Deviation

    Allows you to divide the range into sections as described by the selected standard deviation multipliers.

    Substring

    Allows you to group the values by the first or last characters in the column to be binned.

  5. Type a New column name for the binned column.

  6. Click OK.

See also:

What is Binning?

Details on Insert Binned Column

Details on Insert Binned Column

  • To reach the Insert Binned Column dialog:

Select Insert > Binned Column....

 

InsertBinnedColumns.png

Option

Description

Data table

Only available when more than one data table is present in the analysis and the dialog has been opened via the main menu.

Specifies the data table where the binned column will be inserted.

Column

Displays the available columns on which you can perform binning. It is possible to search for columns by typing in the field provided when the drop-down list is expanded. The values from the selected column will be sorted into several bins or categories based on your selections.

Specific limits

Allows you to explicitly enter the values of the limits to use for each bin.

 

Enter the values you wish to use for the limits of your bins and separate them with a semicolon. For example, typing "20;30;40" will result in the following bins:

x<=20

20<x<=30

30<x<=40

40<x

Even intervals

Allows you to specify the desired number of bins and divides the value range into equal intervals.

 

This method works for all data types except string. The current data range is divided up into the specified number of bins. Empty values will be empty in the new column, and when loading linked data tables, new values will be placed inside one of the available bins.
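
As a worked illustration with hypothetical data: a value range of 0 to 100 divided into five bins gives the equal intervals 0-20, 20-40, 40-60, 60-80 and 80-100.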

Even distribution of unique values

Allows you to specify the desired number of bins and divides the bins so that each one contains an equal number of unique values.

 

The suggested division works for all data types and is made so that the bins each contain an equal number of unique values. Extra values are placed in the final bins; for example, if you have four unique values and three bins, the first two bins will contain one value each and the third bin will contain two. Empty values will be empty in the new column, and when loading linked data tables the bin ranges will be modified to fit the new data range.

Based on standard deviation

Divides the range into sections as described by the selected standard deviation multipliers.

 

This method works for numeric columns only. The range is divided into sections as described by the selected standard deviation multipliers. Bins can be created using any of the standard deviation multipliers ±0.5, 1, 2, 3 or 6. In the example below, the range is divided into the following six subsections (µ denoting the average value for the column and s the corresponding standard deviation):

lower limit -> (µ-3s)

(µ-3s) -> (µ-s)

(µ-s) -> µ

µ -> (µ+s)

(µ+s) -> (µ+3s)

(µ+3s) -> upper limit

Empty values will be empty in the new column, and when loading linked data tables the standard deviation will be recalculated.

Substring

Groups the rows by the first or last characters of the values in the column to be binned. The exact number of characters to take into account must be supplied.

Example:

Suppose the column to be binned contains family names, beginning with Adams and ending with Winter. To bin the rows according to the first letter in the name, use the Substring option considering one character from the beginning. Bin names are generated from the substring, and if Ignore case is used, the bin names are all formatted as upper case.

 

Empty values will be empty in the new column, and when loading linked data tables the new values will be placed in suitable bins, taking the substrings into consideration.
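
As a worked illustration, continuing the hypothetical family-name example above: with one character from the beginning and Ignore case selected, "Adams" and "anderson" both end up in bin "A", while "Winter" ends up in bin "W".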

New column name

The name of the new, binned column.

See also:

What is Binning?

How to Use Binning

Binning Functions

The Binning Slider

When using a numeric column on the X-axis in a visualization (the category axis in a bar chart), you may sometimes want to bin the values to compare segments of the data with each other. A very handy tool for doing this dynamically is the binning slider.

The bar chart below shows the average purchases of a customer group, where each bar represents the age of the customers.

TheBinningSlider.png

By right-clicking on the category axis selector and selecting Auto-bin Column, the bars are automatically binned. In the example below there are five bins, which means that all customers aged 0 to 29 are gathered in the first bar, customers aged 30 to 41 in the second bar, and so on.

 

As you can see, there is a small slider with a handle just above the axis selector. This is the binning slider. By dragging it horizontally, you can alter the number of bins dynamically. In the example below, the slider has been set to show 20 bins.

 

See also:

What is Binning?

Insert Columns

How to Insert Columns

  • To insert columns from a file:

  1. Select Insert > Columns....

  2. If you have more than one data table in the document, select which data table to add columns to, then click Next.

  3. Click the File radio button.

  4. Click Browse... to locate the file to add columns from, then click Open.

    Response: If the selected file is a text file, the Import Settings dialog is displayed. If the selected file is an Excel file, the Excel Import dialog is displayed. Optionally, make the desired adjustments in the dialog, then click OK.

  5. Apply transformations (optional).

    Comment: See Transforming Data to learn more.

  6. Click Next > to go to the Match Columns step of the wizard. (If you want the columns to be matched automatically, go straight to step 11 instead.)

  7. Click on the Match All Possible button, or manually select one column From current data and one From new data and click Match Selected. Repeat if necessary.

    Comment: The columns used for matching should together create a unique identifier for all rows.

  8. Click Next > to go to the Import step of the wizard.

  9. Select the columns that you wish to add from the new data.

  10. Select a Join method to determine which rows to keep from both data tables.

    Comment: Note that selecting an inner join may result in no data remaining in TIBCO Spotfire if no matching rows are found.

  11. Click Finish.

    Response: The selected columns are added to your data table.

  • To insert columns from an information link:

  1. Select Insert > Columns....

  2. If you have more than one data table in the document, select which data table to add columns to, then click Next.

  3. Click the Information Link radio button.

  4. Click Browse... to locate the information link to add columns from.

    Response: The Select Information Link dialog is displayed.

  5. Locate and select the information link of interest, and then click OK.

  6. Apply transformations (optional).

    Comment: See Transforming Data to learn more.

  7. Click Next > to go to the Match Columns step of the wizard. (If you want the columns to be matched automatically, go straight to step 12 instead.)

  8. Click on the Match All Possible button, or manually select one column From current data and one From new data and click Match Selected. Repeat if necessary.

    Comment: The columns used for matching should together create a unique identifier for all rows.

  9. Click Next > to go to the Import step of the wizard.

  10. Select the columns that you wish to add from the new data.

  11. Select a Join method to determine which rows to keep from both data tables.

    Comment: Note that selecting an inner join may result in no data remaining in TIBCO Spotfire if no matching rows are found.

  12. Click Finish.

    Response: The selected columns are added to your data table.

  • To insert columns from a database:

  1. Select Insert > Columns....

  2. If you have more than one data table in the document, select which data table to add columns to, then click Next.

  3. Click the Database radio button.

  4. Click on Browse... to locate the database with additional data.

    Response: The Open Database dialog is displayed.

  5. Follow the instructions for the desired data source type under Opening Data from a Database.

  6. Apply transformations (optional).

    Comment: See Transforming Data to learn more.

  7. Click Next > to go to the Match Columns step of the wizard. (If you want the columns to be matched automatically, go straight to step 12 instead.)

  8. Click on the Match All Possible button, or manually select one column From current data and one From new data and click Match Selected. Repeat if necessary.

    Comment: The columns used for matching should together create a unique identifier for all rows.

  9. Click Next > to go to the Import step of the wizard.

  10. Select the columns that you wish to add from the new data.

  11. Select a Join method to determine which rows to keep from both data tables.

    Comment: Note that selecting an inner join may result in no data remaining in TIBCO Spotfire if no matching rows are found.

  12. Click Finish.

    Response: The selected columns are added to your data table.

See also:

Details on Insert Columns – Select Source

Details on Insert Columns – Match Columns

Details on Insert Columns – Import

Insert Columns – Example

How to Insert Rows

Transforming Data

Details on Insert Columns – Select Destination

This step is only visible if you have more than one data table open in the document.

InsertColumns-SelectDestinations.png

Option

Description

Add columns to data table

Specifies which data table to add columns to.

Next >

Continues to the next step of the wizard where the data source to add data from is selected.

See also:

Details on Insert Columns – Select Source

Details on Insert Columns – Match Columns

Details on Insert Columns – Import

Details on Insert Columns – Select Source

InsertColumns-SelectSource.png


Option

Description

Add columns from

 

  File
 

Allows you to add columns from files.

  Information Link

Allows you to add columns from information links.

  Database

Allows you to add columns from any supported database.

  Clipboard

Allows you to add columns from the clipboard.

  Existing data table in my analysis

Allows you to add columns from the current analysis.

Location

Shows the path and file name of the selected file.

Browse...

Opens a dialog where you can select which file, information link, or database to open.

Show transformations

Expands the dialog and allows you to apply transformations on the columns you want to add. For more information, see the Show transformations dialog.

Next >

Continues to the next step of the wizard where the matching columns are selected.

Finish

Automatically matches all columns with the same external ID or, failing that, the same name. The columns that were not used in the matching are also added to the data table.

Note: If you have columns with identical names that do not contain the same identifiers, this option might result in no data being added. In that case, it is probably better to use the Next > button (see above), and match on columns that contain correct identifiers.

See also:

Details on Insert Columns – Match Columns

Details on Insert Columns – Import

Details on Insert Columns – Match Columns

InsertColumns-MatchColumns.png


Option

Description

From current data

Lists all columns in the current data. Click here to select the column you wish to match against a column from the new data, then click Match Selected.

From new data

Lists all columns in the new data. Click here to select the column you wish to match against a column from the current data, then click Match Selected.

Match Selected

Sends the selected column pair (From current data and From new data) to the Matched columns list.

Match All Possible

Sends all column pairs that have the same external ID or, failing that, the same name to the Matched columns list.

Matched columns

Lists all column pairs that have been selected for matching.

Unmatch Selected

Removes the selected column pair from the Matched columns list.

Unmatch All

Removes all column pairs from the Matched columns list.

Next >

Continues to the next step of the wizard where the columns to add and the join method are selected.

Finish

Adds all available columns that were not used in the matching to the data table using a left outer join.

See also:

Details on Insert Columns – Select Source

Details on Insert Columns – Import

Details on Insert Columns – Import

InsertColumns-Import.png


Option

Description

Columns to add from new data

Lists all columns in the new data that can be added to the current data table. Only columns that have not been used in a matching in the previous step are available. Select the check box for all columns you wish to add.

Select All

Selects the check boxes for all available columns.

Clear All

Clears the check boxes for all available columns.

Join method

 

  Left outer

Data will be kept (and columns added) only for rows that are available in the current data table. If additional rows exist in the new data, they will not be added to the current data table.

  Full outer

Data will be kept (and columns added) for all rows available in any of the data tables. If additional rows exist in the new data, they will be added to the current data table.

  Inner

Data will be kept (and columns added) only for rows that are available in both the current and the new data. If the new data contains fewer rows than the current data table, rows will be removed from the current data table after this operation.

  Right outer

Data will be kept (and columns added) only for rows that are available in the new data. If the new data contains fewer rows than the current data table, rows will be removed from the current data table after this operation.

Finish

Adds the selected columns to the selected data table in Spotfire.
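
As a worked sketch of the join methods above, using hypothetical data: suppose the current data table contains rows with identifiers A and B, the new data contains rows B and C, and the identifier column is used for matching. A left outer join keeps rows A and B; a full outer join keeps A, B and C; an inner join keeps only B; a right outer join keeps B and C.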

See also:

Details on Insert Columns – Select Source

Details on Insert Columns – Match Columns

Insert Rows

How to Insert Rows

  • To insert rows from a file:

  1. Select Insert > Rows....

  2. If you have more than one data table in the document, select which data table to add rows to, then click Next.

  3. Click the File radio button.

  4. Click Browse... to locate the file to add rows from, then click Open.

    Response: If the selected file is a text file, the Import Settings dialog is displayed. If the selected file is an Excel file, the Excel Import dialog is displayed. Optionally, make the desired adjustments in the dialog, then click OK.

  5. Apply transformations (optional).

    Comment: See Transforming Data to learn more.

  6. Click Next > to go to the Match Columns step of the wizard. (If you want the columns to be matched automatically, go straight to step 11 instead.)

  7. Click on the Match All Possible button, or manually select one column From current data and one From new data and click Match Selected. Repeat if necessary.

    Comment: The columns used for matching should together create a unique identifier for all rows.

  8. Click Next > to go to the Additional Settings step of the wizard.

  9. If the new data contains more columns than the original data table, you may add the new columns by selecting their check boxes.

  10. If desired, you can add information about the origin of new rows to a specified column.

    Comment: Update existing column is only relevant when rows have been previously added to the document.

  11. Click Finish.

    Response: The selected rows are added to your data table.

  • To insert rows from an information link:

  1. Select Insert > Rows....

  2. If you have more than one data table in the document, select which data table to add rows to, then click Next.

  3. Click the Information Link radio button.

  4. Click Browse... to locate the information link to add rows from.

    Response: The Select Information Link dialog is displayed.

  5. Locate and select the information link of interest, and then click OK.

  6. Apply transformations (optional).

    Comment: See Transforming Data to learn more.

  7. Click Next > to go to the Match Columns step of the wizard. (If you want the columns to be matched automatically, go straight to step 12 instead.)

  8. Click on the Match All Possible button, or manually select one column From current data and one From new data and click Match Selected. Repeat if necessary.

    Comment: The columns used for matching should together create a unique identifier for all rows.

  9. Click Next > to go to the Additional Settings step of the wizard.

  10. If the new data contains more columns than the original data table, you may add the new columns by selecting their check boxes.

  11. If desired, you can add information about the origin of new rows to a specified column.

    Comment: Update existing column is only relevant when rows have been previously added to the document.

  12. Click Finish.

    Response: The selected rows are added to your data table.

  • To insert rows from a database:

  1. Select Insert > Rows....

  2. If you have more than one data table in the document, select which data table to add rows to, then click Next.

  3. Click the Database radio button.

  4. Click on Browse... to locate the database with additional data.

    Response: The Open Database dialog is displayed.

  5. Follow the instructions for the desired data source type under Opening Data from a Database.

  6. Apply transformations (optional).

    Comment: See Transforming Data to learn more.

  7. Click Next > to go to the Match Columns step of the wizard. (If you want the columns to be matched automatically, go straight to step 12 instead.)

  8. Click on the Match All Possible button, or manually select one column From current data and one From new data and click Match Selected. Repeat if necessary.

    Comment: The columns used for matching should together create a unique identifier for all rows.

  9. Click Next > to go to the Additional Settings step of the wizard.

  10. If the new data contains more columns than the original data table, you may add the new columns by selecting their check boxes.

  11. If desired, you can add information about the origin of new rows to a specified column.

    Comment: Update existing column is only relevant when rows have been previously added to the document.

  12. Click Finish.

    Response: The selected rows are added to your data table.

See also:

Details on Insert Rows – Select Source

Details on Insert Rows – Match Columns

Details on Insert Rows – Additional Settings

How to Insert Columns

Transforming Data

Details on Insert Rows – Select Destination

InsertRows-SelectDestination.png


This step is only visible if you have more than one data table open in the document.

Option

Description

Add rows to data table

Specifies which data table to add rows to.

Next >

Continues to the next step of the wizard where the data source to add data from is selected.

See also:

Details on Insert Rows – Select Source

Details on Insert Rows – Match Columns

Details on Insert Rows – Additional Settings

Details on Insert Rows – Select Source

InsertRows-SelectSource.png


Option

Description

Add rows from

 

  File

Allows you to add rows from files.

  Information Link

Allows you to add rows from information links.

  Database

Allows you to add rows from any supported database.

  Clipboard

Allows you to add rows from the clipboard.

  Existing data table in my analysis

Allows you to add rows from the current analysis.

Location

Shows the path and file name of the selected file.

Browse...

Opens a dialog where you can select which file, information link, or database to open.

Show transformations

Expands the dialog and allows you to apply transformations on the rows you want to add. For more information, see the Show transformations dialog.

Next >

Continues to the next step of the wizard where the matching columns are selected.

Finish

Automatically matches all columns that have the same external ID or, secondarily, the same name. The columns that were not used in the matching are also added to the data table.

Note: If you have columns with identical names that do not contain the same identifiers, this option might result in no data being added. In that case, it is probably better to use the Next > button (see above), and match on columns that contain correct identifiers.

See also:

Details on Insert Rows – Match Columns

Details on Insert Rows – Additional Settings

Details on Insert Rows – Match Columns

InsertRows-MatchColumns.png


Option

Description

From current data

Lists all columns in the current data. Click here to select the column you wish to match against a column from the new data, then click Match Selected.

From new data

Lists all columns in the new data. Click here to select the column you wish to match against a column from the current data, then click Match Selected.

Match Selected

Sends the selected column pair (From current data and From new data) to the Matched columns list.

Match All Possible

Sends all column pairs that have the same external ID or, secondarily, the same name to the Matched columns list.

Matched columns

Lists all column pairs that have been selected for matching.

Unmatch Selected

Removes the selected column pair from the Matched columns list.

Unmatch All

Removes all column pairs from the Matched columns list.

Next >

Continues to the next step of the wizard where it is possible to determine if additional columns should be included and whether or not to use a column to identify the origin of new rows.

Finish

Adds all available new rows and includes data from any new columns that were not used in the matching.

See also:

Details on Insert Rows – Select Source

Details on Insert Rows – Additional Settings

Details on Insert Rows – Additional Settings

InsertRows-AdditionalSettings.png


Option

Description

Include additional columns from the new data

Lists all columns in the new data that can be added to the current data table. Only columns that have not been used in a matching in the previous step are available. Select the check box for all columns you wish to add.

Select All

Selects the check boxes for all available columns.

Clear All

Clears the check boxes for all available columns.

Identify origin of new rows

Select the check box if you want to use a column with information about the origin of the new (and the original) rows.

  Create new column

Use this option if you have not previously added any rows and created a "column of origin".

  Update existing column

Use this option when you add rows from many different sources and want to update a previously added "column of origin".

Column name

The column name of the "column of origin".

Value for new rows

The value you want to tag all new rows with.

Value for original rows

The value you want to tag all original rows with. This option is only available when you create a new column. Once a "column of origin" has been created, all previously added values will be kept when the column is updated.

Finish

Adds the selected rows and (optionally) columns to the specified data table in Spotfire.

See also:

Details on Insert Rows – Select Source

Details on Insert Rows – Match Columns

Multiple Data Tables

How to Insert Multiple Data Tables into the Analysis


Data can be added to the analysis in several different ways: as new columns, as new rows or as new data tables. Adding data as separate data tables is useful if the new data is unrelated to the previously opened data table or if the new data is in a different format (pivoted vs. unpivoted).

If you have a visualization made from a particular data table which has filtering and marking that you would like to apply to visualizations made from another data table, then you must define a relation between the two tables. For a relation to be useful, you need to have one or more key columns (identifier columns) available in both data tables, and use these to define which rows in the first data table will correspond to rows in the second data table. If you need more than one key column to set up a unique identifier, you must add one relation for each identifier column.

Note: The map chart is the only visualization type that can use data from different data tables in a single visualization. If you need to bring in-memory data from different data sources together in any other visualization, use the Insert Columns or Insert Rows tools instead. With in-database data tables you can often join several database tables into a single virtual data table before adding it to Spotfire. See Details on Data Tables in Connection for more information.

Tip: For a simple line from a different data table in a scatter plot, see Details on Line from Data Table.

  • To add new data tables to the analysis:

  1. Select File > Add Data Tables....

    Response: The Add Data Tables dialog is displayed.

  2. Click Add and select the type of data to add from the drop-down list.

    Comment: You can add data tables from files, information links, databases, the clipboard, external connections, data functions or from current data tables within your analysis. You may also have access to other sources if they have been set up by your administrators.

    Response: Depending on your selection you will be presented with a dialog where you can specify which file, information link, etc., to add. If you need more information on specific data sources, see Opening a Text File, Opening an Excel File, Opening a SAS File, Opening an Information Link, Opening Data from a Database or Adding Data Connections.

  3. Select the source data and specify any required settings.

  4. If desired, type a new Data table name.

  5. Apply transformations (optional and not applicable for in-database data tables).

  6. If you want to add more data tables, repeat steps 2-5 for each data table.

  7. Determine whether or not the new data tables will be related to each other or to previously added data tables. If a relation is necessary, click Manage Relations... and specify the relation.

    Comment: See To define a new relation below for more information. Remember that you need to define a relation if the new data table is to be used to create details visualizations for the previously added data tables.

  8. Click OK.

    Response: The new data tables are incorporated into the analysis and are ready to be used.

Note: If you want to add a new data table that is loaded on demand, you should instead use the File > Add On-Demand Data Table option. See Loading Data on Demand for more information.
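
Tip: Data tables can also be added from an IronPython script. The sketch below is a minimal example; the file path and table name are placeholders, and the text file is assumed to be readable with default import settings.

  from Spotfire.Dxp.Data.Import import TextFileDataSource

  # Placeholder file path and table name; replace with your own.
  source = TextFileDataSource("C:/Data/sugar_content.txt")
  if not Document.Data.Tables.Contains("Sugar Content"):
      Document.Data.Tables.Add("Sugar Content", source)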

  • To define a new relation:

  1. In the Add Data Tables dialog, click Manage Relations....

    Response: The Manage Relations dialog is displayed.

  2. Click on New....

    Response: The New Relation dialog is displayed.

  3. Select the two data tables you want to connect from the Left data table and Right data table drop-down lists.

  4. Select the columns containing the identifiers from the Left column and Right column drop-down lists.

  5. If desired, you can apply a Left method or Right method to modify the values of one or both columns.

    Comment: For example, if the identifiers are written in uppercase letters in one of the data tables and in lowercase letters in the other, you can use the Lower method on the uppercase column and change the letters to lowercase.

    Response: The result of the method application is shown in the Sample field.

  6. Click OK.

Tip: You can always go back and edit relations as well as create new ones using the Data Table Properties dialog.
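
Tip: Relations can also be defined from an IronPython script. The sketch below is a minimal example under a few assumptions: the table names and the key column "Type" are placeholders, and it uses the DataRelationCollection.Add overload that takes two data tables and a relation expression.

  # Placeholder table and column names; replace with your own.
  left = Document.Data.Tables["Sales Data"]
  right = Document.Data.Tables["Sugar Content"]

  # Methods such as Lower() can be applied to either side of the expression.
  Document.Data.Relations.Add(left, right, "[Sales Data].[Type] = [Sugar Content].[Type]")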

See also:

Transforming Data

How to Handle Multiple Data Tables in One Analysis


When you set up an analysis in TIBCO Spotfire, you may want to be able to visualize data from more than one data table. Adding other data tables is fairly easy; just select File > Add Data Tables... and use the Add button to select the data source of interest. See How to Insert Multiple Data Tables into the Analysis for more information. However, if you choose to bring in a lot of data tables, you may find it difficult to keep track of which data tables are related and which are not. Therefore, TIBCO Spotfire will add some extra visual hints when more than one data table is available.

You can always get a collected view of all data tables in the analysis by selecting Edit > Data Table Properties or View > Data which opens the data panel.

  • To see which data table is used by a certain visualization:

Data from different data tables cannot be used in the same visualization, not even if they are related (with the exception of map charts). Each visualization can be specified to show data from any one data table.

  1. Look for the data table selector in the legend of the visualization.
    SalesDataIcon.png

    Comment: When a new data table is added, the default visualization created will normally show the legend with the data table selector visible. However, any old visualizations created before the second data table was added will not display their data table selectors automatically.

  2. If you cannot see the data table selector in the legend, right-click in a white area of the legend and select Data table.

  3. Alternatively, in the Visualization Properties dialog you can go to the Data page and directly see which data table is being used, or, you can go to the Legend page and display the legend and the data table selector by selecting the appropriate check boxes.

    Comment: You can switch to a different data table from the menu on the data table selector.

  • To view information about the active data table:

The status bar at the bottom of the TIBCO Spotfire application window displays information about the data table used by the active visualization (the visualization that was last clicked upon).

113of216Rows.png

The information displayed is:

  • the current number of filtered rows,

  • the total number of rows in the data table (but if the data table is loaded on demand, then the number of currently loaded rows is shown),

  • the number of marked rows,

  • the number of available columns,

  • the name of the active data table.

Note that in-database data tables (external data) do not have access to the detailed information about rows and columns since the data shown in a visualization is aggregated by the external data source and not within Spotfire.

  • To see which visualizations are related:

  1. Look at the color stripe on the left-hand side of the title bar for the visualizations. Visualizations using the same data table or related data tables will display the same color on the color stripe.

    Comment: The true color of the color stripe is only visible for the active visualization and any related visualizations. All unrelated visualizations are shown with a gray color stripe until activated.

  2. Click to activate a visualization using an unrelated data table.

    Response: The visualizations working on the new data table get their relations color shown and the previously colored visualizations become gray.

  • To differentiate two data tables in the filters panel:

The filters from one data table are always grouped within a single data table group, which can be expanded or collapsed in the filters panel. Filters cannot be moved from one data table group to another. Note that no data table group headers will be shown for in-database data tables until filters have been created.

  1. Look at the color stripe on the left-hand side of the filters.

    Comment: Filters belonging to the same data table are marked with the same color stripe. The color used is identical to the color of visualizations that use the same data table. Only the filters belonging to the data table used by the active visualization or any related data table show their true color. Other filters have a gray color stripe. This is regardless of how filtering in related data tables has been specified (whether the filtering in related data tables affects the filtering in another data table or not).

  2. Click to activate a visualization using an unrelated data table.

    Response: The filters working on the new data table get their relations color shown and the previously colored filters become gray.

  • To see which data tables are related:

When more than one data table is available, a color stripe is added to the data table group and its subgroups and filters in the filters panel. If two data tables are related, they will have the same color in the filters panel and in the data panel. The currently active data table is written in bold typeface in the filters panel.

Filters.png

In the image above, the two data tables "Sales Data" and "Sugar Content" are related to each other (and the active visualization uses Sales Data), whereas the "Stores" data table is unrelated to the others. You can also check the relations between data tables in the Data Table Properties dialog:

  1. Select Edit > Data Table Properties.

  2. All related data tables will have the same relations color. On the Relations tab, you can change the relations color for a group of related data tables.

    Comment: The relations color is used in the title bars of the visualizations as well as in the filters panel and in the Details-on-Demand. You can change the Relations color for all related data tables by selecting a different color from the drop-down list.

  • To change the way filtering in a related data table affects a data table:

  1. Go to the filters panel and locate the data table header for the data table of interest.

    Comment: Note that you need to specify how each table should respond to filtering in all other related tables separately, to be certain of what will be shown in the visualizations after filtering.

  2. Click on the Manage relations icon, ManageRelationsIcon.png.

    Response: A drop-down menu is shown, where all related data tables are available.

  3. Select the data table for which you want to change how the filtering should affect the current data table, and select one of the following options: Include Filtered Rows Only, Exclude Filtered Out Rows or Ignore Filtering.

    Comment: See Filtering in Related Data Tables for more information about the different options.

See also:

How to Insert Multiple Data Tables into the Analysis

Data Tables Overview


With TIBCO Spotfire it is possible to work with more than one data table within a single analysis. Below is a short description of the different concepts used when handling multiple data tables.

A data table is either fetched from a data source, or created within the application. Data loaded from a data source can be handled either in-memory or in-database depending on how it is added to the analysis. In-memory data tables have one or more columns and zero or more rows, whereas in-database data tables technically do not contain any data but simply fetch the requested data directly from the source. See Data Overview for more information.

In-memory data tables can be linked or embedded. Linked data tables can be loaded completely into the application, but if the source is an information link they can also be configured to load data on demand only.

Data tables can be related to each other, using primary and/or foreign keys (key columns), but they can also be unrelated. When data tables are related, any marking or filtering in one data table may be propagated to the other related data tables, but data from multiple data tables cannot be used in a single visualization.

Tip: If you want to use data from different sources in a single visualization, you should use the Insert Columns or Insert Rows tools to add the data to an existing in-memory data table, rather than defining another data table with a relation to the first data table. With in-database data tables you can often join several database tables into a single virtual data table before adding it to Spotfire.

On-Demand Data Tables

On-demand data tables are data tables into which only rows related to a defined input are loaded. The input could be the marked rows in another, related data table; the filtered rows of another data table; or a property value selected in a text area. Changing the input means changing the "demand", i.e., more, fewer, or different rows are loaded into the data table. On-demand data tables can be used by Details Visualizations, and only data from information links can be loaded on demand.

In-Database Data Tables

Since data from in-database data tables is retrieved only when needed, a details visualization based on an in-database data table may also be seen as a type of on-demand visualization.

Related Data Tables

As a means of helping you keep track of which data tables are related, a stripe of color will be added to the left of the filters in the filters panel when more than one data table is available. Filters from related data tables (which may affect each other when they are manipulated) all have the same color. Also, the visualizations that use related data tables will show the same color in the title bar, if it is displayed.

Note: You can specify whether or not filtering in a data table should affect what is shown in visualizations used by other, related data tables. The default setting is to ignore filtering in related data tables. See Filtering in Related Data Tables for more information.

  • To add a new data table:

  1. See How to Insert Multiple Data Tables into the Analysis.

  • To delete a data table:

  1. Select Edit > Data Table Properties.

  2. Click on the data table you wish to remove from the analysis.

  3. Click on Delete.

  • To rename a data table:

  1. Select Edit > Data Table Properties.

  2. Click on the data table you wish to rename.

  3. Click on Rename....

  4. Type a new data table name and click OK.

  • To reload a data table:

Note: Reload may affect embedded data tables, as well as linked ones. See Embedded or Linked Data? for more information.

Note: Reload of in-database data tables will only reload the data, not the schema. See Data Connection Properties – General if you need a full schema refresh.

  1. Select Edit > Data Table Properties.

  2. Click on the data table you wish to reload.

  3. Click Refresh Data.

    Comment: The Refresh Data button may be unavailable for some of your data tables. For example, this happens if you have added rows or columns to an embedded data table, or if you have frozen some columns in an embedded data table. In that case the data table cannot be reloaded.
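
Tip: The rename, delete and reload operations above can also be performed from an IronPython script. The sketch below is a minimal example; the table names are placeholders, and it assumes that the DataTable.Name property is writable and that Refresh() is applicable to the table in question (for example, a linked or on-demand table).

  # Placeholder table name; replace with your own.
  table = Document.Data.Tables["Sales Data"]

  table.Name = "Sales Data 2012"         # rename the data table
  table.Refresh()                        # reload data from the source
  # Document.Data.Tables.Remove(table)   # delete the table from the analysis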

  • To reload all data tables:

Note: Reloading all data tables may take a long time if one or more data tables are very big.

Note: Reload may affect embedded data tables, as well as linked ones. See Embedded or Linked Data? for more information.

  1. Click on the Reload Data button on the toolbar, ReloadDataButton.png.

  • To set up an on-demand data table:

  1. Select File > Add On-Demand Data Table....

  2. Specify an information link to use and click OK.

  3. Define what type of input will control the on-demand loading.

    Comment: For more details see Loading Data on Demand and Details on Define Input.

  4. Click OK.

  • To update a visualization using an on-demand data table manually:

  1. When the input controlling the on-demand data table is changed, a red refresh button is shown on the title bar of the visualization.

    Comment: If the title bar has been hidden, right-click on the visualization and select Properties. On the General page, select the Show title bar check box.

  2. Click on the refresh button, RefreshButton.png.

  • To replace a data table:

  1. See Replacing Data.

  • To recalculate a data table:

  1. When the filtering behind a calculated data table is changed, a red refresh button is shown on the title bar of the visualization.

    Comment: If the title bar has been hidden, right-click on the visualization and select Properties. On the General page, select the Show title bar check box.

  2. Click on the refresh button, RefreshButton.png.

  • To save data tables:

All data tables currently in the analysis will be saved in the document when saving an analysis file. See Saving an Analysis File or Saving an Analysis File in the Library for more information.

  • To export data from a data table:

  1. See Exporting Data.

  • To prompt for settings each time an analysis file is loaded:

  1. Select Edit > Data Table Properties.

  2. Click Linked to source.

  3. Select Prompt for new settings before loading.

  4. Click OK.

    Comment: You can also change this setting when you save your analysis. Click the Edit button in the Save dialog or in the third step of the Save as Library Item wizard. This will open the Data Table Properties dialog.

  • To filter a data table:

  1. In the Filters panel, locate the data table header for the data table of interest.

  2. Use the filters to modify what is shown in the visualizations using the specified data table (and, optionally, in other related data tables).

  • To use a data table in a visualization:

  1. Click on the data table selector in the legend of the visualization and select the data table of interest.

    Comment: See How to Handle Multiple Data Tables in One Analysis for more information.
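
Tip: The data table used by a visualization can also be switched from an IronPython script. The sketch below is a minimal example; it assumes a script parameter named visual that references the visualization, and the data table name is a placeholder.

  from Spotfire.Dxp.Application.Visuals import VisualContent

  # "visual" is assumed to be a script parameter referencing the visualization.
  vc = visual.As[VisualContent]()
  vc.Data.DataTableReference = Document.Data.Tables["Sugar Content"]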

See also:

How to Handle Multiple Data Tables in One Analysis

How to Edit Column Properties

Examples

Master-Detail Visualizations

This is an example of multi-step Master-Detail visualizations. The visualizations in this example are based on the same data table and show different levels of detail. However, the visualizations could just as well be based on data from different data tables. Marking in one visualization defines the data of the next visualization, making it possible to drill down in level of detail.

Note: Related visualizations (as in the Master-Detail case) can be placed on different pages in an analysis. This means that markings in a visualization that is not visible at the moment can affect the analysis that you are looking at. If a visualization is empty, it may be because it is based on markings from another visualization. Go to the master visualization and mark an item to display information in the details visualization.

Note: The Details-on-Demand displays information about the marked rows from the active visualization; it could be either the master or the details visualization.

In this example the master visualization shows Sales per Year. If you mark a year, for example, 2003, in the master visualization, data will be displayed in the details visualization. This details visualization shows Sales per Category (fruit and vegetables) for 2003.

Master-DetailsVisualization1.png

Marking an item in the next visualization, Sales per Category, can also be set up to display an even more detailed visualization. Below, another visualization has been created, where marking the category "vegetables" in Sales per Category displays a more limited visualization; in this case the percentage of sales per type (cucumber, lettuce and tomato) in that category for 2003.

Master-DetailsVisualization2.png

This image shows three different visualizations displaying different aspects of the same data table.

See also:

What is a Details Visualization?

Details on Create Details Visualization

How to Handle Multiple Related Data Tables in One Analysis

Multiple Related Data Tables

Independent Data Tables

This is an example of independent data tables. These two visualizations are placed on the same page, but they are not related to each other; they correspond to separate data tables. Marking or filtering in one visualization will not affect the other when they are independent. The Details-on-Demand displays information about the marked item in the active visualization. Color stripes are used to indicate which visualizations, filters, and Details-on-Demand are related.

In this example, the bar chart shows the sum of sales for different types of fruits and vegetables. The scatter plot shows the content of fructose and glucose for different types of fruits and vegetables.

IndependentDataTables.png

See also:

Multiple Related Data Tables

Multiple Related Data Tables

This is an example of multiple related data tables. The visualizations are based on different data tables that are related. Marking items in one visualization will mark the corresponding items in the related visualizations. Filtering data in one data table may filter the related data in the other data tables. The relation between the data tables is set up in TIBCO Spotfire. Visualizations that are related share the same color in the color stripe to the left in the visualization. Filters belonging to related data tables also share the same color stripe.

Note: Related visualizations can be placed on different pages in an analysis. This means that markings that are not visible for the moment can affect the analysis that you are looking at.

In this case, two data tables with information about fruit and vegetables are related. The scatter plot shows the amount of glucose and fructose for different types of fruits and vegetables, while the bar chart shows the sum of sales for the same types of fruits and vegetables. Marking an item in the scatter plot, in this case the one with the highest level of fructose (Apples), will mark the Sum(Sales) for Apples in the Bar Chart.

MultipleRelatedDataTables.png

See also:

How to Handle Multiple Related Data Tables in One Analysis

Independent Data Tables

Master-Detail Visualizations

Insert Columns – Example

By inserting columns or rows, it is possible to combine data from different sources into a single data table that can be used in a visualization.  

In this example, a data table containing information about the cost and sales of different kinds of fruits and vegetables (Table 1) has been joined with another data table containing information about the content of Glucose, Fructose, Maltose and Saccharose per fruit or vegetable (Table 2). Two columns from Table 2 (Glucose and Fructose) have been added, resulting in Table 3.

Table 1

InsertColumnsExample.png

Table 2

 

Table 3

 
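Tip: The join in this example can also be performed from an IronPython script using the AddColumns API. The sketch below is a minimal example; the table names, the key column "Type", and the column data types are assumptions based on the example above.

  from Spotfire.Dxp.Data import AddColumnsSettings, DataColumnSignature, DataTableDataSource, DataType, JoinType
  from System.Collections.Generic import Dictionary, List

  target = Document.Data.Tables["Table 1"]
  source = DataTableDataSource(Document.Data.Tables["Table 2"])

  # Match the two tables on the shared key column "Type".
  key_map = Dictionary[DataColumnSignature, DataColumnSignature]()
  key_map[DataColumnSignature("Type", DataType.String)] = DataColumnSignature("Type", DataType.String)

  # Skip the columns that should not be joined in, so only Glucose and Fructose are added.
  ignored = List[DataColumnSignature]()
  ignored.Add(DataColumnSignature("Maltose", DataType.Real))
  ignored.Add(DataColumnSignature("Saccharose", DataType.Real))

  settings = AddColumnsSettings(key_map, JoinType.LeftOuterJoin, ignored)
  target.AddColumns(source, settings)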

See also:

How to Insert Columns

How to Insert Rows

Data Panel

What is the Data Panel?


The Data panel is used to get an overview of the columns in all data tables, in-memory as well as in-database (in-db). When working with in-database data the Data panel is the starting point for configuring both visualizations and the filters panel, since no filters are created automatically for external data. Depending on the data source, there will be different sections available for a selected data table, see below.

In-Memory or In-Database Relational Data

In-MemoryorIn-DatabaseRelationalData.png

Data from in-memory data tables, or from in-database data tables based on relational databases, is simply displayed as a list of the available columns in the selected data table. If in-database tables have been joined with relations in the Data Tables in Connection dialog, they can be treated as a single, virtual data table within Spotfire. If no relations have been defined, each data table in the external connection will be a separate data table within Spotfire.

Number

Section

Description

1

Data tables

[Only available if more than one data table has been added to the analysis.]

Lists all data tables in the analysis. Related tables have the same color on the color stripe.

2

Columns

Lists all columns available in the selected data table.

In-Database Cube Data

In-DatabaseCubeData.png

When you are working with cube data you will see more fields in the Data panel than for the other data tables:

Number

Section

Description

1

Data tables

[Only available if more than one data table has been added to the analysis.]

Lists all data tables in the analysis. Related tables have the same color on the color stripe.

2

Measure groups

Lists all measure groups in the cube. If there are measures that do not belong to a specific measure group, they can be located through the virtual group (Other measures).

3

Related dimensions

Lists all dimensions within the cube that are related to the selected measure group of the selected data table.

4

Columns

Lists all measures, attribute hierarchies and user hierarchies available in the selected data table, measure group and related dimension. Note that measures and dimension columns are divided into two different groups, in order to help you set up relevant visualizations. See Working With Cubes for more information.

Toolbar

Both flavors of the Data panel have a toolbar at the bottom of the panel:

Button

Description

[Only available for external data tables and only for columns that currently do not have a corresponding filter.]

Creates a filter for the selected columns.

Tip: To delete a filter, right-click on the filter in the Filters panel and select Delete Filter from the pop-up menu.

Note: When working with cube data it is not possible to create filters for measures or sets, only for dimension columns.

Opens the Column Properties dialog where you can view detailed information about all columns, change the name of a column, add custom column properties, etc.

Allows you to change the sorting of the columns in the list from the original database sorting to an Ascending or Descending alphabetical order.

Note: If the selected data table is a cube, the measures and the dimension columns are sorted separately within their respective sections.

You can use drag-and-drop from the Data panel to configure visualizations or to create filters. If hierarchies are available in the data panel columns list, they can also be dragged and dropped to utilize a hierarchy slider in a visualization.

See also:

Data Panel Pop-up Menu

Data Overview

Data Panel Pop-up Menu


Right-click in the data panel to bring up the pop-up menu. You will have access to different options depending on where in the data panel you click.

Data Tables

Option

Description

Rename...

Allows you to change the name of the selected data table.

Edit Data Tables in Connection...

Opens the Data Tables in Connection dialog where you can add and remove data tables in the connection.

You can also add new structural relations between the source tables, which allows you to join multiple tables into a single virtual table in Spotfire.

Data Table Properties

Opens the Data Table Properties dialog where you can edit the properties of the data tables in the analysis. Here, you can add regular relations between the data tables in Spotfire, internal as well as external, which allow you to use details visualizations and propagate marking and filtering between the data tables.

Columns

Option

Description

Rename...

Allows you to change the name of the selected column.

Note: The columns use the same name as their corresponding filters, so renaming a column will also change the filter name, if a filter exists for the column.

Delete

[Only available for internal data tables.]

Deletes the selected columns from the data table.

Create Filter

[Only available for columns which currently do not have a corresponding filter.]

Creates filters for the selected columns.

Note: When working with cube data it is not possible to create filters for measures or sets, only for dimension columns.

Sort

 

   No Sorting

Applies the sorting from the original database to the columns in the list.

   Ascending

Applies an alphabetical sorting of the columns in the list.

Note: If the selected data table is a cube, the measures and the dimension columns are sorted separately within their respective sections.

   Descending

Applies a reversed alphabetical sorting of the columns in the list.

Note: If the selected data table is a cube, the measures and the dimension columns are sorted separately within their respective sections.

Column Properties

Opens the Column Properties dialog where you can edit the properties of the columns in the analysis.

See also:

What is the Data Panel?

Details on Rename Column

RenameColumn.png


Option

Description

Name

Specify a new name for the column.

Note: The columns use the same name as their corresponding filters, so renaming a column will also change the filter name, if a filter exists for the column.
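
Tip: Columns can also be renamed from an IronPython script. The sketch below is a minimal example; the table and column names are placeholders, and it assumes that the DataColumn.Name property is writable. As noted above, renaming a column also renames its filter.

  # Placeholder table and column names; replace with your own.
  column = Document.Data.Tables["Sales Data"].Columns["Type"]
  column.Name = "Produce Type"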

See also:

What is the Data Panel?

Data Connection Properties

How to Edit Data Connection Properties


  • To add data tables to an existing data connection:

  1. Select Edit > Data Connection Properties.

  2. Click on the connection you want to add data tables to in the Connections list.

  3. Go to the Data Tables tab.

  4. Click on the Edit... button.

    Response: The Data Tables in Connection dialog is opened.

  5. In the Available tables in database list, double-click on the tables you want to work with in Spotfire.

    Response: The tables are moved to the Data tables in connection list.

    Comment: Click on a table in the Data tables in connection list to view the columns in the table.

    Comment: If the Available tables in database list does not show all the tables in the database, you can click on the Edit Tables... button to open the Select Database Tables dialog where you can add more tables. This may require a higher level of database permissions.

  6. When you are done, click OK.

    Response: The added data tables are shown in the Data tables list.

  7. Click OK to close the Data Connection Properties dialog.

  • To remove data tables from an existing data connection:

  1. Select Edit > Data Connection Properties.

  2. Click on the connection you want to remove data tables from in the Connections list.

  3. Go to the Data Tables tab.

  4. Click on the Edit... button.

    Response: The Data Tables in Connection dialog is opened.

  5. In the Data tables in connection list, click on a table you want to remove.

  6. Click on the < Remove button.

    Response: The table is removed from the Data tables in connection list.

  7. Repeat for each table you want to remove.

  8. When you are done, click OK.

    Response: The removed data tables are no longer available in the Data tables list or in the analysis.

  9. Click OK to close the Data Connection Properties dialog.

  • To edit the configuration of a data connection:

If you need to change any settings regarding how the connection to the data source is made, you can reach the connection dialog for each connection from the Data Connection Properties dialog. It can be useful to edit those settings if you want to switch from a test environment to a production environment, or to change an expired password. However, make sure not to change the connection settings unless necessary, since visualizations using the selected data connection will become invalid if the connection fails.

  1. Select Edit > Data Connection Properties.

  2. Click on the connection you want to edit in the Connections list.

  3. On the General tab, click on the Edit... button to the right of the Data source details.

    Response: The connection dialog for the selected data connection is opened.

  4. Make the desired changes in the dialog and click OK when done.

    Response: The data connection is updated with the changes. Note that some types of changes need to be followed by a Refresh Schema operation (see below) and possibly also an update of the available data tables.

Refreshing data:

To refresh data from the database, open the Data Connection Properties dialog, and click on the data connection of interest in the Connections list. The refresh buttons are located in the upper part of the dialog:

  • Click Refresh Data if you want the reload to include only changes in the rows of the data in the selected connection, for example, added or removed rows.

  • Click Refresh Schema for a more extensive reload of the data. This will include updates in the underlying schema, added or removed columns, or changes in structural relations. Note that permissions set on the database server may prevent you from using Refresh Schema.

See also:

Data Connection Properties – General

Data Connection Properties – Data Tables

Data Connection Properties – Credentials

Data Connection Properties – Cache Settings

Details on Data Connection Properties – General


  • To reach the Data Connection Properties dialog:

  1. Select Edit > Data Connection Properties.

  2. Click on the General tab.

DataConnectionProperties-General.png

Option

Description

Connections

Lists the data connections available in the analysis.

Rename...

Opens the Rename Data Connection dialog where you can change the name of the selected data connection.

Refresh Data

Click to update the data from the data source. To include any database schema modifications, use Refresh Schema instead.

Refresh Schema

Click to update the data and any changes to the database schema from the data source.

Note: Permissions set on the database server may prevent you from updating the schema.

Add New

Allows you to add a new data connection to the analysis. Select the data source type from the menu. The corresponding connection dialog will open, allowing you to specify which server and database to connect to. See Adding Data Connections for detailed instructions on adding connections to the different source types.

Delete

Removes the selected data connection from the analysis. Any visualizations that use the deleted data connection will become invalid.

Description

Allows you to add a description of the data connection.

Data source

Shows information about the connection. The information available depends on the type of data source, but typically includes connection type, server, and database.

Edit...

Opens the login dialog for the selected data connection. This can be useful if you want to switch from a test environment to a production environment, or to change an expired password. However, make sure not to change any settings here unless necessary, since visualizations using the selected data connection will become invalid if the connection fails.

See also:

Data Connection Properties – Data Tables

Data Connection Properties – Credentials

Data Connection Properties – Cache Settings

How to Edit Data Connection Properties

Details on Data Connection Properties – Data Tables


  • To reach the Data Connection Properties dialog:

  1. Select Edit > Data Connection Properties.

  2. Click on the Data Tables tab.

DataConnectionProperties-Tables.png

Option

Description

Connections

Lists the data connections available in the analysis.

Rename...

Opens the Rename Data Connection dialog where you can change the name of the selected data connection.

Refresh Data

Click to update the data from the data source. To include any database schema modifications, use Refresh Schema instead.

Refresh Schema

Click to update the data and any changes to the database schema from the data source.

Note: Permissions set on the database server may prevent you from updating the schema.

Add New

Allows you to add a new data connection to the analysis.

Delete

Removes the selected data connection from the analysis. Any visualizations that use the deleted data connection will become invalid.

Data tables

Lists the data tables in the selected data connection.

Edit...

Opens the Data Tables in Connection dialog where you can change which data tables to include in the selected data connection. Note that if you remove a data table from the connection, visualizations using the removed data table will become invalid.

See also:

Data Connection Properties – General

Data Connection Properties – Credentials

Data Connection Properties – Cache Settings

How to Edit Data Connection Properties

Details on Data Connection Properties – Credentials


  • To reach the Data Connection Properties dialog:

  1. Select Edit > Data Connection Properties.

  2. Click on the Credentials tab.

DataConnectionProperties-Credentials.png

Option

Description

Connections

Lists the data connections available in the analysis.

Rename...

Opens the Rename Data Connection dialog where you can change the name of the selected data connection.

Refresh Data

Click to update the data from the data source. To include any database schema modifications, use Refresh Schema instead.

Refresh Schema

Click to update the data and any changes to the database schema from the data source.

Note: Permissions set on the database server may prevent you from updating the schema.

Add New

Allows you to add a new data connection to the analysis.

Delete

Removes the selected data connection from the analysis. Any visualizations that use the deleted data connection will become invalid.

Save credentials in analysis

 

   No, do not save any credentials

Use this option if you do not want to save any credentials for the selected connection in the analysis. If the connection uses database authentication, all users of the analysis will be prompted for username and password to the database when the analysis is opened.

   No, but save credentials profile (may be used when opening in TIBCO Spotfire Web Player or running TIBCO Spotfire Automation Services jobs)

Use this option if you want to save a credentials profile for the selected connection instead of saving the actual credentials in the analysis.

A credentials profile consists of a profile name, a username, and a password (only the profile name is saved in the analysis). It can be used for logging in to the database in the connection when opening the analysis in TIBCO Spotfire Web Player, or when running jobs in TIBCO Spotfire Automation Services.

Specify the name of the credentials profile you want to use in the text field.

Opening the analysis in the Web Player:

To use this option when opening the analysis in the Web Player, specify a profile name in the text field, and make sure a matching profile has been defined in the Web.config file. The username and password defined in that credentials profile in the Web.config file will be used to log in to the database in the connection when the analysis is opened in the Web Player. This means that the user will not be prompted for username and password to the connection when opening the analysis in the Web Player. See TIBCO Spotfire Web Player – Installation and Configuration Manual for a detailed description of how to set up the Web.config file.

Including the analysis in an Automation Services job:

To use this option when including the analysis in Automation Services jobs, specify a profile name in the text field, and make sure a matching profile is defined in the Set Credentials for External Connection task in Automation Services. The username and password specified in the task will be used for logging in to the database in the connection when the job runs. See TIBCO Spotfire Automation Services – User's Manual for more information on Automation Services jobs.

Note: This option is only useful if the analysis is going to be opened in the Web Player, or if it will be included in jobs run in Automation Services. When opening the analysis in TIBCO Spotfire Professional, the behavior will be the same as when using the option No, do not save any credentials.

   Yes, save credentials in analysis

Use this option if you want Spotfire to remember the username and password for the selected connection. This means that the user will not be prompted for credentials to the data connection when opening the analysis. This option can only be used if the connection is set to use database authentication.

Note: Use this option carefully, since it may be a security risk to save credentials in the analysis.

See also:

Data Connection Properties – General

Data Connection Properties – Data Tables

Data Connection Properties – Cache Settings

How to Edit Data Connection Properties

Details on Data Connection Properties – Cache Settings


  • To reach the Data Connection Properties dialog:

  1. Select Edit > Data Connection Properties.

  2. Click on the Cache Settings tab.

DataConnectionProperties-CacheSettings.png

Option

Description

Connections

Lists the data connections available in the analysis.

Rename...

Opens the Rename Data Connection dialog where you can change the name of the selected data connection.

Refresh Data

Click to update the data from the data source. To include any database schema modifications, use Refresh Schema instead.

Refresh Schema

Click to update both the data and any changes to the database schema from the data source.

Note: Permissions set on the database server may prevent you from updating the schema.

Add New

Allows you to add a new data connection to the analysis.

Delete

Removes the selected data connection from the analysis. Any visualizations that use the deleted data connection will become invalid.

Enable caching of data retrieved from this connection

 

No, always get fresh data from the external source

Use this option if you do not want to cache data from the selected data connection. Note that it can put a very high load on the database server to always get fresh data if many users are working with data from the same server simultaneously.

Yes, but let the cached data expire after

Use this option to cache data from the selected data connection, but to refresh if the cached data is older than the specified time limit.

Yes, but let the cached data expire every [day or day of the week] at [time]

Use this option to cache data from the selected data connection, but to refresh once every day or on a specified day every week, at the specified time.

Share cached data between all concurrent users of TIBCO Spotfire Web Player

Select this check box if you want to share the cached data for the specified data connection.

Note: If the external data source has been set up so that the data available for each user depends on who you are, then you should not allow sharing of cached data.

See also:

Data Connection Properties – General

Data Connection Properties – Data Tables

Data Connection Properties – Credentials

How to Edit Data Connection Properties

Details on Rename Data Connection


  • To reach the Rename Data Connection dialog:

  1. Select Edit > Data Connection Properties.

  2. In the list of Connections, select the connection you want to rename.

  3. Click on the Rename button.

RenameDataConnection.png

Option

Description

Name

Specify a new name for the data connection. Each data connection must have a unique name.

See also:

Data Connection Properties – General

Data Connection Properties – Data Tables

Data Connection Properties – Credentials

Data Connection Properties – Cache Settings

How to Edit Data Connection Properties

Data Table Properties

How to Edit Data Table Properties


The dialog found under Edit > Data Table Properties contains settings that apply to the data tables used in the analysis. For example, you can define which data table to use as default when creating new visualizations, set up sharing routines, or define how data should be stored when saving the analysis. To learn more about using multiple data tables, see Data Tables Overview.

  • To change the default data table to use when creating new visualizations:

  1. Select Edit > Data Table Properties.

  2. Click on the data table to use in the Data tables list.

    Comment: New data tables are added by selecting File > Add Data Tables... or File > Add On-Demand Data Table....

  3. Click on the Set as Default button to the right of the Data tables list.

  4. Click OK.

    Response: All new visualizations created from here on will use the specified data table.

    Comment: To change the data table used in an already-created visualization, right-click on the visualization and select Properties from the pop-up menu, then go to the Data page.

  • To define a new relation between two data tables:

  1. See To define a new relation.

  • To add a new data table property:

  1. Select Edit > Data Table Properties.

  2. Go to the Properties tab.

  3. Click on the New... button.

    Response: The New Property dialog is opened.

  4. Enter a name for the new property.

  5. Select a data type for the new property.

  6. Enter a value to use as default value for the property.

  7. Click OK.

    Response: The new property is added to the list of available properties.

    Comment: New properties can also be created in most places where you can use them, for example by right-clicking in the Available properties for column list in the expression dialogs.
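
Tip: Custom data table properties can also be created and set from an IronPython script. The sketch below is an assumption about the data-property API rather than a definitive recipe: it assumes the DataProperty.CreateCustomPrototype(name, dataType, attributes) overload and a writable Properties collection on the data table, and the property name, value, and table name are placeholders. Verify the calls against the API reference for your Spotfire version.

  from Spotfire.Dxp.Data import DataProperty, DataPropertyClass, DataPropertyAttributes, DataType

  # Create and register the property prototype (this fails if the property already exists).
  attrs = DataPropertyAttributes.IsVisible | DataPropertyAttributes.IsEditable | DataPropertyAttributes.IsPersistent
  prototype = DataProperty.CreateCustomPrototype("MyTableProperty", DataType.String, attrs)
  Document.Data.Properties.AddProperty(DataPropertyClass.Table, prototype)

  # Set a value on a specific data table (placeholder names).
  Document.Data.Tables["Sales Data"].Properties.SetProperty("MyTableProperty", "some value")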

See also:

Details on Data Table Properties - General

Details on Data Table Properties - Source Information

Details on Data Table Properties - Relations

Details on Data Table Properties - Properties

Details on Data Table Properties - Sharing Routines

Data Tables Overview

Details on Data Table Properties – General


  • To reach the Data Table Properties dialog:

  1. Select Edit > Data Table Properties.

  2. Click on the General tab.

DataTableProperties-General.png

Option

Description

Data tables

Lists the data tables available within the document. The names of the data tables in this list are the names that will be shown in the data table selectors, in the legends of visualizations, etc.

You cannot have two data tables with identical names in the same analysis.

Rename...

Allows you to change the display name of the selected data table.

Refresh Data

Reloads the data from the source. This is a way to refresh the data while analyzing, without having to reload the entire file. See Embedded or Linked Data? for information about what a refresh does with embedded data.

Tip: If you want to reload multiple data tables simultaneously, you can instead select File > Reload Data on the main menu.

Note: If a data table is loaded on demand or calculated using a data function you need to click OK in the Data Table Properties dialog to actually start the data refresh.

Delete

Removes the selected data table from the analysis. Any visualizations that use the deleted data table will become invalid.

Set as Default

Sets the selected data table to be the default data table. This means that the selected data table will be used when new visualizations are created.

You can always change the data table to use in a specific visualization from the Data page in Visualization Properties or from the Data table selector in the legend.

Store Data

Defines how you want data from the selected data table to be stored when saving the analysis.

   Embedded in analysis

Use this option to embed the data from the selected data table in the analysis. By embedding all data, the analysis file becomes self-contained. This allows you to share the analysis with others who do not have access to the same databases you do, or who need to use their laptops offline.

   Linked to source

Use this option to link the data from the selected data table to the original data sources. This is useful when data is updated or changed from time to time. For example, if you create an analysis file that gets its data from a database that is updated each night, then the linked option allows you to open the analysis file and have it automatically show the latest numbers. It will still use the visualizations and settings you already set up, but base them on the updated data. Also, embedded data can take up a lot of space, so you might not want to embed a copy of a large data table if you can access it from the data source instead.

Note: Make sure that all people who are going to use the analysis also have permissions to access the linked data. If you use this option to create an analysis file which is linked to a source file on your local computer, other people might not be able to open the file.

      Prompt for new settings before loading

Select this check box if you want to see the import settings or any available prompt steps for the selected data table when you open the analysis. If the check box is cleared, the last used settings will be applied.

Key columns for linked data

If specified, lists the columns that together define a unique identifier for all rows in the selected data table.

Key columns are used to identify rows when markings, tags or bookmarks are saved with a linked data source. However, there is no guarantee that a selection can always be reapplied, even if key columns are specified, since a selection of a visualization item might reference columns other than the key columns.

Edit...

Opens the Select Key Columns dialog where you can specify the columns to use to create a unique identifier for all rows in the selected data table.

Type of data

Displays the type of data source.

Settings...

If applicable, opens a dialog where the data source settings can be modified. If the data table is the result of a calculation (for example, a data relationships calculation), then the dialog for calculating the data table is opened again. For information links that are loaded on demand, the load method settings can be changed.

Filters

[Not applicable for in-database data tables which are always managed manually with regards to filtering.]

   Create automatically for all columns

Use this option for an automatic addition of one filter for each column in the (internal) data table.

   Manage manually

Use this option to disable the automatic creation of filters and allow manual creation or deletion of filters.

See also:

Data Table Properties - Source Information

Data Table Properties - Relations

Data Table Properties - Properties

Data Table Properties - Sharing Routines

How to Edit Data Table Properties

Column Properties - General

Column Properties - Formatting

Column Properties - Properties

Column Properties - Sort Order

Details on Data Table Properties – Source Information


  • To reach the Data Table Properties dialog:

  1. Select Edit > Data Table Properties.

  2. Click on the Source Information tab.

DataTableProperties-SourceInformation.png

Option

Description

Data tables

Lists the data tables available within the document. The names of the data tables in this list are the names that will be shown in the data table selectors, in the legends of visualizations, etc.

You cannot have two data tables with identical names in the same analysis.

Rename...

Allows you to change the display name of the selected data table.

Refresh Data

Reloads the data from the source. This is a way to refresh the data while analyzing, without having to reload the entire file.

Tip: If you want to reload multiple data tables simultaneously, you can instead select File > Reload Data on the main menu.

Note: If a data table is loaded on demand or calculated using a data function you need to click OK in the Data Table Properties dialog to actually start the data refresh.

Delete

Removes the selected data table from the analysis. Any visualizations that use the deleted data table will become invalid.

Set as Default

Sets the selected data table to be the default data table. This means that the selected data table will be used when new visualizations are created.

You can always change the data table to use in a specific visualization from the Data page in Visualization Properties or from the Data table selector in the legend.

Source

Displays information about the origin of the data table together with any transformations or other modifications that have been applied to the original source data.

If the source is a file, then the file name and path are shown. For an information link, the source origin shown is the name of the information link, and for a database, it is the data source name given when adding the data table.

Copy to Clipboard

Copies the information under Source so that you can paste it in another application.

See also:

Data Table Properties - General

Data Table Properties - Relations

Data Table Properties - Properties

Data Table Properties - Sharing Routines

How to Edit Data Table Properties

Column Properties - General

Column Properties - Formatting

Column Properties - Properties

Column Properties - Sort Order

Details on Data Table Properties – Relations


  • To reach the Data Table Properties dialog:

  1. Select Edit > Data Table Properties.

  2. Click on the Relations tab.

DataTableProperties-Relations.png

Option

Description

Data tables

Lists the data tables available within the document. The names of the data tables in this list are the names that will be shown in the data table selectors, in the legends of visualizations, etc.

You cannot have two data tables with identical names in the same analysis.

Rename...

Allows you to change the display name of the selected data table.

Refresh Data

Reloads the data from the source. This is a way to refresh the data while analyzing, without having to reload the entire file.

Tip: If you want to reload multiple data tables simultaneously, you can instead select File > Reload Data on the main menu.

Note: If a data table is loaded on demand or calculated using a data function, you need to click OK in the Data Table Properties dialog to actually start the data refresh.

Delete

Removes the selected data table from the analysis. Any visualizations that use the deleted data table will become invalid.

Set as Default

Sets the selected data table to be the default data table. This means that the selected data table will be used when new visualizations are created.

You can always change the data table to use in a specific visualization from the Data page in Visualization Properties or from the Data table selector in the legend.

Related data tables

Lists all other data tables which have been specified to have a relation to the selected data table.

When data tables have been related, they can be set up to propagate marking and filtering (see Filtering in Related Data Tables) from one data table to another. A relation between data tables is necessary if you want to set up a details visualization where the marking in one visualization allows you to drill down to details about the selected data in another visualization.

Manage Relations...

Opens the Manage Relations dialog where you can add, edit or remove relations between data tables.

Relations color

Displays the color used to distinguish the data tables related to this data table from other, unrelated data tables.

See also:

Data Table Properties - General

Data Table Properties - Source Information

Data Table Properties - Properties

Data Table Properties - Sharing Routines

How to Edit Data Table Properties

Column Properties - General

Column Properties - Formatting

Column Properties - Properties

Column Properties - Sort Order

Details on Data Table Properties – Properties


On the Properties tab, it is possible to specify custom data table properties that are applicable throughout the document. The data table properties can be used inside expressions via Insert Column from Expression or Custom Expressions.
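
For example, a custom data table property can be referenced in an expression through the DataTableProperty() function. The following IronPython sketch is a hypothetical illustration, not a definitive recipe; the table name "Sales", the property "Region", and the column "Category" are all assumptions.

  # Add a calculated column whose expression reads the custom data table
  # property "Region" (assumed to be a string property defined on this tab).
  table = Document.Data.Tables["Sales"]
  table.Columns.AddCalculatedColumn("Region Label",
      'Concatenate(DataTableProperty("Region"), " - ", [Category])')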

  • To reach the Data Table Properties dialog:

  1. Select Edit > Data Table Properties.

  2. Click on the Properties tab.

DataTableProperties-Properties.png

Option

Description

Data tables

Lists the data tables available within the document. The names of the data tables in this list are the names that will be shown in the data table selectors, in the legends of visualizations, etc.

You cannot have two data tables with identical names in the same analysis.

Rename...

Allows you to change the display name of the selected data table.

Refresh Data

Reloads the data from the source. This is a way to refresh the data while analyzing, without having to reload the entire file.

Tip: If you want to reload multiple data tables simultaneously, you can instead select File > Reload Data on the main menu.

Note: If a data table is loaded on demand or calculated using a data function, you need to click OK in the Data Table Properties dialog to actually start the data refresh.

Delete

Removes the selected data table from the analysis. Any visualizations that use the deleted data table will become invalid.

Set as Default

Sets the selected data table to be the default data table. This means that the selected data table will be used when new visualizations are created.

You can always change the data table to use in a specific visualization from the Data page in Visualization Properties or from the Data table selector in the legend.

Available properties

Lists all properties currently available for the selected data table. For example, any transformations applied when adding the data table will be visible here. When data tables have been added using information links, the Keywords and Description defined in Information Designer will also be displayed.

If you have defined custom properties for the data table, then these properties are also listed here.

New...

Opens a dialog where you can add new data table properties to the document.

Edit...

Opens a dialog where you can edit the selected data table property.

Delete

Deletes the selected property.

See also:

Data Table Properties - General

Data Table Properties - Source Information

Data Table Properties - Relations

Data Table Properties - Sharing Routines

How to Edit Data Table Properties

Column Properties - General

Column Properties - Formatting

Column Properties - Properties

Column Properties - Sort Order

Details on Data Table Properties – Sharing Routines


When you publish analyses to the TIBCO Spotfire Library, many users may access the same analysis file, and hence the same data source, simultaneously via TIBCO Spotfire Web Player. If desired, the loaded data can be shared between concurrent users from the TIBCO Spotfire Web Player server cache. Sharing data reduces the need for the server to reload the same data and can improve server performance. Since TIBCO Spotfire cannot know when the original data sources have been updated and need to be reloaded, the settings on the Sharing Routines tab allow you to specify an update schedule that matches the actual times when your databases or network files are updated.

  • To reach the Data Table Properties dialog:

  1. Select Edit > Data Table Properties.

  2. Click on the Sharing Routines tab.

DataTableProperties-SharingRoutines.png

Option

Description

Data tables

Lists the data tables available within the document. The names of the data tables in this list are the names that will be shown in the data table selectors, in the legends of visualizations, etc.

You cannot have two data tables with identical names in the same analysis.

Rename...

Allows you to change the display name of the selected data table.

Refresh Data

Reloads the data from the source. This is a way to refresh the data while analyzing, without having to reload the entire file.

Tip: If you want to reload multiple data tables simultaneously, you can instead select File > Reload Data on the main menu.

Note: If a data table is loaded on demand or calculated using a data function, you need to click OK in the Data Table Properties dialog to actually start the data refresh.

Delete

Removes the selected data table from the analysis. Any visualizations that use the deleted data table will become invalid.

Set as Default

Sets the selected data table to be the default data table. This means that the selected data table will be used when new visualizations are created.

You can always change the data table to use in a specific visualization from the Data page in Visualization Properties or from the Data table selector in the legend.

Share data between concurrent users of TIBCO Spotfire Web Player

 

   No, always load new data

Use this option to always load new data. Note that this can put a very high load on the server if many end users are accessing files from the Library simultaneously.

   Yes, but refresh data if older than X full hours

Use this option to share data and only refresh if the data are older than the specified number of hours.

When someone accesses linked data for a certain data table, the update schedule is checked and the data pool is examined to see whether any data with the same timestamp are available. For example, if the time is 09:35 when a person accesses the analysis file, and the update schedule has been set to refresh data every hour, then the timestamp will be set to 09:00. If anyone else has loaded the data between 09:00 and 09:35, there will be cached data available, which will be shared with the new person. If not, then new data are loaded. (The sketch after this option table illustrates the calculation.)

   Yes, but refresh data every [day or day of the week] at [time]

Use this option to share data and only refresh once every day or on a specified day every week, at the specified time.

See above for information about how data are loaded.

   Yes, always share when possible

Use this option to always attempt to share data.

In this case, the data are assumed never to change.

Note: This tab is only relevant if a Web Player server has been installed.
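
The schedule arithmetic can be illustrated with plain Python. This is an illustration of the described behavior only, not Spotfire functionality:

  # Map an access time to the most recent scheduled refresh boundary,
  # as in the 09:35 -> 09:00 example above.
  from datetime import datetime

  def cache_timestamp(access_time, refresh_hours=1):
      bucket_hour = (access_time.hour // refresh_hours) * refresh_hours
      return access_time.replace(hour=bucket_hour, minute=0,
                                 second=0, microsecond=0)

  print cache_timestamp(datetime(2013, 5, 6, 9, 35))  # 2013-05-06 09:00:00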

Details

Details on Select Key Columns

This dialog is used to define key columns for a data table in an analysis. The key columns are used to uniquely identify all rows in the data table. You should specify key columns if you want to be able to see the markings that were active when saving the file, or if you want any specified tags or bookmarks to be reapplied when the analysis file is reopened. However, even if key columns are specified, there is no guarantee that a selection can always be reapplied, since the selection of a visualization item might reference columns other than the key columns. (A sketch after the option descriptions below shows how the uniqueness of candidate key columns can be checked.)

  • To reach the Select Key Columns dialog:

  1. Select Edit > Data Table Properties.

    Comment: You can also reach the Data Table Properties dialog from the third step of the Save as Library Item wizard by clicking on the Edit... button.

  2. On the General tab, click to select the data table of interest.

  3. Click on the Edit... button next to the Key columns for linked data field.

SelectKeyColumns.png

Option

Description

Limit available columns to

From this drop-down list you can limit the available columns to choose from. Options are:

Columns with unique values for all rows (Recommended)

Since these columns have unique values for all rows, it is likely that they are good choices for determining a unique identifier for each tagged row.

Columns of appropriate data types

This option only shows columns with INTEGER or STRING data types, since these are more likely to provide unique identifiers.

All columns

This option shows all columns.

Available columns

Select which columns to use when identifying keys for the tagged or marked rows.

Each tagged or marked row must be determined by a unique combination of values in the specified columns. For each row with a tag or a marking in your current analysis, the values of the specified columns are noted in the saved analysis file, and when the analysis file is opened again, rows matching those criteria will be tagged or marked again.

This means that if a new row has been added to the data table that also matches a criterion for a tag or a marking, the tag or marking is no longer unique and is therefore invalid. Neither the new row nor the original row that was tagged will receive any tag.

Selected columns

These are the columns that will be used when identifying keys for the tagged rows.

Add >

Select a column from the Available columns list and click Add > to move it to the Selected columns list.

< Remove

Select a column from the Selected columns list and click < Remove to move it to the Available columns list.

Remove All

Removes all columns from the Selected columns list.
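
Whether a candidate combination of columns really is unique for all rows can be verified before selecting it as a key. The following IronPython sketch is a hypothetical example (the table and column names are assumptions) built on the DataValueCursor row-iteration pattern:

  # Report any duplicate value combinations for two candidate key columns.
  from Spotfire.Dxp.Data import DataValueCursor

  table = Document.Data.Tables["Sales"]
  c1 = DataValueCursor.CreateFormatted(table.Columns["OrderID"])
  c2 = DataValueCursor.CreateFormatted(table.Columns["LineNumber"])

  seen = set()
  for row in table.GetRows(c1, c2):
      key = (c1.CurrentValue, c2.CurrentValue)
      if key in seen:
          print "Duplicate key combination:", key
      seen.add(key)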

See also:

How to Edit Data Table Properties

Data Table Properties - General

Saving an Analysis File

Saving an Analysis File in the Library

Details on Load Method

This dialog is reached by clicking on the Settings... button in the Data Table Properties dialog for a data table that originates from an information link which has been specified to load data on demand.

LoadMethod.png

Option

Description

All data at once

Click this radio button to load all data immediately.

Data on demand

Click this radio button to load data on demand only. If this option is selected, you need to specify which parameters will be used to control the on-demand loading.

Define input for parameters that should control loading

This is where you select what will affect the loading of data from the perspective of the information link. All columns and parameters available in the selected information link are listed. Click to select a parameter in the list, then click Define Input... to specify a condition that must be fulfilled for any data to be loaded.

Any required prompts or parameters that were specified upon the creation of the information link will be listed as Required parameters in this field. This means that you must specify input handling of these parameters to be able to load any on-demand data at all.

Define Input...

Opens the Define Input dialog where you can specify how the selected parameter will be connected to the on-demand data.

Clear Input

Removes the previously added input from the selected parameter.

Load automatically

Select this check box if the on-demand data should be loaded automatically each time the specified input conditions are changed. If the check box is cleared, the visualization can be updated manually using the refresh icon in the visualization title bar.

A data table set to load automatically will switch to manual update if cyclic dependencies are detected in the analysis.

Allow caching

Select this check box to allow caching of data. This may speed up the process when loading new subsets of data. However, if the underlying information link data are updated during the current TIBCO Spotfire session, you may end up with different results for a specific set of input values, depending on whether or not the current selection is stored in the cache. You should always clear the check box if you know that the underlying data may be updated during your current session.
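
The caching trade-off can be pictured as simple memoization. The sketch below is an illustration of the described behavior only, not Spotfire functionality: once a result has been cached for a set of input values, later requests reuse that snapshot even if the source has changed.

  # Once cached, a result is reused for identical input values.
  cache = {}

  def load_on_demand(input_values, fetch):
      key = tuple(sorted(input_values.items()))
      if key not in cache:           # first request for these inputs
          cache[key] = fetch(input_values)
      return cache[key]              # reused even if the source changed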

See also:

Details on Data Table Properties

Details on Manage Relations

This dialog is used to manage relations between both new and previously added data tables in your analysis. When data tables have been related, they can be set up to propagate marking and filtering (see Filtering in Related Data Tables) from one data table to another. A relation is defined by matching a column in one data table to a column in the other (for example, a customer ID column that is present in both tables). A relation between data tables is necessary if you want to set up a details visualization, where the marking in one visualization allows you to drill down to details about the selected data in another visualization.

  • To reach the Manage Relations dialog:

  1. Select Edit > Data Table Properties.

  2. Go to the Relations tab.

  3. Click on Manage Relations....

    Comment: You can also reach the Manage Relations dialog from the Data page of the Map Chart Visualization Properties, or from the Add Data Tables or the Add On-Demand Data Table dialogs.

ManageRelations.png

Option

Description

Show relations for

Select the data table whose relations you wish to view, or select All data tables to view all relations in the document.

Relations

Lists all relations for the selected data table or all relations in the document, depending on your selection above.

Note: If one or more relations have become invalid, these will appear in red.

New...

Opens the New Relation dialog where you can define a new relation between two data tables.

Edit...

Opens the Edit Relation dialog where you can edit the relation selected in the Relations list.

Delete

Removes the selected relation from the Relations list.

See also:

Details on Data Table Properties

How to Insert Multiple Data Tables into the Analysis

Details on New/Edit Data Table Property

It is possible to add data table properties to the data tables in the analysis. These can be used as part of an expression and can help you classify different types of data tables.

  • To reach the New Property dialog:

  1. Select Edit > Data Table Properties.

    Comment: The New Property dialog is also available by right-clicking in the Available properties list in the Insert Calculated Column and Custom Expression dialogs, as well as from the dialogs used when adding property controls to a text area.

  2. Click on the Properties tab.

  3. Click New....

NewProperty.png

Option

Description

Property name

Specifies the name of the data table property.

Data type

Specifies the data type of the property.

Description

Optional. A description of the intended use of the property.

Default value

Shows the default value of the property. Data table and column properties have default values. If the value is cleared (set to empty) for a specific data table, then that data table property automatically reverts to the default value.

To change the value for a specific data table, click to select it in the list and then click Edit....
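
A custom table-level property can also be registered from an IronPython script. The following is a hedged sketch mirroring the New... button; the property name "ReviewStatus" is an assumption, and you should verify the API members against your Spotfire version.

  # Register a custom table-level data table property.
  from Spotfire.Dxp.Data import DataProperty, DataPropertyClass, DataType

  prop = DataProperty.CreateCustomPrototype("ReviewStatus", DataType.String,
                                            DataProperty.DefaultAttributes)
  Document.Data.Properties.AddProperty(DataPropertyClass.Table, prop)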

  • To reach the Edit Property dialog:

  1. Select Edit > Data Table Properties.

    Comment: The Edit Property dialog is also available by right-clicking in the Available properties list in the Insert Calculated Column and Custom Expression dialogs, as well as from the dialogs used when adding property controls to a text area.

  2. Click on the Properties tab.

  3. Click to select the property you wish to edit in the list of available properties.

  4. Click Edit....

EditProperty.png

Option

Description

Property name

Specifies the name of the data table property.

Data type

Specifies the data type of the property.

Description

Optional. A description of the intended use of the property.

Default value

Displays the default value of the property.

Note: If you change the default value, it becomes the default for both new data tables and already created data tables.

Edit...

Opens the Edit Value dialog where the default value and description can be specified.

Value

Shows the value of the property.

See also:

Data Table Properties - Properties

Column Properties

How to Edit Column Properties


Column properties are any type of metadata available for the columns (and, in some cases, also for hierarchies) in your data table. For example, this could be the name of a column, the number of decimals it displays, its data type, an optional description of the column content, or a customized sort order for a string column. All properties can be viewed, and some can be edited, by selecting Edit > Column Properties.

  • To change a column name:

  1. Select Edit > Column Properties.

  2. If you have more than one data table in the document, select the Data table to work on.

  3. Locate the column of interest by scrolling in the list or by typing a search expression in the field provided.

  4. Click to select the column.

  5. On the General tab, type a new name in the Name field.

  6. Click OK.
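
The rename step can also be scripted. The sketch below is an assumption-laden illustration (hypothetical table and column names; it relies on the standard "Name" column property, so verify against your API version):

  # Rename a column by setting its standard "Name" property.
  table = Document.Data.Tables["Sales"]
  column = table.Columns["Sales Amount"]
  column.Properties.SetProperty("Name", "Revenue")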

  • To change the formatting of a column:

  1. Select Edit > Column Properties.

  2. If you have more than one data table in the document, select the Data table to work on.

  3. Locate the column of interest by scrolling in the list or by typing a search expression in the field provided.

  4. Click to select the column.

  5. On the Formatting tab, click to select a Category.

  6. Make any changes desired. See Column Properties - Formatting for more information about the various options.

    Comment: For example, to change the number of decimals displayed for a Real column, click Number. Then change the Decimal places to the desired number.

  7. Click OK.

  • To create a custom sort order for a string column:

Note: Custom sort order can only be applied to string columns, not to columns of other data types.

  1. Select Edit > Column Properties.

  2. If you have more than one data table in the document, select the Data table to work on.

  3. Locate the string column of interest by scrolling in the list or by typing a search expression in the field provided.

  4. Click to select the column.

  5. On the