fabric Package
Packages

exceptions
matcher
Classes

Name | Description
---|---
DataCategory | PowerBI data categories for metadata stored in FabricDataFrame.
FabricDataFrame | A dataframe for storage and propagation of PowerBI metadata; see MetadataKeys for the keys that elements of column_metadata can contain.
FabricRestClient | REST client to access Fabric REST endpoints. Authentication tokens are automatically acquired from the execution environment. All methods (get, post, ...) have an additional parameter lro_wait that can be set to True to wait for the long-running operation to complete. Experimental: This class is experimental and may change in future versions.
FabricSeries | A series for storage and propagation of PowerBI metadata.
MetadataKeys | Keys for column metadata in FabricDataFrame.
PowerBIRestClient | REST client to access PowerBI REST endpoints. Authentication tokens are automatically acquired from the execution environment. Experimental: This class is experimental and may change in future versions.
RefreshExecutionDetails | The status of a refresh request (Power BI Documentation).
Trace | Trace object for collecting diagnostic and performance related information from the Microsoft Analysis Services Tabular server. Python wrapper around Microsoft Analysis Services Trace. NOTE: This feature is only intended for exploratory use. Due to the asynchronous communication required between the Microsoft Analysis Services (AS) Server and other AS clients, trace events are registered on a best-effort basis where timings are dependent on server load.
TraceConnection | Connection object for starting, viewing, and removing Traces. Python wrapper around the Microsoft Analysis Services Tabular Server.
Functions
create_lakehouse
Create a lakehouse in the specified workspace.
create_lakehouse(display_name: str, description: str | None = None, max_attempts: int = 10, workspace: str | UUID | None = None) -> str
Parameters

Name | Description
---|---
display_name (required) | The display name of the lakehouse.
description | The optional description of the lakehouse. Default value: None
max_attempts | Maximum number of retries to wait for creation of the lakehouse. Default value: 10
workspace | The Fabric workspace name or UUID object containing the workspace ID. If None, resolves to the workspace of the attached lakehouse or, if no lakehouse is attached, to the workspace of the notebook. Default value: None

Returns

Type | Description
---|---
str | The id of the lakehouse.
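A minimal usage sketch (assumes a Fabric notebook session, where sempy is preinstalled; the display name and description below are hypothetical):

```python
try:
    import sempy.fabric as fabric  # available only in a Fabric runtime
except ImportError:
    fabric = None

display_name = "SalesLakehouse"  # hypothetical lakehouse name

if fabric is not None:
    # Created in the default workspace; pass `workspace=` to override.
    lakehouse_id = fabric.create_lakehouse(
        display_name, description="Lakehouse for sales data"
    )
    print(lakehouse_id)  # GUID string of the new lakehouse
```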
create_notebook
Create a notebook in the specified workspace.
create_notebook(display_name: str, description: str | None = None, content: str | dict | None = None, default_lakehouse: str | UUID | None = None, default_lakehouse_workspace: str | UUID | None = None, max_attempts: int = 10, workspace: str | UUID | None = None) -> str
Parameters

Name | Description
---|---
display_name (required) | The display name of the notebook.
description | The optional description of the notebook. Default value: None
content | The optional notebook content (JSON). Default value: None
default_lakehouse | The optional lakehouse name or UUID object to attach to the new notebook. Default value: None
default_lakehouse_workspace | The Fabric workspace name or UUID object containing the workspace ID the lakehouse is in. If None, the workspace specified for the notebook is used. Default value: None
max_attempts | Maximum number of retries to wait for creation of the notebook. Default value: 10
workspace | The Fabric workspace name or UUID object containing the workspace ID. If None, resolves to the workspace of the attached lakehouse or, if no lakehouse is attached, to the workspace of the notebook. Default value: None

Returns

Type | Description
---|---
str | The notebook id.
create_tom_server
Create a TOM server for the specified workspace.
Note that not all properties and methods of the Tabular Object Model (TOM) are supported due to limitations when bridging Python to .NET.
If changes are made to models, make sure to call SaveChanges() on the model object and invoke refresh_tom_cache().
create_tom_server(readonly: bool = True, workspace: str | UUID | None = None) -> object
Parameters

Name | Description
---|---
readonly | Whether to create a read-only server. Default value: True
workspace | The Fabric workspace name or UUID object containing the workspace ID. If None, resolves to the workspace of the attached lakehouse or, if no lakehouse is attached, to the workspace of the notebook. Default value: None

Returns

Type | Description
---|---
object | The TOM server. See Microsoft.AnalysisServices.Tabular.Server.
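The SaveChanges()/refresh_tom_cache() workflow noted above might look like this sketch (assumes a Fabric notebook; the dataset and table names are hypothetical):

```python
try:
    import sempy.fabric as fabric  # available only in a Fabric runtime
except ImportError:
    fabric = None

if fabric is not None:
    # A read-write server is needed to persist model changes.
    server = fabric.create_tom_server(readonly=False)
    model = server.Databases["AdventureWorks"].Model  # hypothetical dataset
    model.Tables["Sales"].Description = "Fact table for sales transactions"
    model.SaveChanges()          # write the TOM edits back to the service
    fabric.refresh_tom_cache()   # keep sempy's cached model in sync
```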
create_trace_connection
Create a TraceConnection to the server specified by the dataset.
NOTE: This feature is only intended for exploratory use. Due to the asynchronous communication required between the Microsoft Analysis Services (AS) Server and other AS clients, trace events are registered on a best-effort basis where timings are dependent on server load.
create_trace_connection(dataset: str | UUID, workspace: str | UUID | None = None) -> TraceConnection
Parameters

Name | Description
---|---
dataset (required) | Name or UUID of the dataset to list traces on.
workspace | The Fabric workspace name or UUID object containing the workspace ID. If None, resolves to the workspace of the attached lakehouse or, if no lakehouse is attached, to the workspace of the notebook. Default value: None

Returns

Type | Description
---|---
TraceConnection | Server connected to the specified dataset.
create_workspace
Create a workspace.
create_workspace(display_name: str, capacity_id: str | None = None, description: str | None = None) -> str
Parameters

Name | Description
---|---
display_name (required) | The display name of the workspace.
capacity_id | The optional capacity id. Default value: None
description | The optional description of the workspace. Default value: None

Returns

Type | Description
---|---
str | The id of the workspace.
delete_item
Delete the item in the specified workspace.
delete_item(item_id: str, workspace: str | UUID | None = None)
Parameters

Name | Description
---|---
item_id (required) | The id of the item.
workspace | The Fabric workspace name or UUID object containing the workspace ID. If None, resolves to the workspace of the attached lakehouse or, if no lakehouse is attached, to the workspace of the notebook. Default value: None
evaluate_dax
Compute the result of a DAX query for a given dataset.
evaluate_dax(dataset: str | UUID, dax_string: str, workspace: str | UUID | None = None, verbose: int = 0, num_rows: int | None = None) -> FabricDataFrame
Parameters

Name | Description
---|---
dataset (required) | Name or UUID of the dataset.
dax_string (required) | The DAX query.
workspace | The Fabric workspace name or UUID object containing the workspace ID. If None, resolves to the workspace of the attached lakehouse or, if no lakehouse is attached, to the workspace of the notebook. Default value: None
verbose | Verbosity. 0 means no verbosity. Default value: 0
num_rows | Maximum number of rows to read from the result. None means read all rows. Default value: None

Returns

Type | Description
---|---
FabricDataFrame | FabricDataFrame holding the result of the DAX query.
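A sketch of a typical call (assumes a Fabric notebook; the dataset, table, and column names are hypothetical):

```python
try:
    import sempy.fabric as fabric  # available only in a Fabric runtime
except ImportError:
    fabric = None

# Group sales by year; 'Date', 'Sales', and their columns are assumptions.
dax_query = """
EVALUATE
SUMMARIZECOLUMNS(
    'Date'[Year],
    "Total Sales", SUM('Sales'[Amount])
)
"""

if fabric is not None:
    df = fabric.evaluate_dax("My Dataset", dax_query, num_rows=100)
```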
evaluate_measure
Compute a PowerBI measure for a given dataset.
evaluate_measure(dataset: str | UUID, measure: str | List[str], groupby_columns: List[str] | None = None, filters: Dict[str, List[str]] | None = None, fully_qualified_columns: bool | None = None, num_rows: int | None = None, use_xmla: bool = False, workspace: str | UUID | None = None, verbose: int = 0) -> FabricDataFrame
Parameters

Name | Description
---|---
dataset (required) | Name or UUID of the dataset.
measure (required) | Name of the measure, or list of measures to compute.
groupby_columns | List of columns in fully qualified form, e.g. "TableName[ColumnName]" or "'Table Name'[Column Name]". Default value: None
filters | Dictionary containing a list of column values to filter the output by, where the key is a column reference that must be fully qualified with the table name. Currently only the "in" filter is supported. For example, to specify that in the "State" table the "Region" column can only be "East" or "Central" and the "State" column can only be "WA" or "CA", pass {"State[Region]": ["East", "Central"], "State[State]": ["WA", "CA"]}. Default value: None
fully_qualified_columns | Whether to output columns in their fully qualified form (TableName[ColumnName] for dimensions). Measures are always represented without the table name. If None, the fully qualified form is only used when there is a name conflict between columns from different tables. Default value: None
num_rows | How many rows of the table to return. If None, all rows are returned. Default value: None
use_xmla | Whether or not to use XMLA as the backend for evaluation. When False, the REST backend is used. Default value: False
workspace | The Fabric workspace name or UUID object containing the workspace ID. If None, resolves to the workspace of the attached lakehouse or, if no lakehouse is attached, to the workspace of the notebook. Default value: None
verbose | Verbosity. 0 means no verbosity. Default value: 0

Returns

Type | Description
---|---
FabricDataFrame | FabricDataFrame holding the computed measure stratified by groupby columns.
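The filters shape described above can be sketched like this (assumes a Fabric notebook; the dataset, measure, and groupby column names are hypothetical):

```python
try:
    import sempy.fabric as fabric  # available only in a Fabric runtime
except ImportError:
    fabric = None

# Keep only rows where State[Region] is East/Central and State[State] is WA/CA.
filters = {
    "State[Region]": ["East", "Central"],
    "State[State]": ["WA", "CA"],
}

if fabric is not None:
    df = fabric.evaluate_measure(
        "My Dataset",                         # hypothetical dataset name
        "Total Sales",                        # hypothetical measure
        groupby_columns=["Geography[City]"],  # hypothetical column
        filters=filters,
    )
```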
execute_tmsl
Execute TMSL script.
execute_tmsl(script: Dict | str, refresh_tom_cache: bool = True, workspace: str | UUID | None = None)
Parameters

Name | Description
---|---
script (required) | The TMSL script JSON.
refresh_tom_cache | Whether or not to refresh the dataset after executing the TMSL script. Default value: True
workspace | The Fabric workspace name or UUID object containing the workspace ID. If None, resolves to the workspace of the attached lakehouse or, if no lakehouse is attached, to the workspace of the notebook. Default value: None
execute_xmla
Execute an XMLA command for a given dataset, e.g. to clear the cache when optimizing DAX queries.
execute_xmla(dataset: str | UUID, xmla_command: str, workspace: str | UUID | None = None) -> int
Parameters

Name | Description
---|---
dataset (required) | Name or UUID of the dataset.
xmla_command (required) | The XMLA command.
workspace | The Fabric workspace name or UUID object containing the workspace ID. If None, resolves to the workspace of the attached lakehouse or, if no lakehouse is attached, to the workspace of the notebook. Default value: None

Returns

Type | Description
---|---
int | Number of rows affected.
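The cache-clearing use case mentioned above can be sketched with the standard Analysis Services ClearCache command (the DatabaseID is hypothetical):

```python
try:
    import sempy.fabric as fabric  # available only in a Fabric runtime
except ImportError:
    fabric = None

# Analysis Services ClearCache command; useful before timing DAX queries.
clear_cache = """
<ClearCache xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
  <Object>
    <DatabaseID>My Dataset</DatabaseID>
  </Object>
</ClearCache>
"""

if fabric is not None:
    rows_affected = fabric.execute_xmla("My Dataset", clear_cache)
```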
get_artifact_id
Return artifact id.
get_artifact_id() -> str
Returns

Type | Description
---|---
str | Artifact (most commonly notebook) id guid.
get_lakehouse_id
Return lakehouse id of the lakehouse that is connected to the workspace.
get_lakehouse_id() -> str
Returns

Type | Description
---|---
str | Lakehouse id guid.
get_notebook_workspace_id
Return notebook workspace id.
get_notebook_workspace_id() -> str
Returns

Type | Description
---|---
str | Workspace id guid.
get_refresh_execution_details
Poll the status of a specific refresh request using Enhanced refresh with the Power BI REST API.
More details on the underlying implementation in PBI Documentation
get_refresh_execution_details(dataset: str | UUID, refresh_request_id: str | UUID, workspace: str | UUID | None = None) -> RefreshExecutionDetails
Parameters

Name | Description
---|---
dataset (required) | Name or UUID of the dataset.
refresh_request_id (required) | Id of the refresh request on which to check the status.
workspace | The Fabric workspace name or UUID object containing the workspace ID. If None, resolves to the workspace of the attached lakehouse or, if no lakehouse is attached, to the workspace of the notebook. Default value: None

Returns

Type | Description
---|---
RefreshExecutionDetails | RefreshExecutionDetails instance with the status of the refresh request.
get_roles
Retrieve all roles associated with the dataset.
get_roles(dataset: str | UUID, include_members: bool = False, additional_xmla_properties: str | List[str] | None = None, workspace: str | UUID | None = None) -> DataFrame
Parameters

Name | Description
---|---
dataset (required) | Name or UUID of the dataset.
include_members | Whether or not to include members for each role. Default value: False
additional_xmla_properties | Additional XMLA role properties to include in the returned dataframe. Default value: None
workspace | The Fabric workspace name or UUID object containing the workspace ID. If None, resolves to the workspace of the attached lakehouse or, if no lakehouse is attached, to the workspace of the notebook. Default value: None

Returns

Type | Description
---|---
DataFrame | Dataframe listing roles and their attributes.
get_row_level_security_permissions
Retrieve row level security permissions for a dataset.
get_row_level_security_permissions(dataset: str | UUID, additional_xmla_properties: str | List[str] | None = None, workspace: str | UUID | None = None) -> DataFrame
Parameters

Name | Description
---|---
dataset (required) | Name or UUID of the dataset.
additional_xmla_properties | Additional XMLA tablepermission properties to include in the returned dataframe. Default value: None
workspace | The Fabric workspace name or UUID object containing the workspace ID. If None, resolves to the workspace of the attached lakehouse or, if no lakehouse is attached, to the workspace of the notebook. Default value: None

Returns

Type | Description
---|---
DataFrame | Dataframe listing tables and row filter expressions (DAX) for the dataset.
get_tmsl
Retrieve the Tabular Model Scripting Language (TMSL) for a given dataset.
get_tmsl(dataset: str | UUID, workspace: str | UUID | None = None) -> str
Parameters

Name | Description
---|---
dataset (required) | Name or UUID of the dataset.
workspace | The Fabric workspace name or UUID object containing the workspace ID. If None, resolves to the workspace of the attached lakehouse or, if no lakehouse is attached, to the workspace of the notebook. Default value: None

Returns

Type | Description
---|---
str | TMSL for the given dataset.
get_workspace_id
Return workspace id or default Lakehouse's workspace id.
get_workspace_id() -> str
Returns

Type | Description
---|---
str | Workspace id guid if no default Lakehouse is set; otherwise, the default Lakehouse's workspace id guid.
list_annotations
List all annotations in a dataset.
list_annotations(dataset: str | UUID, additional_xmla_properties: str | List[str] | None = None, workspace: str | UUID | None = None) -> DataFrame
Parameters

Name | Description
---|---
dataset (required) | Name or UUID of the dataset.
additional_xmla_properties | Additional XMLA annotation properties to include in the returned dataframe. Default value: None
workspace | The Fabric workspace name or UUID object containing the workspace ID. If None, resolves to the workspace of the attached lakehouse or, if no lakehouse is attached, to the workspace of the notebook. Default value: None

Returns

Type | Description
---|---
DataFrame | Dataframe listing all annotations.
list_apps
List all the Power BI apps.
list_apps() -> DataFrame
Returns

Type | Description
---|---
DataFrame | DataFrame with one row per app.
list_calculation_items
List all calculation items for each group in a dataset.
list_calculation_items(dataset: str | UUID, additional_xmla_properties: str | List[str] | None = None, workspace: str | UUID | None = None) -> DataFrame
Parameters

Name | Description
---|---
dataset (required) | Name or UUID of the dataset.
additional_xmla_properties | Additional XMLA calculationitem properties to include in the returned dataframe. Default value: None
workspace | The Fabric workspace name or UUID object containing the workspace ID. If None, resolves to the workspace of the attached lakehouse or, if no lakehouse is attached, to the workspace of the notebook. Default value: None

Returns

Type | Description
---|---
DataFrame | Dataframe listing all calculation groups.
list_capacities
Return a list of capacities that the principal has access to.
list_capacities() -> DataFrame
Returns

Type | Description
---|---
DataFrame | Dataframe listing the capacities.
list_columns
List all columns for all tables in a dataset.
list_columns(dataset: str | UUID, table: str | None = None, extended: bool | None = False, additional_xmla_properties: str | List[str] | None = None, workspace: str | UUID | None = None) -> DataFrame
Parameters

Name | Description
---|---
dataset (required) | Name or UUID of the dataset.
table | Name of the table. Default value: None
extended | Fetches extended column information. Default value: False
additional_xmla_properties | Additional XMLA column properties to include in the returned dataframe. Default value: None
workspace | The Fabric workspace name or UUID object containing the workspace ID. If None, resolves to the workspace of the attached lakehouse or, if no lakehouse is attached, to the workspace of the notebook. Default value: None

Returns

Type | Description
---|---
DataFrame | Dataframe listing the columns.
list_dataflow_storage_accounts
List the dataflow storage accounts that the user has access to.
Please see Dataflow Storage Accounts - Get Dataflow Storage Accounts for more details.
list_dataflow_storage_accounts() -> DataFrame
Returns

Type | Description
---|---
DataFrame | DataFrame with one row per dataflow storage account.
list_dataflows
List all the Power BI dataflows.
Please see Dataflows - Get Dataflows for more details.
list_dataflows(workspace: str | UUID | None = None) -> DataFrame
Parameters

Name | Description
---|---
workspace | The Fabric workspace name or UUID object containing the workspace ID. If None, resolves to the workspace of the attached lakehouse or, if no lakehouse is attached, to the workspace of the notebook. Default value: None

Returns

Type | Description
---|---
DataFrame | DataFrame with one row per dataflow.
list_datasets
List datasets in a Fabric workspace.
list_datasets(workspace: str | UUID | None = None, mode: str = 'xmla', additional_xmla_properties: str | List[str] | None = None) -> DataFrame
Parameters

Name | Description
---|---
workspace | The Fabric workspace name or UUID object containing the workspace ID. If None, resolves to the workspace of the attached lakehouse or, if no lakehouse is attached, to the workspace of the notebook. Default value: None
mode | Whether to use the XMLA ("xmla") or REST API ("rest") backend. See the REST docs for returned fields. Default value: "xmla"
additional_xmla_properties | Additional XMLA model properties to include in the returned dataframe. Default value: None

Returns

Type | Description
---|---
DataFrame | Dataframe listing databases and their attributes.
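A minimal sketch (assumes a Fabric notebook; REST mode is chosen here only to illustrate the mode parameter):

```python
try:
    import sempy.fabric as fabric  # available only in a Fabric runtime
except ImportError:
    fabric = None

if fabric is not None:
    # REST mode avoids an XMLA connection; see the REST docs for columns.
    datasets = fabric.list_datasets(mode="rest")
    print(datasets.head())
```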
list_datasources
List all datasources in a dataset.
list_datasources(dataset: str | UUID, additional_xmla_properties: str | List[str] | None = None, workspace: str | UUID | None = None) -> DataFrame
Parameters

Name | Description
---|---
dataset (required) | Name or UUID of the dataset.
additional_xmla_properties | Additional XMLA datasource properties to include in the returned dataframe. Default value: None
workspace | The Fabric workspace name or UUID object containing the workspace ID. If None, resolves to the workspace of the attached lakehouse or, if no lakehouse is attached, to the workspace of the notebook. Default value: None

Returns

Type | Description
---|---
DataFrame | Dataframe listing all datasources.
list_expressions
List all expressions in a dataset.
list_expressions(dataset: str | UUID, additional_xmla_properties: str | List[str] | None = None, workspace: str | UUID | None = None) -> DataFrame
Parameters

Name | Description
---|---
dataset (required) | Name or UUID of the dataset.
additional_xmla_properties | Additional XMLA expression properties to include in the returned dataframe. Default value: None
workspace | The Fabric workspace name or UUID object containing the workspace ID. If None, resolves to the workspace of the attached lakehouse or, if no lakehouse is attached, to the workspace of the notebook. Default value: None

Returns

Type | Description
---|---
DataFrame | Dataframe listing the expressions.
list_gateways
List all the Power BI gateways.
list_gateways() -> DataFrame
Returns

Type | Description
---|---
DataFrame | DataFrame with one row per gateway.
list_hierarchies
List hierarchies in a dataset.
list_hierarchies(dataset: str | UUID, extended: bool | None = False, additional_xmla_properties: str | List[str] | None = None, workspace: str | UUID | None = None) -> DataFrame
Parameters

Name | Description
---|---
dataset (required) | Name or UUID of the dataset.
extended | Fetches extended column information. Default value: False
additional_xmla_properties | Additional XMLA level properties to include in the returned dataframe. Use Parent to navigate to the parent level. Default value: None
workspace | The Fabric workspace name or UUID object containing the workspace ID. If None, resolves to the workspace of the attached lakehouse or, if no lakehouse is attached, to the workspace of the notebook. Default value: None

Returns

Type | Description
---|---
DataFrame | Dataframe listing the hierarchies and their attributes.
list_items
Return a list of items in the specified workspace.
list_items(type: str | None = None, workspace: str | UUID | None = None) -> DataFrame
Parameters

Name | Description
---|---
type | Filter the list of items by the type specified. Default value: None
workspace | The Fabric workspace name or UUID object containing the workspace ID. If None, resolves to the workspace of the attached lakehouse or, if no lakehouse is attached, to the workspace of the notebook. Default value: None

Returns

Type | Description
---|---
DataFrame | DataFrame with one row per artifact.
list_measures
Retrieve all measures associated with the given dataset.
list_measures(dataset: str | UUID, additional_xmla_properties: str | List[str] | None = None, workspace: str | UUID | None = None) -> DataFrame
Parameters

Name | Description
---|---
dataset (required) | Name or UUID of the dataset.
additional_xmla_properties | Additional XMLA measure properties to include in the returned dataframe. Default value: None
workspace | The Fabric workspace name or UUID object containing the workspace ID. If None, resolves to the workspace of the attached lakehouse or, if no lakehouse is attached, to the workspace of the notebook. Default value: None

Returns

Type | Description
---|---
DataFrame | Dataframe listing measures and their attributes.
list_partitions
List all partitions in a dataset.
list_partitions(dataset: str | UUID, table: str | None = None, extended: bool | None = False, additional_xmla_properties: str | List[str] | None = None, workspace: str | UUID | None = None) -> DataFrame
Parameters

Name | Description
---|---
dataset (required) | Name or UUID of the dataset.
table | Name of the table. Default value: None
extended | Fetches extended column information. Default value: False
additional_xmla_properties | Additional XMLA partition properties to include in the returned dataframe. Use Parent to navigate to the parent level. Default value: None
workspace | The Fabric workspace name or UUID object containing the workspace ID. If None, resolves to the workspace of the attached lakehouse or, if no lakehouse is attached, to the workspace of the notebook. Default value: None

Returns

Type | Description
---|---
DataFrame | Dataframe listing the partitions.
list_perspectives
List all perspectives in a dataset.
list_perspectives(dataset: str | UUID, additional_xmla_properties: str | List[str] | None = None, workspace: str | UUID | None = None) -> DataFrame
Parameters

Name | Description
---|---
dataset (required) | Name or UUID of the dataset.
additional_xmla_properties | Additional XMLA perspective properties to include in the returned dataframe. Default value: None
workspace | The Fabric workspace name or UUID object containing the workspace ID. If None, resolves to the workspace of the attached lakehouse or, if no lakehouse is attached, to the workspace of the notebook. Default value: None

Returns

Type | Description
---|---
DataFrame | Dataframe listing all perspectives.
list_refresh_requests
Poll the status of refresh requests for a given dataset using Enhanced refresh with the Power BI REST API.
See details in: PBI Documentation
list_refresh_requests(dataset: str | UUID, workspace: str | UUID | None = None, top_n: int | None = None) -> DataFrame
Parameters

Name | Description
---|---
dataset (required) | Name or UUID of the dataset.
workspace | The Fabric workspace name or UUID object containing the workspace ID. If None, resolves to the workspace of the attached lakehouse or, if no lakehouse is attached, to the workspace of the notebook. Default value: None
top_n | Limit the number of refresh operations returned. Default value: None

Returns

Type | Description
---|---
DataFrame | Dataframe with statuses of the refresh requests retrieved based on the passed parameters.
list_relationship_violations
Validate if the content of tables matches relationships.
Relationships are extracted from the metadata in FabricDataFrames. The function examines results of joins for provided relationships and searches for inconsistencies with the specified relationship multiplicity.
Relationships from empty tables (dataframes) are assumed as valid.
list_relationship_violations(tables: Dict[str, FabricDataFrame] | List[FabricDataFrame], missing_key_errors='raise', coverage_threshold: float = 1.0, n_keys: int = 10) -> DataFrame
Parameters

Name | Description
---|---
tables (required) | A dictionary that maps table names to the dataframes with table content. If a list of dataframes is provided, the function will try to infer the names from the session variables; if it cannot, it will use the positional index to describe them in the results.
missing_key_errors | One of 'raise', 'warn', 'ignore'. Action to take when either the table or the column of a relationship is not found in the elements of the tables argument. Default value: 'raise'
coverage_threshold | Fraction of rows in the "from" part that need to join in an inner join. Default value: 1.0
n_keys | Number of missing keys to report. A random collection can be reported. Default value: 10

Returns

Type | Description
---|---
DataFrame | Dataframe with relationships, error type and error message. If there are no violations, returns an empty DataFrame.
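Since read_table propagates relationship metadata into FabricDataFrames, a validation pass might look like this sketch (assumes a Fabric notebook; the dataset and table names are hypothetical):

```python
try:
    import sempy.fabric as fabric  # available only in a Fabric runtime
except ImportError:
    fabric = None

if fabric is not None:
    # Relationship metadata is propagated into these FabricDataFrames.
    tables = {
        "Customer": fabric.read_table("My Dataset", "Customer"),
        "Sales": fabric.read_table("My Dataset", "Sales"),
    }
    # Tolerate up to 10% of "from"-side rows failing to join.
    violations = fabric.list_relationship_violations(
        tables, coverage_threshold=0.9
    )
```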
list_relationships
List all relationships found within the Power BI model.
list_relationships(dataset: str | UUID, extended: bool | None = False, additional_xmla_properties: str | List[str] | None = None, calculate_missing_rows: bool | None = False, workspace: str | UUID | None = None) -> DataFrame
Parameters

Name | Description
---|---
dataset (required) | Name or UUID of the dataset.
extended | Fetches extended column information. Default value: False
additional_xmla_properties | Additional XMLA relationship properties to include in the returned dataframe. Use Parent to navigate to the parent level. Default value: None
calculate_missing_rows | Calculate the number of missing rows in the relationship. Default value: False
workspace | The Fabric workspace name or UUID object containing the workspace ID. If None, resolves to the workspace of the attached lakehouse or, if no lakehouse is attached, to the workspace of the notebook. Default value: None

Returns

Type | Description
---|---
DataFrame | DataFrame with one row per relationship.
list_reports
Return a list of reports in the specified workspace.
list_reports(workspace: str | UUID | None = None) -> DataFrame
Parameters

Name | Description
---|---
workspace | The Fabric workspace name or UUID object containing the workspace ID. If None, resolves to the workspace of the attached lakehouse or, if no lakehouse is attached, to the workspace of the notebook. Default value: None

Returns

Type | Description
---|---
DataFrame | DataFrame with one row per report.
list_tables
List all tables in a dataset.
list_tables(dataset: str | UUID, include_columns: bool = False, include_partitions: bool = False, extended: bool = False, additional_xmla_properties: str | List[str] | None = None, workspace: str | UUID | None = None) -> DataFrame
Parameters

Name | Description
---|---
dataset (required) | Name or UUID of the dataset.
include_columns | Whether or not to include column level information. Cannot be combined with include_partitions or extended. Default value: False
include_partitions | Whether or not to include partition level information. Cannot be combined with include_columns or extended. Default value: False
extended | Fetches extended table information. Cannot be combined with include_columns or include_partitions. Default value: False
additional_xmla_properties | Additional XMLA table properties to include in the returned dataframe. Default value: None
workspace | The Fabric workspace name or UUID object containing the workspace ID. If None, resolves to the workspace of the attached lakehouse or, if no lakehouse is attached, to the workspace of the notebook. Default value: None

Returns

Type | Description
---|---
DataFrame | Dataframe listing the tables and optional columns.
list_translations
List all translations in a dataset.
list_translations(dataset: str | UUID, additional_xmla_properties: str | List[str] | None = None, workspace: str | UUID | None = None) -> DataFrame
Parameters

Name | Description
---|---
dataset (required) | Name or UUID of the dataset.
additional_xmla_properties | Additional XMLA translation properties to include in the returned dataframe. Default value: None
workspace | The Fabric workspace name or UUID object containing the workspace ID. If None, resolves to the workspace of the attached lakehouse or, if no lakehouse is attached, to the workspace of the notebook. Default value: None

Returns

Type | Description
---|---
DataFrame | Dataframe listing the translations.
list_workspaces
Return a list of workspaces the user has access to.
list_workspaces(filter: str | None = None, top: int | None = None, skip: int | None = None) -> DataFrame
Parameters

Name | Description
---|---
filter | OData filter expression. For example, to filter by name, use "name eq 'My workspace'". Default value: None
top | Maximum number of workspaces to return. Default value: None
skip | Number of workspaces to skip. Default value: None

Returns

Type | Description
---|---
DataFrame | DataFrame with one row per workspace.
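The OData filter above can be sketched as follows (assumes a Fabric notebook; the workspace name is hypothetical):

```python
try:
    import sempy.fabric as fabric  # available only in a Fabric runtime
except ImportError:
    fabric = None

# OData expression filtering workspaces by exact name.
odata_filter = "name eq 'My workspace'"

if fabric is not None:
    workspaces = fabric.list_workspaces(filter=odata_filter, top=10)
```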
plot_relationships
Visualize relationship dataframe with a graph.
plot_relationships(tables: Dict[str, FabricDataFrame] | List[FabricDataFrame], include_columns='keys', missing_key_errors='raise', *, graph_attributes: Dict | None = None) -> Digraph
Parameters
Name | Description |
---|---|
tables
Required
|
A dictionary that maps table names to the dataframes with table content. If a list of dataframes is provided, the function will try to infer the names from the session variables and if it cannot, it will use the positional index to describe them in the results. It needs to provided only when include_columns = 'all' and it will be used for mapping table names from relationships to the dataframe columns. |
include_columns
|
One of 'keys', 'all', 'none'. Indicates which columns should be included in the graph. Default value: 'keys'
|
missing_key_errors
|
One of 'raise', 'warn', 'ignore'. Action to take when either table or column of the relationship is not found in the elements of the argument tables. Default value: 'raise'
|
graph_attributes
|
Attributes passed to graphviz. Note that all values need to be strings. Useful attributes are:
Default value: None
|
Returns
Type | Description |
---|---|
Graph object containing all relationships. If columns are included (include_columns set to 'keys' or 'all'), they are represented as ports in the graph. |
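Because graphviz expects every attribute value as a string, even numeric settings such as dpi must be quoted. A sketch of a typical graph_attributes dictionary (the attribute names are standard graphviz graph attributes; the specific values are illustrative):

```python
# All graphviz attribute values must be strings, including numbers.
graph_attributes = {
    "rankdir": "LR",    # lay the relationship graph out left-to-right
    "dpi": "96",        # note "96", not 96 -- graphviz wants strings
    "splines": "ortho", # draw edges with right-angle bends
}

# Every value is already a string, as plot_relationships requires.
assert all(isinstance(v, str) for v in graph_attributes.values())
```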
read_parquet
Read a FabricDataFrame, including column metadata, from the parquet file specified by the path parameter, using Arrow.
read_parquet(path: str) -> FabricDataFrame
Parameters
Name | Description |
---|---|
path
Required
|
String containing the file path where the parquet file is located. |
Returns
Type | Description |
---|---|
FabricDataFrame containing table data from specified parquet. |
read_table
Read a PowerBI table into a FabricDataFrame.
read_table(dataset: str | UUID, table: str, fully_qualified_columns: bool = False, num_rows: int | None = None, multiindex_hierarchies: bool = False, mode: Literal['xmla', 'rest', 'onelake'] = 'xmla', onelake_import_method: Literal['spark', 'pandas'] | None = None, workspace: str | UUID | None = None, verbose: int = 0) -> FabricDataFrame
Parameters
Name | Description |
---|---|
dataset
Required
|
Name or UUID of the dataset. |
table
Required
|
Name of the table to read. |
fully_qualified_columns
|
Whether or not to represent columns in their fully qualified form (TableName[ColumnName]). Default value: False
|
num_rows
|
How many rows of the table to return. If None, all rows are returned. Default value: None
|
multiindex_hierarchies
|
Whether or not to convert existing PowerBI Hierarchies to pandas MultiIndex. Default value: False
|
mode
Required
|
Whether to use the XMLA endpoint ("xmla"), the REST API ("rest"), or the export of import datasets to OneLake ("onelake") to retrieve the data. |
onelake_import_method
|
The method to read from OneLake. Only effective when the mode is "onelake". Use "spark" to read the table with the Spark API, "pandas" to read it with the pandas API, or None to auto-select the proper method based on the current runtime. Default value: None
|
workspace
|
The Fabric workspace name or UUID object containing the workspace ID. Defaults to None, which resolves to the workspace of the attached lakehouse; if no lakehouse is attached, it resolves to the workspace of the notebook. Default value: None
|
verbose
|
Verbosity. 0 means no verbosity. Default value: 0
|
Returns
Type | Description |
---|---|
Dataframe for the given table name with metadata from the PowerBI model. |
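With fully_qualified_columns=True, column names come back in the TableName[ColumnName] form described above. A hypothetical helper (not part of the fabric package) for splitting such names back into their parts:

```python
import re

# Hypothetical helper: split a fully qualified column name of the form
# TableName[ColumnName] into its table and column components.
def split_qualified(name: str) -> tuple[str, str]:
    m = re.fullmatch(r"(.+?)\[(.+)\]", name)
    if m is None:
        raise ValueError(f"not a fully qualified column name: {name!r}")
    return m.group(1), m.group(2)

print(split_qualified("DimCustomer[CustomerKey]"))  # ('DimCustomer', 'CustomerKey')
```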
refresh_dataset
Refresh data associated with the given dataset.
For detailed documentation on the implementation see Enhanced refresh with the Power BI REST API.
refresh_dataset(dataset: str | UUID, workspace: str | UUID | None = None, refresh_type: str = 'automatic', max_parallelism: int = 10, commit_mode: str = 'transactional', retry_count: int = 0, objects: List | None = None, apply_refresh_policy: bool = True, effective_date: date = datetime.date.today(), verbose: int = 0) -> str
Parameters
Name | Description |
---|---|
dataset
Required
|
Name or UUID of the dataset. |
workspace
|
The Fabric workspace name or UUID object containing the workspace ID. Defaults to None, which resolves to the workspace of the attached lakehouse; if no lakehouse is attached, it resolves to the workspace of the notebook. Default value: None
|
refresh_type
|
The type of processing to perform. Types align with the TMSL refresh command types: full, clearValues, calculate, dataOnly, automatic, and defragment. The add type isn't supported. Defaults to "automatic". Default value: "automatic"
|
max_parallelism
|
Determines the maximum number of threads that can run the processing commands in parallel. This value aligns with the MaxParallelism property that can be set in the TMSL Sequence command or by using other methods. Defaults to 10. Default value: 10
|
commit_mode
|
Determines whether to commit objects in batches or only when complete. Modes are "transactional" and "partialBatch". Defaults to "transactional". Default value: "transactional"
|
retry_count
|
Number of times the operation retries before failing. Defaults to 0. Default value: 0
|
objects
|
A list of objects to process. Each object includes table when processing an entire table, or table and partition when processing a partition. If no objects are specified, the entire dataset refreshes. Pass the output of json.dumps of a structure that specifies the objects that you want to refresh, for example the "DimCustomer1" partition of table "DimCustomer" together with the complete table "DimDate".
Default value: None
|
apply_refresh_policy
|
If an incremental refresh policy is defined, determines whether to apply the policy. Modes are true or false. If the policy isn't applied, the full process leaves partition definitions unchanged, and fully refreshes all partitions in the table. If commitMode is transactional, applyRefreshPolicy can be true or false. If commitMode is partialBatch, applyRefreshPolicy of true isn't supported, and applyRefreshPolicy must be set to false. Default value: True
|
effective_date
|
If an incremental refresh policy is applied, the effectiveDate parameter overrides the current date. Default value: datetime.date.today()
|
verbose
|
If set to non-zero, extensive log output is printed. Default value: 0
|
Returns
Type | Description |
---|---|
The refresh request id. |
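The shape of the objects payload follows the Power BI enhanced-refresh request body: each entry names a table, optionally with a partition. A sketch of building the payload for the DimCustomer/DimDate example mentioned above:

```python
import json

# Refresh the "DimCustomer1" partition of table "DimCustomer" and the
# complete table "DimDate". Each entry names a table, and optionally a
# partition, per the enhanced-refresh object shape.
objects = [
    {"table": "DimCustomer", "partition": "DimCustomer1"},
    {"table": "DimDate"},
]

# The docs above suggest passing the json.dumps output of this structure.
payload = json.dumps(objects)
print(payload)
```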
refresh_tom_cache
Refresh TOM cache in the notebook kernel.
refresh_tom_cache(workspace: str | UUID | None = None)
Parameters
Name | Description |
---|---|
workspace
|
The Fabric workspace name or UUID object containing the workspace ID. Defaults to None, which resolves to the workspace of the attached lakehouse; if no lakehouse is attached, it resolves to the workspace of the notebook. Default value: None
|
resolve_dataset_id
Resolve the dataset ID by name in the specified workspace.
resolve_dataset_id(dataset_name: str, workspace: str | UUID | None = None) -> str
Parameters
Name | Description |
---|---|
dataset_name
Required
|
Name of the dataset to be resolved. |
workspace
|
The Fabric workspace name or UUID object containing the workspace ID. Defaults to None, which resolves to the workspace of the attached lakehouse; if no lakehouse is attached, it resolves to the workspace of the notebook. Default value: None
|
Returns
Type | Description |
---|---|
The ID of the specified dataset. |
resolve_dataset_name
Resolve the dataset name by ID in the specified workspace.
resolve_dataset_name(dataset_id: str | UUID, workspace: str | UUID | None = None) -> str
Parameters
Name | Description |
---|---|
dataset_id
Required
|
Dataset ID or UUID object containing the dataset ID to be resolved. |
workspace
|
The Fabric workspace name or UUID object containing the workspace ID. Defaults to None, which resolves to the workspace of the attached lakehouse; if no lakehouse is attached, it resolves to the workspace of the notebook. Default value: None
|
Returns
Type | Description |
---|---|
The name of the specified dataset. |
resolve_item_id
Resolve the item ID by name in the specified workspace.
The item type can be given to limit the search. Otherwise the function will search for all items in the workspace.
Please see ItemTypes (/en-us/rest/api/fabric/core/items/create-item?tabs=HTTP#itemtype) for all supported item types.
resolve_item_id(item_name: str, type: str | None = None, workspace: str | UUID | None = None) -> str
Parameters
Name | Description |
---|---|
item_name
Required
|
Name of the item to be resolved. |
type
|
Type of the item to be resolved. Default value: None
|
workspace
|
The Fabric workspace name or UUID object containing the workspace ID. Defaults to None, which resolves to the workspace of the attached lakehouse; if no lakehouse is attached, it resolves to the workspace of the notebook. Default value: None
|
Returns
Type | Description |
---|---|
The item ID of the specified item. |
resolve_item_name
Resolve the item name by ID in the specified workspace.
The item type can be given to limit the search. Otherwise the function will search for all items in the workspace.
Please see ItemTypes (/en-us/rest/api/fabric/core/items/create-item?tabs=HTTP#itemtype) for all supported item types.
resolve_item_name(item_id: str | UUID, type: str | None = None, workspace: str | UUID | None = None) -> str
Parameters
Name | Description |
---|---|
item_id
Required
|
Item ID or UUID object containing the item ID to be resolved. |
type
|
Type of the item to be resolved. Default value: None
|
workspace
|
The Fabric workspace name or UUID object containing the workspace ID. Defaults to None, which resolves to the workspace of the attached lakehouse; if no lakehouse is attached, it resolves to the workspace of the notebook. Default value: None
|
Returns
Type | Description |
---|---|
The item name of the specified item. |
resolve_workspace_id
Resolve the workspace name or ID to the workspace UUID.
resolve_workspace_id(workspace: str | UUID | None = None) -> str
Parameters
Name | Description |
---|---|
workspace
|
The Fabric workspace name or UUID object containing the workspace ID. Defaults to None, which resolves to the workspace of the attached lakehouse; if no lakehouse is attached, it resolves to the workspace of the notebook. Default value: None
|
Returns
Type | Description |
---|---|
The workspace UUID. |
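Several functions here accept str | UUID for the workspace argument: a value that parses as a UUID is treated as the workspace ID, while any other string is treated as a workspace name. A sketch of that distinction (illustrative only, not the package's implementation):

```python
from uuid import UUID

# Illustrative only: if the input parses as a UUID, it can be treated
# as the workspace ID directly; otherwise it is a workspace name that
# still needs to be resolved to an ID.
def looks_like_workspace_id(workspace: str) -> bool:
    try:
        UUID(workspace)
        return True
    except ValueError:
        return False

print(looks_like_workspace_id("8b5f3d8c-8b2d-4f31-90a6-2a6d5f1d0b5e"))  # True
print(looks_like_workspace_id("My workspace"))                           # False
```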
resolve_workspace_name
Resolve the workspace name or ID to the workspace name.
resolve_workspace_name(workspace: str | UUID | None = None) -> str
Parameters
Name | Description |
---|---|
workspace
|
The Fabric workspace name or UUID object containing the workspace ID. Defaults to None, which resolves to the workspace of the attached lakehouse; if no lakehouse is attached, it resolves to the workspace of the notebook. Default value: None
|
Returns
Type | Description |
---|---|
The workspace name. |
run_notebook_job
Run a notebook job and wait for it to complete.
run_notebook_job(notebook_id: str, max_attempts: int = 10, workspace: str | UUID | None = None) -> str
Parameters
Name | Description |
---|---|
notebook_id
Required
|
The ID of the notebook to run. |
max_attempts
|
Maximum number of retries to wait for creation of the notebook. Default value: 10
|
workspace
|
The Fabric workspace name or UUID object containing the workspace ID. Defaults to None, which resolves to the workspace of the attached lakehouse; if no lakehouse is attached, it resolves to the workspace of the notebook. Default value: None
|
Returns
Type | Description |
---|---|
The job id. |
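The max_attempts parameter bounds how long the call waits before giving up. The underlying pattern can be sketched as a generic bounded poll loop (hypothetical and simplified, with no actual REST calls; the status values are illustrative):

```python
import time

# Hypothetical sketch of bounded polling, as suggested by max_attempts:
# keep checking a status source until it reports a terminal state or the
# attempt budget is exhausted.
def wait_for_completion(check_status, max_attempts: int = 10, delay: float = 0.0) -> str:
    for _ in range(max_attempts):
        status = check_status()
        if status in ("Completed", "Failed"):
            return status
        time.sleep(delay)
    raise TimeoutError(f"job did not finish within {max_attempts} attempts")

# Simulated status source: pending twice, then completed.
statuses = iter(["NotStarted", "Running", "Completed"])
print(wait_for_completion(lambda: next(statuses), max_attempts=10))  # Completed
```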