Ernestas Kulik edited this page Mar 25, 2020 · 1 revision

Actions

Every Action Plugin must subclass the pyfaf.actions.Action class directly. The file containing the class should be placed into the $PYTHONDIR/pyfaf/actions/ directory, its name should not start with an underscore _, and it should have a .py extension. This triggers an automatic import and registers the plugin.

Each Action Plugin needs to have a name attribute: a short identifier of the plugin, lowercase and without spaces (it will also be used as the command line identifier of the action when executing $ faf <action> ...).

Each plugin should override the following methods:

  • __init__(self, *args, **kwargs) - At least call super to initialize the superclass. This gives you the benefit of unified logging with self.log_debug, self.log_info, self.log_warn and self.log_error.

  • tweak_cmdline_parser(self, parser) - Tweak the command line parser with action-specific options. If you need no special tweaking, you can omit this method completely.

  • run(self, db, cmdline) - Implement the actual action. It is called when $ faf <your_action> ... is executed and gives you a connection to storage db and a parsed command line cmdline (including the custom options specified by tweak_cmdline_parser).
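The contract above can be sketched as follows. The MyCount class and its --limit option are hypothetical examples, not part of faf; in a real deployment the file would live in $PYTHONDIR/pyfaf/actions/. A stand-in base class is defined so the sketch can run even without faf installed.

```python
try:
    from pyfaf.actions import Action
except ImportError:
    class Action:  # stand-in base class so the sketch runs without faf installed
        def log_info(self, msg):
            print(msg)

class MyCount(Action):
    name = "mycount"  # lowercase, no spaces; invoked as `faf mycount`

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)  # gives us the unified logging methods

    def tweak_cmdline_parser(self, parser):
        # Add an action-specific option (an argparse-style parser is assumed).
        parser.add_argument("--limit", type=int, default=10,
                            help="maximum number of items to process")

    def run(self, db, cmdline):
        # db is the storage connection; cmdline is the parsed command line,
        # including the custom --limit option added above.
        self.log_info("Processing up to {0} items".format(cmdline.limit))
```

Once the file is in place, the automatic import picks the class up and `faf mycount --limit 5` reaches run() with cmdline.limit set to 5.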

All registered actions are loaded into a dictionary pyfaf.actions.actions where the name attribute is used as the key and the actual plugin instance as the value:

>>> pyfaf.actions.actions
{
 "archadd": <pyfaf.actions.archadd.ArchAdd at 0x3d3cc50>,
 "archlist": <pyfaf.actions.archlist.ArchList at 0x3d3ccd0>,
 "init": <pyfaf.actions.init.Init at 0x3d3cb50>,
 "opsysadd": <pyfaf.actions.opsysadd.OpSysAdd at 0x3d3c490>,
 "opsysdel": <pyfaf.actions.opsysdel.OpSysDel at 0x3d3c9d0>,
 "opsyslist": <pyfaf.actions.opsyslist.OpSysList at 0x3d3ca50>,
 "pull-components": <pyfaf.actions.pull_components.PullComponents at 0x3d3cf10>,
 "pull-releases": <pyfaf.actions.pull_releases.PullReleases at 0x3d3cd50>,
 "releaselist": <pyfaf.actions.releaselist.ReleaseList at 0x3d3cad0>,
 "repoadd": <pyfaf.actions.repoadd.RepoAdd at 0x3d3cf90>,
 "repoassign": <pyfaf.actions.repoassign.RepoAssign at 0x3d3cfd0>,
 "repodel": <pyfaf.actions.repodel.RepoDel at 0x3d42090>,
 "repoimport": <pyfaf.actions.repoimport.RepoImport at 0x3d42110>,
 "repoinfo": <pyfaf.actions.repoinfo.RepoInfo at 0x3d42190>,
 "repolist": <pyfaf.actions.repolist.RepoList at 0x3d42210>,
 "repomod": <pyfaf.actions.repomod.RepoMod at 0x3d42290>,
 "reposync": <pyfaf.actions.reposync.RepoSync at 0x3d42310>,
 "save-reports": <pyfaf.actions.save_reports.SaveReports at 0x3d3cdd0>,
 "shell": <pyfaf.actions.shell.Shell at 0x3d3cbd0>
}

Operating systems

Every Operating System Plugin must subclass the pyfaf.opsys.System class directly. The file containing the class should be placed into the $PYTHONDIR/pyfaf/opsys/ directory, its name should not start with an underscore _, and it should have a .py extension. This triggers an automatic import and registers the plugin.

Each Operating System Plugin needs to have a name attribute: a short identifier of the plugin, lowercase and without spaces (it will also be used as the command line identifier of the operating system). It also needs a nice_name attribute: the full name as it should be shown to end users.

Each plugin should override the following methods. Even if you do not need the method, you can just use something like self.log_info("method XYZ is not required for My Operating System").

  • __init__(self, *args, **kwargs) - At least call super to initialize the superclass. This gives you the benefit of unified logging with self.log_debug, self.log_info, self.log_warn and self.log_error.

  • validate_ureport(self, ureport) - Validate the operating system part of the uReport. Please note that the ureport argument is not the whole uReport, but only the ureport["os"] sub-object. The name, version and architecture keys are already processed by the generic validator. You must raise FafError or any of its subclasses if the uReport is invalid. Returning True is only a way to explicitly signal that everything is OK; the return value is not actually checked anywhere.

  • validate_packages(self, packages) - Validate the packages list from the uReport. Please note that the packages argument is the ureport["packages"] sub-object. You must raise FafError or any of its subclasses if any part of a package is invalid. Returning True is only a way to explicitly signal that everything is OK; the return value is not actually checked anywhere.

  • save_ureport(self, db, db_report, ureport, packages, flush=False, count=1) - Associate all the operating system data with the actual report. You get db - the pyfaf.storage.Database object through which you can query the storage; db_report - the pyfaf.storage.Report object you are associating with (not necessarily flushed into the DB, thus it need not have a valid id attribute); ureport and packages, which have already passed validate_*; and flush - whether the caller wants you to flush db afterwards. count specifies the count of this report being currently saved; see the save-reports --speedup action.

  • get_releases(self) - Get the list of releases (or versions) of your operating system. Return a dictionary {"release1": properties1, "release2": properties2, ...} where properties is a dictionary {"property1": "value1", "property2": "value2", ...}. Example:
{
  "1.0": { "status": "END_OF_LIFE" },
  "1.1": { "status": "LIVE" },
  "2.0": { "status": "BETA" },
}

The releases will be appended to the nice_name to identify operating system releases such as Operating System 1.0, Operating System 1.1 etc.

  • get_components(self, release) - Get a list of supported components of the given operating system release. The release argument will be the same as returned from get_releases. Return a list of strings.

  • get_component_acls(self, component, release=None) - Get the ACLs (maintainer, co-maintainer, qa, ...) of a given component. Return a dictionary {"username1": acls1, "username2": acls2, ...} where acls is a dictionary {"acl1": True, "acl2": False, ...}. If release is None, return a dictionary of ACLs through all available releases such as {"1.0": self.get_component_acls(component, "1.0"), "2.0": self.get_component_acls(component, "2.0"), ...}. Examples:
{
  "user1": { "maintainer": True, "watch_bugs": True, "commit": True },
  "user2": { "maintainer": False, "watch_bugs": True, "commit": False },
}
{
  "1.0": {
    "user1": { "maintainer": True },
    "user2": { "maintainer": False },
  },
  "2.0": {
    "user2": { "maintainer": True },
  },
}
  • get_build_candidates(self, db) - Get builds that are not yet assigned to a component, but are candidates for the current operating system. You get db - the pyfaf.storage.Database object through which you can query the storage.

  • check_pkgname_match(self, packages, parser) - Verify whether a package in uReport matches the given knowledgebase rule. You get packages - the ureport["packages"] sub-object and parser - _sre.SRE_Pattern (created by re.compile(pattern)).
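Putting the validation and release parts of this contract together, a skeleton might look like the sketch below. The "myos" name, the release strings, and the FafError import path are assumptions made for illustration; stand-in classes are defined so the sketch runs without faf installed.

```python
try:
    from pyfaf.opsys import System
    from pyfaf.common import FafError  # import path assumed
except ImportError:
    class System:  # stand-ins so the sketch runs without faf installed
        pass
    class FafError(Exception):
        pass

class MyOS(System):
    name = "myos"
    nice_name = "My Operating System"

    def validate_ureport(self, ureport):
        # ureport is only the ureport["os"] sub-object; name, version and
        # architecture have already been checked by the generic validator.
        if ureport.get("version") not in self.get_releases():
            raise FafError("Unknown release: {0}".format(ureport.get("version")))
        return True  # informative only; the return value is never checked

    def get_releases(self):
        # Maps release identifiers to property dictionaries, as described above.
        return {
            "1.0": {"status": "END_OF_LIFE"},
            "2.0": {"status": "LIVE"},
        }
```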

All registered operating systems are loaded into a dictionary pyfaf.opsys.systems where the name attribute is used as the key and the actual plugin instance as the value:

>>> pyfaf.opsys.systems
{"fedora": <pyfaf.opsys.fedora.Fedora at 0x280b850>}

Problems

Every Problem Plugin must subclass the pyfaf.problemtypes.ProblemType class directly. The file containing the class should be placed into the $PYTHONDIR/pyfaf/problemtypes/ directory, its name should not start with an underscore _, and it should have a .py extension. This triggers an automatic import and registers the plugin.

Each Problem Plugin needs to have a name attribute: a short identifier of the plugin, lowercase and without spaces (it will also be used as the command line identifier of the problem type). It also needs a nice_name attribute: the full description as it should be shown to end users.

Each plugin should override the following methods. Even if you do not need the method, you can just use something like self.log_info("method XYZ is not required for this type of problems").

  • __init__(self, *args, **kwargs) - At least call super to initialize the superclass. This gives you the benefit of unified logging with self.log_debug, self.log_info, self.log_warn and self.log_error.

  • validate_ureport(self, ureport) - Validate the problem part of the uReport. Please note that the ureport argument is not the whole uReport, but only the ureport["problem"] sub-object. The name key is already processed by the generic validator. You must raise FafError or any of its subclasses if the uReport is invalid. Returning True is only a way to explicitly signal that everything is OK; the return value is not actually checked anywhere.

  • hash_ureport(self, ureport) - Quickly compute a UUID for the given uReport. Please note that the ureport argument is not the whole uReport, but only the ureport["problem"] sub-object. The hash is used for identifying the problem and for duplicate detection. If ureport1 == ureport2, then hash_ureport(ureport1) == hash_ureport(ureport2). If ureport1 and ureport2 differ, the hashes must differ as well.

  • save_ureport(self, db, db_report, ureport, flush=False, count=1) - Associate all the problem-specific data with the actual report. You get db - the pyfaf.storage.Database object through which you can query the storage; db_report - the pyfaf.storage.Report object you are associating with (not necessarily flushed into the DB, thus it need not have a valid id attribute); ureport, which has already passed validate_ureport; and flush - whether the caller wants you to flush db afterwards. count specifies the count of this report being currently saved; see the save-reports --speedup action.

  • save_ureport_post_flush(self) - You may need to do some actions after the data has been flushed to storage. This is necessary if you want to add LOBs to your objects, as LOBs are saved under the primary key of the objects, which is assigned at the moment of flush.

  • get_component_name(self, ureport) - Get the component name against which the report should be filed.

  • get_ssources_for_retrace(self, db) - Get a list of pyfaf.storage.SymbolSource objects that need to be retraced. You get db - the pyfaf.storage.Database object through which you can query the storage.

  • find_packages_for_ssource(self, db, db_ssource) - Fix db_ssource if necessary (e.g. UsrMove or make the path absolute), find the appropriate debuginfo, binary and source packages and return (db_ssource_fixed, (db_debug_pkg, db_bin_pkg, db_src_pkg)). You get db - the pyfaf.storage.Database object through which you can query the storage and db_ssource - the pyfaf.storage.SymbolSource object that should be processed.

  • retrace(self, db, task) - Execute the retrace process. You get db - the pyfaf.storage.Database object through which you can query the storage and task - the pyfaf.retrace.RetraceTask object that contains unpacked paths of all required packages. Do not flush db.

  • compare(self, db_report1, db_report2) - Compare two pyfaf.storage.Report objects. Return an integer: negative when db_report1 > db_report2, zero when db_report1 == db_report2, and positive when db_report2 > db_report1.

  • mass_compare(self, db_reports) - Compare many pyfaf.storage.Report objects at the same time. Some libraries (btparser, satyr) provide a way to compare many reports at the same time, returning a btparser.Distances or satyr.Distances object. This may be a significant speed optimization. Return (db_reports_new, distances) where db_reports_new is an ordered subset of db_reports (some of db_reports may be invalid or not comparable with others, so these should be skipped). The order of db_reports_new must match the order of values in distances.

  • check_btpath_match(self, ureport, parser) - Verify whether a path in uReport stacktrace matches the given knowledgebase rule. You get ureport - the ureport["problem"] sub-object and parser - _sre.SRE_Pattern (created by re.compile(pattern)).
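The hash_ureport contract described above (equal problem sub-objects hash equally, different ones differently) can be met with any deterministic serialization. This sketch is an illustration, not faf's actual implementation; MyProblem merely stands in for a pyfaf.problemtypes.ProblemType subclass.

```python
import hashlib
import json

class MyProblem:  # stands in for a pyfaf.problemtypes.ProblemType subclass
    name = "myproblem"
    nice_name = "My problem type"

    def hash_ureport(self, ureport):
        # ureport is only the ureport["problem"] sub-object. Sorting the keys
        # makes the serialization independent of dictionary ordering, so equal
        # sub-objects always produce the same digest.
        serialized = json.dumps(ureport, sort_keys=True)
        return hashlib.sha1(serialized.encode("utf-8")).hexdigest()
```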

All registered problem types are loaded into a dictionary pyfaf.problemtypes.problemtypes where the name attribute is used as the key and the actual plugin instance as the value:

>>> pyfaf.problemtypes.problemtypes
{
 "core": <pyfaf.problemtypes.core.CoredumpProblem at 0x3828990>,
 "kerneloops": <pyfaf.problemtypes.kerneloops.KerneloopsProblem at 0x3828a90>,
 "python": <pyfaf.problemtypes.python.PythonProblem at 0x3828a10>
}

Repositories

Every Repository Plugin must subclass the pyfaf.repos.Repo class directly. The file containing the class should be placed into the $PYTHONDIR/pyfaf/repos/ directory, its name should not start with an underscore _, and it should have a .py extension. This triggers an automatic import and registers the plugin.

Each plugin should override the following methods:

  • __init__(self, *args, **kwargs) - At least call super to initialize the superclass. This gives you the benefit of unified logging with self.log_debug, self.log_info, self.log_warn and self.log_error.

  • list_packages(self, architectures) - Get all the packages from the repository that are built for any architecture specified in architectures. Return a list of dictionaries:
{
  "url": "http://localhost/fafrepo/glibc-2.16-31.fc18.x86_64.rpm", # URL to download from
  "type": "rpm",                                                   # package type - rpm/deb/tgz/...
  "name": "glibc",                                                 # package name
  "version": "2.16",                                               # upstream version
  "release": "31.fc18",                                            # downstream version
  "arch": "x86_64",                                                # architecture
  "base_package_name": "glibc",                                    # source package name (a.k.a. component)
  "filename": "glibc-2.16-31.fc18.x86_64.rpm",                     # file name
  "epoch": "0",                                                    # if necessary
}
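A minimal sketch returning the dictionary shape shown above might look like this. The StaticRepo class and its single hard-coded package are invented for illustration; a stand-in base class is defined so the sketch runs without faf installed.

```python
try:
    from pyfaf.repos import Repo
except ImportError:
    class Repo:  # stand-in base class so the sketch runs without faf installed
        pass

class StaticRepo(Repo):
    def list_packages(self, architectures):
        packages = [{
            "url": "http://localhost/fafrepo/glibc-2.16-31.fc18.x86_64.rpm",
            "type": "rpm",
            "name": "glibc",
            "version": "2.16",
            "release": "31.fc18",
            "arch": "x86_64",
            "base_package_name": "glibc",
            "filename": "glibc-2.16-31.fc18.x86_64.rpm",
            "epoch": "0",
        }]
        # Only return packages built for one of the requested architectures.
        return [p for p in packages if p["arch"] in architectures]
```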

All registered repository types are loaded into a dictionary pyfaf.repos.repo_types where the lowercased class name is used as the key and the actual plugin class as the value (unlike other plugins, repositories are not instantiated and do not require the name attribute):

>>> pyfaf.repos.repo_types
{"yum": pyfaf.repos.yum.Yum}

Solution finders

Every Solution Finder Plugin must subclass the pyfaf.solutionfinders.SolutionFinder class directly. The file containing the class should be placed into the $PYTHONDIR/pyfaf/solutionfinders/ directory, its name should not start with an underscore _, and it should have a .py extension. This triggers an automatic import and registers the plugin.

Each Solution Finder Plugin needs to have a name attribute: a short identifier of the plugin, lowercase and without spaces.

Each plugin should override the following methods:

  • __init__(self, *args, **kwargs) - At least call super to initialize the superclass. This gives you the benefit of unified logging with self.log_debug, self.log_info, self.log_warn and self.log_error.

  • find_solution_ureport(self, db, ureport) - Check whether there is a solution for the given ureport (a dict). Return a solution object as specified below, or None.

  • find_solution_db_report(self, db, db_report) - Check whether there is a solution for the given db_report (a Report). Return a solution object as specified below, or None.

Solution object must have these attributes:

  • cause - A short description of the cause
  • url - A URL with more information
  • note_text - Plain text solution description
  • note_html - HTML formatted solution description
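The attribute contract above can be illustrated with a small sketch. The Solution class, the find_solution_ureport function, and the "out-of-date" reason are all hypothetical names invented for this example.

```python
class Solution:
    """Carries the four attributes a solution finder must return."""
    def __init__(self, cause, url, note_text, note_html):
        self.cause = cause
        self.url = url
        self.note_text = note_text
        self.note_html = note_html

def find_solution_ureport(db, ureport):
    # Return a solution object when one is known, or None otherwise.
    if ureport.get("reason") == "out-of-date":
        return Solution(
            cause="Outdated package",
            url="http://example.org/update-howto",
            note_text="Update the affected package.",
            note_html="<p>Update the affected package.</p>",
        )
    return None
```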

All registered solution finders are loaded into a dictionary pyfaf.solutionfinders.solutionfinders where the name attribute is used as the key and the actual plugin instance as the value.
