code : string
signature : string
docstring : string
loss_without_docstring : float64
loss_with_docstring : float64
factor : float64
n_f = self.partial_transform(traj).shape[1] zippy=zip(itertools.repeat("N/A", n_f), itertools.repeat("N/A", n_f), itertools.repeat("N/A", n_f), itertools.repeat(("N/A","N/A","N/A","N/A"), n_f)) return dict_maker(zippy)
def describe_features(self, traj)
Generic method for describing features. Parameters ---------- traj : mdtraj.Trajectory Trajectory to use Returns ------- feature_descs : list of dict Dictionary describing each feature with the following information about the atoms participating in each feature - resnames: unique names of residues - atominds: the four atom indices - resseqs: unique residue sequence ids (not necessarily 0-indexed) - resids: unique residue ids (0-indexed) - featurizer: Featurizer name - featuregroup: Other information Notes ------- Method resorts to returning N/A for everything if describe_features is not implemented in the subclass
4.524621
4.386999
1.03137
traj.superpose(self.reference_traj, atom_indices=self.superpose_atom_indices) diff2 = (traj.xyz[:, self.atom_indices] - self.reference_traj.xyz[0, self.atom_indices]) ** 2 x = np.sqrt(np.sum(diff2, axis=2)) return x
def partial_transform(self, traj)
Featurize an MD trajectory into a vector space via distance after superposition Parameters ---------- traj : mdtraj.Trajectory A molecular dynamics trajectory to featurize. Returns ------- features : np.ndarray, dtype=float, shape=(n_samples, n_features) A featurized trajectory is a 2D array of shape `(length_of_trajectory x n_features)` where each `features[i]` vector is computed by applying the featurization function to the `i`th snapshot of the input trajectory. See Also -------- transform : simultaneously featurize a collection of MD trajectories
4.119519
4.471361
0.921312
if self.atom_indices is not None: sliced_traj = traj.atom_slice(self.atom_indices) else: sliced_traj = traj result = libdistance.cdist( sliced_traj, self.sliced_reference_traj, 'rmsd' ) return self._transform(result)
def partial_transform(self, traj)
Featurize an MD trajectory into a vector space via distance after superposition Parameters ---------- traj : mdtraj.Trajectory A molecular dynamics trajectory to featurize. Returns ------- features : np.ndarray, shape=(n_frames, n_ref_frames) The RMSD value of each frame of the input trajectory to be featurized versus each frame in the reference trajectory. The number of features is the number of reference frames. See Also -------- transform : simultaneously featurize a collection of MD trajectories
4.755688
5.294437
0.898242
feature_descs = [] # fill in the atom indices using just the first frame self.partial_transform(traj[0]) top = traj.topology aind_tuples = [self.atom_indices for _ in range(self.sliced_reference_traj.n_frames)] zippy = zippy_maker(aind_tuples, top) zippy = itertools.product(["LandMarkFeaturizer"], ["RMSD"], [self.sigma], zippy) feature_descs.extend(dict_maker(zippy)) return feature_descs
def describe_features(self, traj)
Return a list of dictionaries describing the LandmarkRMSD features. Parameters ---------- traj : mdtraj.Trajectory The trajectory to describe Returns ------- feature_descs : list of dict Dictionary describing each feature with the following information about the atoms participating in each feature - resnames: unique names of residues - atominds: the atom indices used - resseqs: unique residue sequence ids (not necessarily 0-indexed) - resids: unique residue ids (0-indexed) - featurizer: LandMarkFeaturizer - featuregroup: RMSD - other info: the value of sigma
12.608794
11.000287
1.146224
d = md.geometry.compute_distances(traj, self.pair_indices, periodic=self.periodic) return d ** self.exponent
def partial_transform(self, traj)
Featurize an MD trajectory into a vector space via pairwise atom-atom distances Parameters ---------- traj : mdtraj.Trajectory A molecular dynamics trajectory to featurize. Returns ------- features : np.ndarray, dtype=float, shape=(n_samples, n_features) A featurized trajectory is a 2D array of shape `(length_of_trajectory x n_features)` where each `features[i]` vector is computed by applying the featurization function to the `i`th snapshot of the input trajectory. See Also -------- transform : simultaneously featurize a collection of MD trajectories
8.844834
12.62221
0.700736
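The record above raises pairwise atom-atom distances to a configurable exponent. A minimal numpy-only sketch of that step, with invented coordinates and pair indices (the real code uses md.geometry.compute_distances and can handle periodic boundaries, which this sketch ignores):

    import numpy as np

    # Fake coordinates: 5 frames, 4 atoms, xyz (nanometers).
    xyz = np.random.rand(5, 4, 3)
    pair_indices = np.array([[0, 1], [0, 2], [1, 3]])
    exponent = 1.0

    # Pairwise distances for the requested atom pairs, one row per frame.
    diff = xyz[:, pair_indices[:, 0]] - xyz[:, pair_indices[:, 1]]
    d = np.sqrt(np.sum(diff ** 2, axis=2))
    features = d ** exponent          # shape (n_frames, n_pairs)
    print(features.shape)             # (5, 3)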
feature_descs = [] top = traj.topology residue_indices = [[top.atom(i[0]).residue.index, top.atom(i[1]).residue.index] for i in self.atom_indices] aind = [] resseqs = [] resnames = [] for ind,resid_ids in enumerate(residue_indices): aind += [[i for i in self.atom_indices[ind]]] resseqs += [[top.residue(ri).resSeq for ri in resid_ids]] resnames += [[top.residue(ri).name for ri in resid_ids]] zippy = itertools.product(["AtomPairs"], ["Distance"], ["Exponent {}".format(self.exponent)], zip(aind, resseqs, residue_indices, resnames)) feature_descs.extend(dict_maker(zippy)) return feature_descs
def describe_features(self, traj)
Return a list of dictionaries describing the atom pair features. Parameters ---------- traj : mdtraj.Trajectory The trajectory to describe Returns ------- feature_descs : list of dict Dictionary describing each feature with the following information about the atoms participating in each pair - resnames: unique names of residues - atominds: the two atom indices - resseqs: unique residue sequence ids (not necessarily 0-indexed) - resids: unique residue ids (0-indexed) - featurizer: AtomPairsFeaturizer - featuregroup: Distance. - other info : Value of the exponent
4.599507
3.796385
1.211549
feature_descs = [] for dihed_type in self.types: # TODO: Don't recompute dihedrals, just get the indices func = getattr(md, 'compute_%s' % dihed_type) # ainds is a list of four-tuples of atoms participating # in each dihedral aind_tuples, _ = func(traj) top = traj.topology zippy = zippy_maker(aind_tuples, top) if self.sincos: zippy = itertools.product(['Dihedral'],[dihed_type], ['sin', 'cos'], zippy) else: zippy = itertools.product(['Dihedral'],[dihed_type], ['nosincos'], zippy) feature_descs.extend(dict_maker(zippy)) return feature_descs
def describe_features(self, traj)
Return a list of dictionaries describing the dihedral features. Parameters ---------- traj : mdtraj.Trajectory The trajectory to describe Returns ------- feature_descs : list of dict Dictionary describing each feature with the following information about the atoms participating in each dihedral - resnames: unique names of residues - atominds: the four atom indices - resseqs: unique residue sequence ids (not necessarily 0-indexed) - resids: unique residue ids (0-indexed) - featurizer: Dihedral - featuregroup: the type of dihedral angle and whether sin or cos has been applied.
6.474621
5.366882
1.206403
x = [] for a in self.types: func = getattr(md, 'compute_%s' % a) _, y = func(traj) if self.sincos: x.extend([np.sin(y), np.cos(y)]) else: x.append(y) return np.hstack(x)
def partial_transform(self, traj)
Featurize an MD trajectory into a vector space via calculation of dihedral (torsion) angles Parameters ---------- traj : mdtraj.Trajectory A molecular dynamics trajectory to featurize. Returns ------- features : np.ndarray, dtype=float, shape=(n_samples, n_features) A featurized trajectory is a 2D array of shape `(length_of_trajectory x n_features)` where each `features[i]` vector is computed by applying the featurization function to the `i`th snapshot of the input trajectory. See Also -------- transform : simultaneously featurize a collection of MD trajectories
4.489706
5.531665
0.811637
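The dihedral featurizer above optionally expands each angle into its sine and cosine. A standalone sketch with synthetic angles (the real code gets the angle array from md.compute_phi and related functions):

    import numpy as np

    # Synthetic dihedral angles in radians: 10 frames, 3 dihedrals.
    angles = np.random.uniform(-np.pi, np.pi, size=(10, 3))

    sincos = True
    x = []
    if sincos:
        # Each angle contributes two features, removing the 2*pi discontinuity.
        x.extend([np.sin(angles), np.cos(angles)])
    else:
        x.append(angles)

    features = np.hstack(x)
    print(features.shape)  # (10, 6) with sincos, (10, 3) without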
feature_descs = [] for dihed_type in self.types: # TODO: Don't recompute dihedrals, just get the indices func = getattr(md, 'compute_%s' % dihed_type) # ainds is a list of four-tuples of atoms participating # in each dihedral aind_tuples, _ = func(traj) top = traj.topology bin_info = [] resseqs = [] resids = [] resnames = [] all_aind = [] # ordering: bin 0 for all dihedrals of this type, then bin 1, etc. for bin_index in range(self.n_bins): for ainds in aind_tuples: resid = set(top.atom(ai).residue.index for ai in ainds) all_aind.append(ainds) bin_info += ["bin-%d"%bin_index] resids += [list(resid)] reseq = set(top.atom(ai).residue.resSeq for ai in ainds) resseqs += [list(reseq)] resname = set(top.atom(ai).residue.name for ai in ainds) resnames += [list(resname)] zippy = zip(all_aind, resseqs, resids, resnames) # fast check to make sure we have the right number of features assert len(bin_info) == len(aind_tuples) * self.n_bins zippy = zip(["VonMises"]*len(bin_info), [dihed_type]*len(bin_info), bin_info, zippy) feature_descs.extend(dict_maker(zippy)) return feature_descs
def describe_features(self, traj)
Return a list of dictionaries describing the dihedral features. Parameters ---------- traj : mdtraj.Trajectory The trajectory to describe Returns ------- feature_descs : list of dict Dictionary describing each feature with the following information about the atoms participating in each dihedral - resnames: unique names of residues - atominds: the four atom indices - resseqs: unique residue sequence ids (not necessarily 0-indexed) - resids: unique residue ids (0-indexed) - featurizer: Dihedral - featuregroup: the bin index (0..n_bins-1) and dihedral type (phi/psi/chi1 etc.)
4.835691
4.215271
1.147184
x = [] for a in self.types: func = getattr(md, 'compute_%s' % a) _, y = func(traj) res = vm.pdf(y[..., np.newaxis], loc=self.loc, kappa=self.kappa) #we reshape the results using a Fortran-like index order, #so that it goes over the columns first. This should put the results #phi dihedrals(all bin0 then all bin1), psi dihedrals(all_bin1) x.extend(np.reshape(res, (1, -1, self.n_bins*y.shape[1]), order='F')) return np.hstack(x)
def partial_transform(self, traj)
Featurize an MD trajectory into a vector space via calculation of soft-bins over dihdral angle space. Parameters ---------- traj : mdtraj.Trajectory A molecular dynamics trajectory to featurize. Returns ------- features : np.ndarray, dtype=float, shape=(n_samples, n_features) A featurized trajectory is a 2D array of shape `(length_of_trajectory x n_features)` where each `features[i]` vector is computed by applying the featurization function to the `i`th snapshot of the input trajectory. See Also -------- transform : simultaneously featurize a collection of MD trajectories
10.44646
11.608897
0.899867
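The von Mises featurizer above soft-bins each dihedral angle against a set of bin centers. A simplified, self-contained sketch using scipy.stats.vonmises; the bin centers, kappa, and angles here are invented, and the reshape keeps the same per-frame feature content as the record:

    import numpy as np
    from scipy.stats import vonmises

    n_frames, n_dihedrals, n_bins = 4, 2, 3
    kappa = 5.0
    loc = np.linspace(-np.pi, np.pi, n_bins, endpoint=False)  # bin centers

    angles = np.random.uniform(-np.pi, np.pi, size=(n_frames, n_dihedrals))

    # Broadcasting (n_frames, n_dihedrals, 1) against (n_bins,) gives one
    # soft-bin weight per (frame, dihedral, bin).
    res = vonmises.pdf(angles[..., np.newaxis], kappa=kappa, loc=loc)

    # Fortran order mirrors the record: all dihedrals for bin 0, then bin 1, ...
    features = np.reshape(res, (n_frames, n_dihedrals * n_bins), order='F')
    print(features.shape)  # (4, 6)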
ca = [a.index for a in traj.top.atoms if a.name == 'CA'] if len(ca) < 4: return np.zeros((len(traj), 0), dtype=np.float32) alpha_indices = np.array( [(ca[i - 1], ca[i], ca[i + 1], ca[i + 2]) for i in range(1, len(ca) - 2)]) result = md.compute_dihedrals(traj, alpha_indices) x = [] if self.atom_indices is None: self.atom_indices = np.vstack(alpha_indices) if self.sincos: x.extend([np.cos(result), np.sin(result)]) else: x.append(result) return np.hstack(x)
def partial_transform(self, traj)
Featurize an MD trajectory into a vector space via calculation of dihedral (torsion) angles of alpha carbon backbone Parameters ---------- traj : mdtraj.Trajectory A molecular dynamics trajectory to featurize. Returns ------- features : np.ndarray, dtype=float, shape=(n_samples, n_features) A featurized trajectory is a 2D array of shape `(length_of_trajectory x n_features)` where each `features[i]` vector is computed by applying the featurization function to the `i`th snapshot of the input trajectory.
3.100617
2.899705
1.069287
feature_descs = [] # fill in the atom indices using just the first frame self.partial_transform(traj[0]) top = traj.topology if self.atom_indices is None: raise ValueError("Cannot describe features for " "trajectories with " "fewer than 4 alpha carbons " "using AlphaAngleFeaturizer.") aind_tuples = self.atom_indices zippy = zippy_maker(aind_tuples, top) if self.sincos: zippy = itertools.product(["AlphaAngle"], ["N/A"], ['cos', 'sin'], zippy) else: zippy = itertools.product(["AlphaAngle"], ["N/A"], ['nosincos'], zippy) feature_descs.extend(dict_maker(zippy)) return feature_descs
def describe_features(self, traj)
Return a list of dictionaries describing the dihedral features. Parameters ---------- traj : mdtraj.Trajectory The trajectory to describe Returns ------- feature_descs : list of dict Dictionary describing each feature with the following information about the atoms participating in each dihedral - resnames: unique names of residues - atominds: the four atom indices - resseqs: unique residue sequence ids (not necessarily 0-indexed) - resids: unique residue ids (0-indexed) - featurizer: Alpha Angle - featuregroup: the type of dihedral angle and whether sin or cos has been applied.
9.269378
7.721204
1.200509
feature_descs = [] _, mapping = md.geometry.sasa.shrake_rupley(traj, mode=self.mode, get_mapping=True) top = traj.topology if self.mode == "residue": resids = np.unique(mapping) resseqs = [top.residue(ri).resSeq for ri in resids] resnames = [top.residue(ri).name for ri in resids] atoms_in_res = [res.atoms for res in top.residues] aind_tuples = [] # For each residue... for i,x in enumerate(atoms_in_res): # For each atom in the residue, append its index aind_tuples.append([atom.index for atom in x]) zippy = itertools.product(['SASA'],['N/A'],[self.mode], zip(aind_tuples, resseqs, resids, resnames)) else: resids = [top.atom(ai).residue.index for ai in mapping] resseqs = [top.atom(ai).residue.resSeq for ai in mapping] resnames = [top.atom(ai).residue.name for ai in mapping] zippy = itertools.product(['SASA'],['N/A'],[self.mode], zip(mapping, resseqs, resids, resnames)) feature_descs.extend(dict_maker(zippy)) return feature_descs
def describe_features(self, traj)
Return a list of dictionaries describing the SASA features. Parameters ---------- traj : mdtraj.Trajectory The trajectory to describe Returns ------- feature_descs : list of dict Dictionary describing each feature with the following information about the atoms participating in each SASA feature - resnames: names of residues - atominds: atom index or atom indices in mode="residue" - resseqs: residue ids (not necessarily 0-indexed) - resids: unique residue ids (0-indexed) - featurizer: SASA - featuregroup: atom or residue
3.450904
3.055352
1.129462
if self.soft_min: distances, _ = md.compute_contacts(traj, self.contacts, self.scheme, self.ignore_nonprotein, soft_min=self.soft_min, soft_min_beta=self.soft_min_beta, periodic=self.periodic) else: distances, _ = md.compute_contacts(traj, self.contacts, self.scheme, self.ignore_nonprotein, periodic=self.periodic) return self._transform(distances)
def partial_transform(self, traj)
Featurize an MD trajectory into a vector space derived from residue-residue distances Parameters ---------- traj : mdtraj.Trajectory A molecular dynamics trajectory to featurize. Returns ------- features : np.ndarray, dtype=float, shape=(n_samples, n_features) A featurized trajectory is a 2D array of shape `(length_of_trajectory x n_features)` where each `features[i]` vector is computed by applying the featurization function to the `i`th snapshot of the input trajectory. See Also -------- transform : simultaneously featurize a collection of MD trajectories
3.299571
3.680676
0.896458
feature_descs = [] # fill in the atom indices using just the first frame if self.soft_min: distances, residue_indices = md.compute_contacts(traj[0], self.contacts, self.scheme, self.ignore_nonprotein, soft_min=self.soft_min, soft_min_beta=self.soft_min_beta, periodic=self.periodic) else: distances, residue_indices = md.compute_contacts(traj[0], self.contacts, self.scheme, self.ignore_nonprotein, periodic=self.periodic) top = traj.topology aind = [] resseqs = [] resnames = [] if self.scheme=='ca': atom_ind_list = [[j.index for j in i.atoms if j.name=='CA'] for i in top.residues] elif self.scheme=='closest-heavy': atom_ind_list = [[j.index for j in i.atoms if j.element.name!="hydrogen"] for i in top.residues] elif self.scheme=='closest': atom_ind_list = [[j.index for j in i.atoms] for i in top.residues] else: atom_ind_list = [["N/A"] for i in top.residues] for resid_ids in residue_indices: aind += [[atom_ind_list[ri] for ri in resid_ids]] resseqs += [[top.residue(ri).resSeq for ri in resid_ids]] resnames += [[top.residue(ri).name for ri in resid_ids]] zippy = itertools.product(["Contact"], [self.scheme], ["{}".format(self.soft_min_beta)], zip(aind, resseqs, residue_indices, resnames)) feature_descs.extend(dict_maker(zippy)) return feature_descs
def describe_features(self, traj)
Return a list of dictionaries describing the contact features. Parameters ---------- traj : mdtraj.Trajectory The trajectory to describe Returns ------- feature_descs : list of dict Dictionary describing each feature with the following information about the atoms participating in each contact - resnames: unique names of residues - atominds: atom indices (returns CA indices if scheme is 'ca', otherwise returns all atom indices) - resseqs: unique residue sequence ids (not necessarily 0-indexed) - resids: unique residue ids (0-indexed) - featurizer: Contact - featuregroup: ca, heavy etc.
3.376592
3.1327
1.077854
# The result vector fingerprints = np.zeros((traj.n_frames, self.n_features)) atom_pairs = np.zeros((len(self.solvent_indices), 2)) sigma = self.sigma for i, solute_i in enumerate(self.solute_indices): # For each solute atom, calculate distance to all solvent # molecules atom_pairs[:, 0] = solute_i atom_pairs[:, 1] = self.solvent_indices distances = md.compute_distances(traj, atom_pairs, periodic=True) distances = np.exp(-distances / (2 * sigma * sigma)) # Sum over water atoms for all frames fingerprints[:, i] = np.sum(distances, axis=1) return fingerprints
def partial_transform(self, traj)
Featurize an MD trajectory into a vector space via calculation of solvent fingerprints Parameters ---------- traj : mdtraj.Trajectory A molecular dynamics trajectory to featurize. Returns ------- features : np.ndarray, dtype=float, shape=(n_samples, n_features) A featurized trajectory is a 2D array of shape `(length_of_trajectory x n_features)` where each `features[i]` vector is computed by applying the featurization function to the `i`th snapshot of the input trajectory. See Also -------- transform : simultaneously featurize a collection of MD trajectories
3.737714
3.838824
0.973661
# Optionally take only certain atoms if self.atom_indices is not None: p_traj = traj.atom_slice(self.atom_indices) else: p_traj = traj # Optionally superpose to a reference trajectory. if self.ref_traj is not None: p_traj.superpose(self.ref_traj, parallel=False) # Get the positions and reshape. value = p_traj.xyz.reshape(len(p_traj), -1) return value
def partial_transform(self, traj)
Featurize an MD trajectory into a vector space with the raw cartesian coordinates. Parameters ---------- traj : mdtraj.Trajectory A molecular dynamics trajectory to featurize. Returns ------- features : np.ndarray, dtype=float, shape=(n_samples, n_features) A featurized trajectory is a 2D array of shape `(length_of_trajectory x n_features)` where each `features[i]` vector is computed by applying the featurization function to the `i`th snapshot of the input trajectory. Notes ----- If you requested superposition (gave `ref_traj` in __init__) the input trajectory will be modified. See Also -------- transform : simultaneously featurize a collection of MD trajectories
3.655636
4.247855
0.860584
if self.index is not None: return traj[:, self.index] else: return traj[:, :self.first]
def partial_transform(self, traj)
Slice a single input array to select a subset of features. Parameters ---------- traj : np.ndarray, shape=(n_samples, n_features) A sample to slice. Returns ------- sliced_traj : np.ndarray shape=(n_samples, n_feature_subset) Slice of traj
5.570136
8.107821
0.687008
MultiSequenceClusterMixin.fit(self, sequences) self.distances_ = self._split(self.distances_) return self
def fit(self, sequences, y=None)
Fit the kcenters clustering on the data Parameters ---------- sequences : list of array-like, each of shape [sequence_length, n_features] A list of multivariate timeseries, or ``md.Trajectory``. Each sequence may have a different length, but they all must have the same number of features, or the same number of atoms if they are ``md.Trajectory``s. Returns ------- self
10.580265
20.947176
0.505093
if isinstance(param_grid, dict): param_grid = ParameterGrid(param_grid) elif not isinstance(param_grid, ParameterGrid): raise ValueError("param_grid must be a dict or ParameterGrid instance") # iterable with (model, sequence) as items iter_args = ((clone(model).set_params(**params), sequences) for params in param_grid) models = Parallel(n_jobs=n_jobs, verbose=verbose)( delayed(_param_sweep_helper)(args) for args in iter_args) return models
def param_sweep(model, sequences, param_grid, n_jobs=1, verbose=0)
Fit a series of models over a range of parameters. Parameters ---------- model : msmbuilder.BaseEstimator An *instance* of an estimator to be used to fit data. sequences : list of array-like List of sequences, or a single sequence. Each sequence should be a 1D iterable of state labels. Labels can be integers, strings, or other orderable objects. param_grid : dict or sklearn.grid_search.ParameterGrid Parameter grid to specify models to fit. See sklearn.grid_search.ParameterGrid for an explanation n_jobs : int, optional Number of jobs to run in parallel using joblib.Parallel Returns ------- models : list List of models fit to the data according to param_grid
2.983254
3.249949
0.917939
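param_sweep above builds one cloned, reconfigured estimator per point of a parameter grid and fits them in parallel. A serial sketch of the same clone-and-set_params pattern using scikit-learn; KMeans and the toy data are stand-ins, and recent scikit-learn exposes ParameterGrid from sklearn.model_selection rather than sklearn.grid_search:

    import numpy as np
    from sklearn.base import clone
    from sklearn.cluster import KMeans
    from sklearn.model_selection import ParameterGrid

    X = np.random.randn(100, 3)
    param_grid = ParameterGrid({'n_clusters': [2, 3, 4]})

    models = []
    for params in param_grid:
        # One freshly cloned, reconfigured estimator per parameter setting.
        m = clone(KMeans(n_init=10)).set_params(**params)
        m.fit(X)
        models.append(m)

    print([m.n_clusters for m in models])  # [2, 3, 4]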
if mode == 'r' and fmt is None: fmt = _guess_format(path) elif mode in 'wa' and fmt is None: raise ValueError('mode="%s", but no fmt. fmt=%s' % (mode, fmt)) if fmt == 'dir-npy': return NumpyDirDataset(path, mode=mode, verbose=verbose) elif fmt == 'mdtraj': return MDTrajDataset(path, mode=mode, verbose=verbose, **kwargs) elif fmt == 'hdf5': return HDF5Dataset(path, mode=mode, verbose=verbose) elif fmt.endswith("-union"): raise ValueError("union datasets have been removed. " "Please use msmbuilder.featurizer.FeatureUnion") else: raise NotImplementedError("Unknown format fmt='%s'" % fmt)
def dataset(path, mode='r', fmt=None, verbose=False, **kwargs)
Open a dataset object MSMBuilder supports several dataset 'formats' for storing lists of sequences on disk. This function can also be used as a context manager. Parameters ---------- path : str The path to the dataset on the filesystem mode : {'r', 'w', 'a'} Open a dataset for reading, writing, or appending. Note that some formats only support a subset of these modes. fmt : {'dir-npy', 'hdf5', 'mdtraj'} The format of the data on disk ``dir-npy`` A directory of binary numpy files, one file per sequence ``hdf5`` A single hdf5 file with each sequence as an array node ``mdtraj`` A read-only set of trajectory files that can be loaded with mdtraj verbose : bool Whether to print information about the dataset
3.737693
3.246167
1.151417
if os.path.isdir(path): return 'dir-npy' if path.endswith('.h5') or path.endswith('.hdf5'): # TODO: Check for mdtraj .h5 file return 'hdf5' # TODO: What about a list of trajectories, e.g. from command line nargs='+' return 'mdtraj'
def _guess_format(path)
Guess the format of a dataset based on its filename / filenames.
7.497886
6.837175
1.096635
r = [] for c in string: if c.isdigit(): if r and isinstance(r[-1], int): r[-1] = r[-1] * 10 + int(c) else: r.append(int(c)) else: r.append(9 + ord(c)) return r
def _keynat(string)
A natural sort helper function for sort() and sorted() without using regular expressions.
2.302766
2.290545
1.005335
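To see the natural-sort key above in action, here it is re-implemented verbatim and used with sorted(); the file names are invented:

    def _keynat(string):
        # Digits are merged into integers; other characters map to ints as well,
        # so comparisons never mix ints with strings.
        r = []
        for c in string:
            if c.isdigit():
                if r and isinstance(r[-1], int):
                    r[-1] = r[-1] * 10 + int(c)
                else:
                    r.append(int(c))
            else:
                r.append(9 + ord(c))
        return r

    names = ['traj-10.npy', 'traj-2.npy', 'traj-1.npy']
    print(sorted(names))               # lexicographic: traj-1, traj-10, traj-2
    print(sorted(names, key=_keynat))  # natural:       traj-1, traj-2, traj-10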
if isinstance(out_ds, str): out_ds = self.create_derived(out_ds, fmt=fmt) elif isinstance(out_ds, _BaseDataset): err = "Dataset must be opened in write mode." assert out_ds.mode in ('w', 'a'), err else: err = "Please specify a dataset path or an existing dataset." raise ValueError(err) for key in self.keys(): out_ds[key] = estimator.partial_transform(self[key]) return out_ds
def transform_with(self, estimator, out_ds, fmt=None)
Call the partial_transform method of the estimator on this dataset Parameters ---------- estimator : object with ``partial_transform`` method This object will be used to transform this dataset into a new dataset. The estimator should be fitted prior to calling this method. out_ds : str or Dataset This dataset will be transformed and saved into out_ds. If out_ds is a path, a new dataset will be created at that path. fmt : str The type of dataset to create if out_ds is a string. Returns ------- out_ds : Dataset The transformed dataset.
3.219316
3.186921
1.010165
self.fit_with(estimator) return self.transform_with(estimator, out_ds, fmt=fmt)
def fit_transform_with(self, estimator, out_ds, fmt=None)
Create a new dataset with the given estimator. The estimator will be fit by this dataset, and then each trajectory will be transformed by the estimator. Parameters ---------- estimator : BaseEstimator This object will be fit and used to transform this dataset into a new dataset. out_ds : str or Dataset This dataset will be transformed and saved into out_ds. If out_ds is a path, a new dataset will be created at that path. fmt : str The type of dataset to create if out_ds is a string. Returns ------- out_ds : Dataset The transformed dataset. Examples -------- diheds = dataset("diheds") tica = diheds.fit_transform_with(tICA(), 'tica') kmeans = tica.fit_transform_with(KMeans(), 'kmeans') msm = kmeans.fit_with(MarkovStateModel())
2.77322
5.707515
0.485889
if self.max_landmarks is not None: if self.n_clusters > self.n_landmarks: self.n_landmarks = self.max_landmarks if self.n_landmarks is None: distances = pdist(X, self.metric) tree = linkage(distances, method=self.linkage) self.landmark_labels_ = fcluster(tree, criterion='maxclust', t=self.n_clusters) - 1 self.cardinality_ = np.bincount(self.landmark_labels_) self.squared_distances_within_cluster_ = np.zeros(self.n_clusters) n = len(X) for k in range(len(distances)): i = int(n - 2 - np.floor(np.sqrt(-8*k + 4*n*(n-1)-7)/2.0 - 0.5)) j = int(k + i + 1 - n*(n-1)/2 + (n-i)*((n-i)-1)/2) if self.landmark_labels_[i] == self.landmark_labels_[j]: self.squared_distances_within_cluster_[ self.landmark_labels_[i]] += distances[k] ** 2 self.landmarks_ = X else: if self.landmark_strategy == 'random': land_indices = check_random_state(self.random_state).randint( len(X), size=self.n_landmarks) else: land_indices = np.arange(len(X))[::(len(X) // self.n_landmarks)][:self.n_landmarks] distances = pdist(X[land_indices], self.metric) tree = linkage(distances, method=self.linkage) self.landmark_labels_ = fcluster(tree, criterion='maxclust', t=self.n_clusters) - 1 self.cardinality_ = np.bincount(self.landmark_labels_) self.squared_distances_within_cluster_ = np.zeros(self.n_clusters) n = len(X[land_indices]) for k in range(len(distances)): i = int(n - 2 - np.floor(np.sqrt(-8*k + 4*n*(n-1)-7)/2.0 - 0.5)) j = int(k + i + 1 - n*(n-1)/2 + (n-i)*((n-i)-1)/2) if self.landmark_labels_[i] == self.landmark_labels_[j]: self.squared_distances_within_cluster_[ self.landmark_labels_[i]] += distances[k] ** 2 self.landmarks_ = X[land_indices] if self.metric != 'rmsd': cluster_centers_ = [] for i in range(self.n_clusters): temp = list(np.mean(self.landmarks_[self.landmark_labels_==i], axis=0)) cluster_centers_.append(temp) self.cluster_centers_ = np.array(cluster_centers_) return self
def fit(self, X, y=None)
Compute agglomerative clustering. Parameters ---------- X : array-like, shape=(n_samples, n_features) Returns ------- self
1.980719
2.004216
0.988276
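The fit above converts a condensed-distance index k back into a pair (i, j) with a closed-form expression. A quick standalone check of that arithmetic (copied from the record) against scipy's squareform, on a random point set:

    import numpy as np
    from scipy.spatial.distance import pdist, squareform

    X = np.random.randn(7, 2)
    n = len(X)
    condensed = pdist(X)          # length n*(n-1)/2
    square = squareform(condensed)

    for k in range(len(condensed)):
        i = int(n - 2 - np.floor(np.sqrt(-8*k + 4*n*(n-1) - 7) / 2.0 - 0.5))
        j = int(k + i + 1 - n*(n-1)/2 + (n-i)*((n-i)-1)/2)
        assert np.isclose(condensed[k], square[i, j])
    print("condensed index <-> (i, j) mapping verified")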
dists = cdist(X, self.landmarks_, self.metric) pfunc_name = self.ward_predictor if self.linkage == 'ward' else self.linkage try: pooling_func = POOLING_FUNCTIONS[pfunc_name] except KeyError: raise ValueError("linkage {} is not supported".format(pfunc_name)) pooled_distances = np.empty(len(X)) pooled_distances.fill(np.infty) labels = np.zeros(len(X), dtype=int) for i in range(self.n_clusters): if np.any(self.landmark_labels_ == i): d = pooling_func(dists[:, self.landmark_labels_ == i], self.cardinality_[i], self.squared_distances_within_cluster_[i]) if np.any(d < 0): warnings.warn("Distance shouldn't be negative.") mask = (d < pooled_distances) pooled_distances[mask] = d[mask] labels[mask] = i else: print("No data points were assigned to cluster {}".format(i)) return labels
def predict(self, X)
Predict the closest cluster each sample in X belongs to. Parameters ---------- X : {array-like, sparse matrix}, shape = [n_samples, n_features] New data to predict. Returns ------- labels : array, shape [n_samples,] Index of the cluster each sample belongs to.
3.621857
3.657675
0.990207
return_vect = np.zeros(mdl1.n_states_) for i in range(mdl1.n_states_): try: #there has to be a better way to do this mdl1_unmapped = mdl1.inverse_transform([i])[0][0] mdl2_mapped = mdl2.mapping_[mdl1_unmapped] return_vect[i] = mdl2.populations_[mdl2_mapped] except: pass return return_vect
def _mapped_populations(mdl1, mdl2)
Method to get the populations for states in mdl 1 from populations inferred in mdl 2. Resorts to 0 if population is not present.
3.362734
3.350452
1.003666
net_flux = copy.copy(net_flux) bottleneck_ind = net_flux[path[:-1], path[1:]].argmin() net_flux[path[bottleneck_ind], path[bottleneck_ind + 1]] = 0.0 return net_flux
def _remove_bottleneck(net_flux, path)
Internal function for modifying the net flux matrix by removing a particular edge, corresponding to the bottleneck of a particular path.
3.19778
2.754642
1.160869
net_flux = copy.copy(net_flux) net_flux[path[:-1], path[1:]] -= net_flux[path[:-1], path[1:]].min() # The above *should* make the bottleneck have zero flux, but # numerically that may not be the case, so just set it to zero # to be sure. bottleneck_ind = net_flux[path[:-1], path[1:]].argmin() net_flux[path[bottleneck_ind], path[bottleneck_ind + 1]] = 0.0 return net_flux
def _subtract_path_flux(net_flux, path)
Internal function for modifying the net flux matrix by subtracting a path's flux from every edge in the path.
3.590731
3.450504
1.04064
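A tiny worked example of the path-flux subtraction above, on a hand-built 3-state net flux matrix (all values invented):

    import copy
    import numpy as np

    # Net flux on the path 0 -> 1 -> 2; edge (1, 2) is the bottleneck.
    net_flux = np.array([[0.0, 5.0, 0.0],
                         [0.0, 0.0, 3.0],
                         [0.0, 0.0, 0.0]])
    path = np.array([0, 1, 2])

    nf = copy.copy(net_flux)
    # Subtract the bottleneck flux from every edge on the path ...
    nf[path[:-1], path[1:]] -= nf[path[:-1], path[1:]].min()
    # ... and force the bottleneck edge to exactly zero.
    b = nf[path[:-1], path[1:]].argmin()
    nf[path[b], path[b + 1]] = 0.0
    print(nf)   # edge (0, 1) keeps 2.0 of flux, edge (1, 2) is removed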
p = len(S) assert S.shape == (p, p) alpha = (n-2)/(n*(n+2)) beta = ((p+1)*n - 2) / (n*(n+2)) trace_S2 = np.sum(S*S) # np.trace(S.dot(S)) U = ((p * trace_S2 / np.trace(S)**2) - 1) rho = min(alpha + beta/U, 1) F = (np.trace(S) / p) * np.eye(p) return (1-rho)*S + rho*F, rho
def rao_blackwell_ledoit_wolf(S, n)
Rao-Blackwellized Ledoit-Wolf shrinkage estimator of the covariance matrix. Parameters ---------- S : array, shape=(p, p) Sample covariance matrix (e.g. estimated with np.cov(X.T)) n : int Number of data points. Returns ------- sigma : array, shape=(p, p) shrinkage : float References ---------- .. [1] Chen, Yilun, Ami Wiesel, and Alfred O. Hero III. "Shrinkage estimation of high dimensional covariance matrices" ICASSP (2009)
4.764742
5.327794
0.894318
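A numpy-only sketch of the Rao-Blackwellized Ledoit-Wolf shrinkage formula from the record, applied to the sample covariance of synthetic data:

    import numpy as np

    rng = np.random.default_rng(0)
    n, p = 50, 4                      # data points, dimensions
    X = rng.standard_normal((n, p))
    S = np.cov(X.T)                   # sample covariance, shape (p, p)

    alpha = (n - 2) / (n * (n + 2))
    beta = ((p + 1) * n - 2) / (n * (n + 2))
    U = (p * np.sum(S * S) / np.trace(S) ** 2) - 1
    rho = min(alpha + beta / U, 1)    # shrinkage intensity in [0, 1]
    F = (np.trace(S) / p) * np.eye(p) # shrinkage target: scaled identity

    sigma = (1 - rho) * S + rho * F
    print(rho, np.allclose(sigma, sigma.T))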
self._initialized = False check_iter_of_sequences(sequences, max_iter=3) # we might be lazy-loading for X in sequences: self._fit(X) if self.n_sequences_ == 0: raise ValueError('All sequences were shorter than ' 'the lag time, %d' % self.lag_time) return self
def fit(self, sequences, y=None)
Fit the model with a collection of sequences. This method is not online. Any state accumulated from previous calls to fit() or partial_fit() will be cleared. For online learning, use `partial_fit`. Parameters ---------- sequences: list of array-like, each of shape (n_samples_i, n_features) Training data, where n_samples_i is the number of samples in sequence i and n_features is the number of features. y : None Ignored Returns ------- self : object Returns the instance itself.
8.946213
10.825713
0.826386
check_iter_of_sequences(sequences, max_iter=3) # we might be lazy-loading sequences_new = [] for X in sequences: X = array2d(X) if self.means_ is not None: X = X - self.means_ X_transformed = np.dot(X, self.components_.T) if self.kinetic_mapping: X_transformed *= self.eigenvalues_ if self.commute_mapping: # thanks to @maxentile and @jchodera for providing/directing to a # reference implementation in pyemma #(markovmodel/PyEMMA#963) # damping smaller timescales based on the recommendation of [7] # # some timescales are NaNs and regularized timescales will # be negative when they are less than the lag time; all these # are set to zero using nan_to_num before returning regularized_timescales = 0.5 * self.timescales_ * np.tanh( np.pi *((self.timescales_ - self.lag_time) /self.lag_time) + 1) X_transformed *= np.sqrt(regularized_timescales / 2) X_transformed = np.nan_to_num(X_transformed) sequences_new.append(X_transformed) return sequences_new
def transform(self, sequences)
Apply the dimensionality reduction on X. Parameters ---------- sequences: list of array-like, each of shape (n_samples_i, n_features) Training data, where n_samples_i is the number of samples in sequence i and n_features is the number of features. Returns ------- sequence_new : list of array-like, each of shape (n_samples_i, n_components)
9.223295
9.525697
0.968254
# force shrinkage to be calculated self.covariance_ return ("time-structure based Independent Components Analysis (tICA) " "n_components : {n_components} lag_time : {lag_time} shrinkage : {shrinkage} " "kinetic_mapping : {kinetic_mapping} " "Top 5 timescales : {timescales} " "Top 5 eigenvalues : {eigenvalues}").format(n_components=self.n_components, lag_time=self.lag_time, shrinkage=self.shrinkage_, kinetic_mapping=self.kinetic_mapping, timescales=self.timescales_[:5], eigenvalues=self.eigenvalues_[:5])
def summarize(self)
Some summary information.
8.894125
8.545626
1.040781
if self._sliding_window: return [X[k::self._lag_time] for k in range(self._lag_time) for X in X_all] else: return [X[::self._lag_time] for X in X_all]
def transform(self, X_all, y=None)
Subsample several time series. Parameters ---------- X_all : list(np.ndarray) List of feature time series Returns ------- features : list(np.ndarray), length = len(X_all) The subsampled trajectories.
3.303708
3.127441
1.056362
us, lvs, rvs = self._get_eigensystem() # make sure to leave off equilibrium distribution timescales = - self.lag_time / np.log(us[:, 1:]) return timescales
def all_timescales_(self)
Implied relaxation timescales for each sample in the ensemble Returns ------- timescales : array-like, shape = (n_samples, n_timescales,) The longest implied relaxation timescales of each sample in the ensemble of transition matrices, expressed in units of time-step between indices in the source data supplied to ``fit()``. References ---------- .. [1] Prinz, Jan-Hendrik, et al. "Markov models of molecular kinetics: Generation and validation." J. Chem. Phys. 134.17 (2011): 174105.
23.120064
20.610544
1.121759
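The property above converts transition-matrix eigenvalues into implied timescales via t_i = -lag_time / ln(mu_i). A short numeric illustration with made-up eigenvalues:

    import numpy as np

    lag_time = 10                      # in units of frames
    eigenvalues = np.array([1.0, 0.95, 0.80, 0.50])

    # Drop the stationary eigenvalue (1.0), as in the record.
    timescales = -lag_time / np.log(eigenvalues[1:])
    print(timescales)  # roughly [195.0, 44.8, 14.4] frames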
last_hash = None last_hash_count = 0 arr = yield for i in xrange(maxiter): arr = yield i if arr is not None: hsh = hashlib.sha1(arr.view(np.uint8)).hexdigest() if last_hash == hsh: last_hash_count += 1 else: last_hash = hsh last_hash_count = 1 if last_hash_count >= max_nc: if verbose: print('Termination. Over %d iterations without ' 'change.' % max_nc) break
def iterate_tracker(maxiter, max_nc, verbose=False)
Generator that breaks after maxiter, or after the same array has been sent in more than max_nc times in a row.
3.455852
3.309603
1.044189
assert self._initialized V = self.eigenvectors_ # Note: How do we deal with regularization parameters like gamma # here? I'm not sure. Should C and S be estimated using self's # regularization parameters? m2 = self.__class__(shrinkage=self.shrinkage, n_components=self.n_components, lag_time=self.lag_time, landmarks=self.landmarks, kernel_params=self.kernel_params) m2.fit(sequences) numerator = V.T.dot(m2.offset_correlation_).dot(V) denominator = V.T.dot(m2.covariance_).dot(V) try: trace = np.trace(numerator.dot(np.linalg.inv(denominator))) except np.linalg.LinAlgError: trace = np.nan return trace
def score(self, sequences, y=None)
Score the model on new data using the generalized matrix Rayleigh quotient Parameters ---------- sequences : list of array, each of shape (n_samples_i, n_features) Test data. A list of sequences in a feature space, each of which is a 2D array of possibly different lengths, but the same number of features. Returns ------- gmrq : float Generalized matrix Rayleigh quotient. This number indicates how well the top ``n_timescales+1`` eigenvectors of this tICA model perform as slowly decorrelating collective variables for the new data in ``sequences``. References ---------- .. [1] McGibbon, R. T. and V. S. Pande, "Variational cross-validation of slow dynamical modes in molecular kinetics" J. Chem. Phys. 142, 124105 (2015)
5.296618
4.94315
1.071507
super(PCCA, self).fit(sequences, y=y) self._do_lumping() return self
def fit(self, sequences, y=None)
Fit a PCCA lumping model using a sequence of cluster assignments. Parameters ---------- sequences : list(np.ndarray(dtype='int')) List of arrays of cluster assignments y : None Unused, present for sklearn compatibility only. Returns ------- self
10.40243
7.308234
1.423385
# Extract non-perron eigenvectors right_eigenvectors = self.right_eigenvectors_[:, 1:] assert self.n_states_ > 0 microstate_mapping = np.zeros(self.n_states_, dtype=int) def spread(x): return x.max() - x.min() for i in range(self.n_macrostates - 1): v = right_eigenvectors[:, i] all_spreads = np.array([spread(v[microstate_mapping == k]) for k in range(i + 1)]) state_to_split = np.argmax(all_spreads) inds = ((microstate_mapping == state_to_split) & (v >= self.pcca_tolerance)) microstate_mapping[inds] = i + 1 self.microstate_mapping_ = microstate_mapping
def _do_lumping(self)
Do the PCCA lumping. Notes ------- 1. Iterate over the eigenvectors, starting with the slowest. 2. Calculate the spread of that eigenvector within each existing macrostate. 3. Pick the macrostate with the largest eigenvector spread. 4. Split the macrostate based on the sign of the eigenvector.
4.352676
3.579731
1.215922
params = msm.get_params() lumper = cls(n_macrostates=n_macrostates, objective_function=objective_function, **params) lumper.transmat_ = msm.transmat_ lumper.populations_ = msm.populations_ lumper.mapping_ = msm.mapping_ lumper.countsmat_ = msm.countsmat_ lumper.n_states_ = msm.n_states_ lumper._do_lumping() return lumper
def from_msm(cls, msm, n_macrostates, objective_function=None)
Create and fit lumped model from pre-existing MSM. Parameters ---------- msm : MarkovStateModel The input microstate msm to use. n_macrostates : int The number of macrostates Returns ------- lumper : cls The fit PCCA(+) object.
2.73164
2.841434
0.96136
num_micro, num_eigen = right_eigenvectors.shape A, chi, mapping = calculate_fuzzy_chi(alpha, square_map, right_eigenvectors) # If current point is infeasible or leads to degenerate lumping. if (len(np.unique(mapping)) != right_eigenvectors.shape[1] or has_constraint_violation(A, right_eigenvectors)): return -1.0 * np.inf obj = 0.0 # Calculate metastability of the lumped model. Eqn 4.20 in LAA. for i in range(num_eigen): obj += np.dot(T.dot(chi[:, i]), pi * chi[:, i]) / np.dot(chi[:, i], pi) return obj
def metastability(alpha, T, right_eigenvectors, square_map, pi)
Return the metastability PCCA+ objective function. Parameters ---------- alpha : ndarray Parameters of objective function (e.g. flattened A) T : csr sparse matrix Transition matrix right_eigenvectors : ndarray The right eigenvectors. square_map : ndarray Mapping from square indices (i,j) to flat indices (k). pi : ndarray Equilibrium Populations of transition matrix. Returns ------- obj : float The objective function Notes ------- metastability: try to make metastable fuzzy state decomposition. Defined in ref. [2].
7.069219
7.248174
0.97531
A, chi, mapping = calculate_fuzzy_chi(alpha, square_map, right_eigenvectors) # If current point is infeasible or leads to degenerate lumping. if (len(np.unique(mapping)) != right_eigenvectors.shape[1] or has_constraint_violation(A, right_eigenvectors)): return -1.0 * np.inf obj = tr(dot(diag(1. / A[0]), dot(A.transpose(), A))) return obj
def crispness(alpha, T, right_eigenvectors, square_map, pi)
Return the crispness PCCA+ objective function. Parameters ---------- alpha : ndarray Parameters of objective function (e.g. flattened A) T : csr sparse matrix Transition matrix right_eigenvectors : ndarray The right eigenvectors. square_map : ndarray Mapping from square indices (i,j) to flat indices (k). pi : ndarray Equilibrium Populations of transition matrix. Returns ------- obj : float The objective function Notes ------- Tries to make crisp state decomposition. This function is defined in [3].
8.769701
9.164303
0.956941
N = A.shape[0] flat_map = [] for i in range(1, N): for j in range(1, N): flat_map.append([i, j]) flat_map = np.array(flat_map) square_map = np.zeros(A.shape, 'int') for k in range((N - 1) ** 2): i, j = flat_map[k] square_map[i, j] = k return flat_map, square_map
def get_maps(A)
Get mappings from the square array A to the flat vector of parameters alpha. Helper function for PCCA+ optimization. Parameters ---------- A : ndarray The transformation matrix A. Returns ------- flat_map : ndarray Mapping from flat indices (k) to square (i,j) indices. square map : ndarray Mapping from square indices (i,j) to flat indices (k).
2.493094
2.155917
1.156396
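For a concrete view of the flat/square maps above, here they are computed inline for a 3 x 3 transformation matrix (the helper's logic is reproduced so the snippet runs on its own):

    import numpy as np

    N = 3
    flat_map = np.array([[i, j] for i in range(1, N) for j in range(1, N)])
    square_map = np.zeros((N, N), dtype=int)
    for k, (i, j) in enumerate(flat_map):
        square_map[i, j] = k

    print(flat_map)    # [[1 1] [1 2] [2 1] [2 2]]
    print(square_map)  # row/column 0 unused; entries (1,1)..(2,2) hold 0..3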
lhs = 1 - A[0, 1:].sum() rhs = dot(right_eigenvectors[:, 1:], A[1:, 0]) rhs = -1 * rhs.min() if abs(lhs - rhs) > epsilon: return True else: return False
def has_constraint_violation(A, right_eigenvectors, epsilon=1E-8)
Check for constraint violations in transformation matrix. Parameters ---------- A : ndarray The transformation matrix. right_eigenvectors : ndarray The right eigenvectors. epsilon : float, optional Tolerance of constraint violation. Returns ------- truth : bool Whether or not the violation exists Notes ------- Checks constraints using Eqn 4.25 in [1]. References ---------- .. [1] Deuflhard P, Weber, M., "Robust perron cluster analysis in conformation dynamics," Linear Algebra Appl., vol 398 pp 161-184 2005.
3.997733
4.294612
0.930872
num_micro, num_eigen = right_eigenvectors.shape index = np.zeros(num_eigen, 'int') # first vertex: row with largest norm index[0] = np.argmax( [norm(right_eigenvectors[i]) for i in range(num_micro)]) ortho_sys = right_eigenvectors - np.outer(np.ones(num_micro), right_eigenvectors[index[0]]) for j in range(1, num_eigen): temp = ortho_sys[index[j - 1]].copy() for l in range(num_micro): ortho_sys[l] -= temp * dot(ortho_sys[l], temp) dist_list = np.array([norm(ortho_sys[l]) for l in range(num_micro)]) index[j] = np.argmax(dist_list) ortho_sys /= dist_list.max() return index
def index_search(right_eigenvectors)
Find simplex structure in eigenvectors to begin PCCA+. Parameters ---------- right_eigenvectors : ndarray Right eigenvectors of transition matrix Returns ------- index : ndarray Indices of simplex
3.197362
3.246283
0.98493
num_micro, num_eigen = right_eigenvectors.shape A = A.copy() # compute 1st column of A by row sum condition A[1:, 0] = -1 * A[1:, 1:].sum(1) # compute 1st row of A by maximum condition A[0] = -1 * dot(right_eigenvectors[:, 1:].real, A[1:]).min(0) # rescale A to be in the feasible set A /= A[0].sum() return A
def fill_A(A, right_eigenvectors)
Construct feasible initial guess for transformation matrix A. Parameters ---------- A : ndarray Possibly non-feasible transformation matrix. right_eigenvectors : ndarray Right eigenvectors of transition matrix Returns ------- A : ndarray Feasible transformation matrix.
5.217433
5.523253
0.94463
# Convert parameter vector into matrix A A = to_square(alpha, square_map) # Make A feasible. A = fill_A(A, right_eigenvectors) # Calculate the fuzzy membership matrix. chi_fuzzy = np.dot(right_eigenvectors, A) # Calculate the microstate mapping. mapping = np.argmax(chi_fuzzy, 1) return A, chi_fuzzy, mapping
def calculate_fuzzy_chi(alpha, square_map, right_eigenvectors)
Calculate the membership matrix (chi) from parameters alpha. Parameters ---------- alpha : ndarray Parameters of objective function (e.g. flattened A) square_map : ndarray Mapping from square indices (i,j) to flat indices (k). right_eigenvectors : ndarray The right eigenvectors. Returns ------- A : ndarray The transformation matrix A chi_fuzzy : ndarray The (fuzzy) membership matrix. mapping: ndarray The mapping from microstates to macrostates.
5.440626
3.635806
1.496402
right_eigenvectors = self.right_eigenvectors_[:, :self.n_macrostates] index = index_search(right_eigenvectors) # compute transformation matrix A as initial guess for local # optimization (maybe not feasible) A = right_eigenvectors[index, :] A = inv(A) A = fill_A(A, right_eigenvectors) if self.do_minimization: A = self._optimize_A(A) self.A_ = fill_A(A, right_eigenvectors) self.chi_ = dot(right_eigenvectors, self.A_) self.microstate_mapping_ = np.argmax(self.chi_, 1)
def _do_lumping(self)
Perform PCCA+ algorithm by optimizing transformation matrix A. Creates the following member variables: ------- A : ndarray The transformation matrix. chi : ndarray The membership matrix microstate_mapping : ndarray Mapping from microstates to macrostates.
5.591986
4.613251
1.212157
right_eigenvectors = self.right_eigenvectors_[:, :self.n_macrostates] flat_map, square_map = get_maps(A) alpha = to_flat(1.0 * A, flat_map) def obj(x): return -1 * self._objective_function( x, self.transmat_, right_eigenvectors, square_map, self.populations_ ) alpha = scipy.optimize.basinhopping( obj, alpha, niter_success=1000, )['x'] alpha = scipy.optimize.fmin( obj, alpha, full_output=True, xtol=1E-4, ftol=1E-4, maxfun=5000, maxiter=100000 )[0] if np.isneginf(obj(alpha)): raise ValueError( "Error: minimization has not located a feasible point.") A = to_square(alpha, square_map) return A
def _optimize_A(self, A)
Find optimal transformation matrix A by minimization. Parameters ---------- A : ndarray The transformation matrix A. Returns ------- A : ndarray The transformation matrix.
5.091238
5.065453
1.00509
S = np.zeros((n, n)) pi = np.exp(theta[-n:]) pi = pi / pi.sum() _ratematrix.build_ratemat(theta, n, S, which='S') u, lv, rv = map(np.asarray, _ratematrix.eig_K(S, n, pi, 'S')) order = np.argsort(-u) u = u[order[:k]] lv = lv[:, order[:k]] rv = rv[:, order[:k]] return _normalize_eigensystem(u, lv, rv)
def _solve_ratemat_eigensystem(theta, k, n)
Find the dominant eigenpairs of a reversible rate matrix (master equation) Parameters ---------- theta : ndarray, shape=(n_params,) The free parameters of the rate matrix k : int The number of eigenpairs to find n : int The number of states Notes ----- Normalize the left (:math:`\phi`) and right (:math:``\psi``) eigenfunctions according to the following criteria. * The first left eigenvector, \phi_1, _is_ the stationary distribution, and thus should be normalized to sum to 1. * The left-right eigenpairs should be biorthonormal: <\phi_i, \psi_j> = \delta_{ij} * The left eigenvectors should satisfy <\phi_i, \phi_i>_{\mu^{-1}} = 1 * The right eigenvectors should satisfy <\psi_i, \psi_i>_{\mu} = 1 Returns ------- eigvals : np.ndarray, shape=(k,) The largest `k` eigenvalues lv : np.ndarray, shape=(n_states, k) The normalized left eigenvectors (:math:`\phi`) of the rate matrix. rv : np.ndarray, shape=(n_states, k) The normalized right eigenvectors (:math:`\psi`) of the rate matrix.
4.907184
5.116841
0.959026
u, lv, rv = scipy.linalg.eig(transmat, left=True, right=True) order = np.argsort(-np.real(u)) u = np.real_if_close(u[order[:k]]) lv = np.real_if_close(lv[:, order[:k]]) rv = np.real_if_close(rv[:, order[:k]]) return _normalize_eigensystem(u, lv, rv)
def _solve_msm_eigensystem(transmat, k)
Find the dominant eigenpairs of an MSM transition matrix Parameters ---------- transmat : np.ndarray, shape=(n_states, n_states) The transition matrix k : int The number of eigenpairs to find. Notes ----- Normalize the left (:math:`\phi`) and right (:math:``\psi``) eigenfunctions according to the following criteria. * The first left eigenvector, \phi_1, _is_ the stationary distribution, and thus should be normalized to sum to 1. * The left-right eigenpairs should be biorthonormal: <\phi_i, \psi_j> = \delta_{ij} * The left eigenvectors should satisfy <\phi_i, \phi_i>_{\mu^{-1}} = 1 * The right eigenvectors should satisfy <\psi_i, \psi_i>_{\mu} = 1 Returns ------- eigvals : np.ndarray, shape=(k,) The largest `k` eigenvalues lv : np.ndarray, shape=(n_states, k) The normalized left eigenvectors (:math:`\phi`) of ``transmat`` rv : np.ndarray, shape=(n_states, k) The normalized right eigenvectors (:math:`\psi`) of ``transmat``
2.326972
2.361149
0.985526
# first normalize the stationary distribution separately lv[:, 0] = lv[:, 0] / np.sum(lv[:, 0]) for i in range(1, lv.shape[1]): # the remaining left eigenvectors to satisfy # <\phi_i, \phi_i>_{\mu^{-1}} = 1 lv[:, i] = lv[:, i] / np.sqrt(np.dot(lv[:, i], lv[:, i] / lv[:, 0])) for i in range(rv.shape[1]): # the right eigenvectors to satisfy <\phi_i, \psi_j> = \delta_{ij} rv[:, i] = rv[:, i] / np.dot(lv[:, i], rv[:, i]) return u, lv, rv
def _normalize_eigensystem(u, lv, rv)
Normalize the eigenvectors of a reversible Markov state model according to our preferred scheme.
3.562584
3.3977
1.048528
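A self-contained numerical check of the normalization conventions above, on a small invented 3-state transition matrix that satisfies detailed balance:

    import numpy as np
    import scipy.linalg

    # A reversible 3-state transition matrix (rows sum to 1; the stationary
    # distribution is proportional to (1, 0.8, 1)).
    T = np.array([[0.90, 0.08, 0.02],
                  [0.10, 0.80, 0.10],
                  [0.02, 0.08, 0.90]])

    u, lv, rv = scipy.linalg.eig(T, left=True, right=True)
    order = np.argsort(-np.real(u))
    u = np.real_if_close(u[order])
    lv = np.real_if_close(lv[:, order])
    rv = np.real_if_close(rv[:, order])

    # Normalize exactly as in the record.
    lv[:, 0] = lv[:, 0] / np.sum(lv[:, 0])   # phi_1 is the stationary distribution
    for i in range(1, lv.shape[1]):
        lv[:, i] = lv[:, i] / np.sqrt(np.dot(lv[:, i], lv[:, i] / lv[:, 0]))
    for i in range(rv.shape[1]):
        rv[:, i] = rv[:, i] / np.dot(lv[:, i], rv[:, i])

    print(np.allclose(lv.T.dot(rv), np.eye(3)))  # biorthonormal: <phi_i, psi_j> = delta_ij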
n_states_input = counts.shape[0] n_components, component_assignments = csgraph.connected_components( csr_matrix(counts >= weight), connection="strong") populations = np.array(counts.sum(0)).flatten() component_pops = np.array([populations[component_assignments == i].sum() for i in range(n_components)]) which_component = component_pops.argmax() def cpop(which): csum = component_pops.sum() return 100 * component_pops[which] / csum if csum != 0 else np.nan percent_retained = cpop(which_component) if verbose: print("MSM contains %d strongly connected component%s " "above weight=%.2f. Component %d selected, with " "population %f%%" % ( n_components, 's' if (n_components != 1) else '', weight, which_component, percent_retained)) # keys are all of the "input states" which have a valid mapping to the output. keys = np.arange(n_states_input)[component_assignments == which_component] if n_components == n_states_input and counts[np.ix_(keys, keys)] == 0: # if we have a completely disconnected graph with no self-transitions return np.zeros((0, 0)), {}, percent_retained # values are the "output" state that these guys are mapped to values = np.arange(len(keys)) mapping = dict(zip(keys, values)) n_states_output = len(mapping) trimmed_counts = np.zeros((n_states_output, n_states_output), dtype=counts.dtype) trimmed_counts[np.ix_(values, values)] = counts[np.ix_(keys, keys)] return trimmed_counts, mapping, percent_retained
def _strongly_connected_subgraph(counts, weight=1, verbose=True)
Trim a transition count matrix down to its maximal strongly ergodic subgraph. From the counts matrix, we define a graph where there exists a directed edge between two nodes, `i` and `j` if `counts[i][j] > weight`. We then find the nodes belonging to the largest strongly connected subgraph of this graph, and return a new counts matrix formed by these rows and columns of the input `counts` matrix. Parameters ---------- counts : np.array, shape=(n_states_in, n_states_in) Input set of directed counts. weight : float Threshold by which ergodicity is judged in the input data. Greater or equal to this many transition counts in both directions are required to include an edge in the ergodic subgraph. verbose : bool Print a short statement Returns ------- counts_component : "Trimmed" version of ``counts``, including only states in the maximal strongly ergodic subgraph. mapping : dict Mapping from "input" states indices to "output" state indices The semantics of ``mapping[i] = j`` is that state ``i`` from the "input space" for the counts matrix is represented by the index ``j`` in counts_component
3.847552
3.705092
1.03845
return {k: dict2.get(v) for k, v in dict1.items() if v in dict2}
def _dict_compose(dict1, dict2)
Example ------- >>> dict1 = {'a': 0, 'b': 1, 'c': 2} >>> dict2 = {0: 'A', 1: 'B'} >>> _dict_compose(dict1, dict2) {'a': 'A', 'b': 'B'}
2.982762
3.333877
0.894683
if mode not in ['clip', 'fill']: raise ValueError('mode must be one of ["clip", "fill"]: %s' % mode) sequence = np.asarray(sequence) if sequence.ndim != 1: raise ValueError("Each sequence must be 1D") f = np.vectorize(lambda k: self.mapping_.get(k, np.nan), otypes=[np.float]) a = f(sequence) if mode == 'fill': if np.all(np.mod(a, 1) == 0): result = a.astype(int) else: result = a elif mode == 'clip': result = [a[s].astype(int) for s in np.ma.clump_unmasked(np.ma.masked_invalid(a))] else: raise RuntimeError() return result
def partial_transform(self, sequence, mode='clip')
Transform a sequence to internal indexing Recall that `sequence` can be arbitrary labels, whereas ``transmat_`` and ``countsmat_`` are indexed with integers between 0 and ``n_states - 1``. This method maps a set of sequences from the labels onto this internal indexing. Parameters ---------- sequence : array-like A 1D iterable of state labels. Labels can be integers, strings, or other orderable objects. mode : {'clip', 'fill'} Method by which to treat labels in `sequence` which do not have a corresponding index. This can be due, for example, to the ergodic trimming step. ``clip`` Unmapped labels are removed during transform. If they occur at the beginning or end of a sequence, the resulting transformed sequence will be shortened. If they occur in the middle of a sequence, that sequence will be broken into two (or more) sequences. (Default) ``fill`` Unmapped labels will be replaced with NaN, to signal missing data. [The use of NaN to signal missing data is not fantastic, but it's consistent with current behavior of the ``pandas`` library.] Returns ------- mapped_sequence : list or ndarray If mode is "fill", return an ndarray in internal indexing. If mode is "clip", return a list of ndarrays each in internal indexing.
3.064626
2.876623
1.065355
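The 'clip' branch above splits a label sequence wherever a label has no internal index, using masked-array clumps. A standalone illustration with a hypothetical mapping and sequence:

    import numpy as np

    mapping = {'A': 0, 'B': 1, 'C': 2}        # 'X' was trimmed and has no index
    sequence = np.array(['A', 'B', 'X', 'C', 'A'])

    a = np.array([mapping.get(s, np.nan) for s in sequence], dtype=float)

    # 'clip': break the sequence wherever a label could not be mapped.
    clipped = [a[s].astype(int)
               for s in np.ma.clump_unmasked(np.ma.masked_invalid(a))]
    print(clipped)   # [array([0, 1]), array([2, 0])]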
if mode not in ['clip', 'fill']: raise ValueError('mode must be one of ["clip", "fill"]: %s' % mode) sequences = list_of_1d(sequences) result = [] for y in sequences: if mode == 'fill': result.append(self.partial_transform(y, mode)) elif mode == 'clip': result.extend(self.partial_transform(y, mode)) else: raise RuntimeError() return result
def transform(self, sequences, mode='clip')
Transform a list of sequences to internal indexing Recall that `sequences` can be arbitrary labels, whereas ``transmat_`` and ``countsmat_`` are indexed with integers between 0 and ``n_states - 1``. This method maps a set of sequences from the labels onto this internal indexing. Parameters ---------- sequences : list of array-like List of sequences, or a single sequence. Each sequence should be a 1D iterable of state labels. Labels can be integers, strings, or other orderable objects. mode : {'clip', 'fill'} Method by which to treat labels in `sequences` which do not have a corresponding index. This can be due, for example, to the ergodic trimming step. ``clip`` Unmapped labels are removed during transform. If they occur at the beginning or end of a sequence, the resulting transformed sequence will be shortened. If they occur in the middle of a sequence, that sequence will be broken into two (or more) sequences. (Default) ``fill`` Unmapped labels will be replaced with NaN, to signal missing data. [The use of NaN to signal missing data is not fantastic, but it's consistent with current behavior of the ``pandas`` library.] Returns ------- mapped_sequences : list List of sequences in internal indexing
3.265855
4.025589
0.811274
ec_is_str = isinstance(self.ergodic_cutoff, str) if ec_is_str and self.ergodic_cutoff.lower() == 'on': if self.sliding_window: return 1.0 / self.lag_time else: return 1.0 elif ec_is_str and self.ergodic_cutoff.lower() == 'off': return 0.0 else: return self.ergodic_cutoff
def _parse_ergodic_cutoff(self)
Get a numeric value from the ergodic_cutoff input, which can be 'on' or 'off'.
2.653208
2.237772
1.185647
sequences = list_of_1d(sequences) inverse_mapping = {v: k for k, v in self.mapping_.items()} f = np.vectorize(inverse_mapping.get) result = [] for y in sequences: uq = np.unique(y) if not np.all(np.logical_and(0 <= uq, uq < self.n_states_)): raise ValueError('sequence must be between 0 and n_states-1') result.append(f(y)) return result
def inverse_transform(self, sequences)
Transform a list of sequences from internal indexing into labels Parameters ---------- sequences : list List of sequences, each of which is one-dimensional array of integers in ``0, ..., n_states_ - 1``. Returns ------- sequences : list List of sequences, each of which is one-dimensional array of labels.
3.588136
3.687976
0.972928
random = check_random_state(random_state) r = random.rand(1 + n_steps) if state is None: initial = np.sum(np.cumsum(self.populations_) < r[0]) elif hasattr(state, '__len__') and len(state) == self.n_states_: initial = np.sum(np.cumsum(state) < r[0]) else: initial = self.mapping_[state] cstr = np.cumsum(self.transmat_, axis=1) chain = [initial] for i in range(1, n_steps): chain.append(np.sum(cstr[chain[i - 1], :] < r[i])) return self.inverse_transform([chain])[0]
def sample_discrete(self, state=None, n_steps=100, random_state=None)
Generate a random sequence of states by propagating the model using discrete time steps given by the model lagtime. Parameters ---------- state : {None, ndarray, label} Specify the starting state for the chain. ``None`` Choose the initial state by randomly drawing from the model's stationary distribution. ``array-like`` If ``state`` is a 1D array with length equal to ``n_states_``, then it is interpreted as an initial multinomial distribution from which to draw the chain's initial state. Note that the indexing semantics of this array must match the _internal_ indexing of this model. otherwise Otherwise, ``state`` is interpreted as a particular deterministic state label from which to begin the trajectory. n_steps : int Length of the resulting trajectory random_state : int or RandomState instance or None (default) Pseudo Random Number generator seed control. If None, use the numpy.random singleton. Returns ------- sequence : array of length n_steps A randomly sampled label sequence
3.401778
3.441306
0.988514
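Once a model is fit, a synthetic state trajectory can be drawn directly from ``transmat_``. A short usage sketch, again assuming the surrounding class is msmbuilder's MarkovStateModel (import path and toy data are assumptions):

import numpy as np
from msmbuilder.msm import MarkovStateModel  # assumed import path

seqs = [np.random.RandomState(0).randint(3, size=500)]
msm = MarkovStateModel(lag_time=1).fit(seqs)

# start from the stationary distribution (state=None) with a fixed seed
synthetic = msm.sample_discrete(state=None, n_steps=1000, random_state=42)
print(synthetic[:10])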
# collections.Iterable was removed in Python 3.10; collections.abc.Iterable
# is the portable spelling on Python 3.
if not any([isinstance(seq, collections.abc.Iterable) for seq in sequences]):
    sequences = [sequences]

random = check_random_state(random_state)
selected_pairs_by_state = []
for state in range(self.n_states_):
    all_frames = [np.where(a == state)[0] for a in sequences]
    pairs = [(trj, frame) for (trj, frames) in enumerate(all_frames)
             for frame in frames]
    if pairs:
        selected_pairs_by_state.append(
            [pairs[random.choice(len(pairs))] for i in range(n_samples)])
    else:
        selected_pairs_by_state.append([])

return np.array(selected_pairs_by_state)
def draw_samples(self, sequences, n_samples, random_state=None)
Sample conformations for a sequence of states. Parameters ---------- sequences : list or list of lists A sequence or list of sequences, in which each element corresponds to a state label. n_samples : int How many samples to return for any given state. Returns ------- selected_pairs_by_state : np.array, dtype=int, shape=(n_states, n_samples, 2) selected_pairs_by_state[state] gives an array of randomly selected (trj, frame) pairs from the specified state. See Also -------- utils.map_drawn_samples : Extract conformations from MD trajectories by index.
3.000938
2.607856
1.15073
model = LandmarkAgglomerative(linkage='ward',
                              n_clusters=self.n_macrostates,
                              metric=self.metric,
                              n_landmarks=self.n_landmarks,
                              landmark_strategy=self.landmark_strategy,
                              random_state=self.random_state)
model.fit([self.transmat_])

if self.fit_only:
    microstate_mapping_ = model.landmark_labels_
else:
    microstate_mapping_ = model.transform([self.transmat_])[0]

self.microstate_mapping_ = microstate_mapping_
def _do_lumping(self)
Do the MVCA lumping.
4.487997
4.345141
1.032877
params = msm.get_params()
lumper = cls(n_macrostates, metric=metric, fit_only=fit_only,
             n_landmarks=n_landmarks, landmark_strategy=landmark_strategy,
             random_state=random_state, **params)

lumper.transmat_ = msm.transmat_
lumper.populations_ = msm.populations_
lumper.mapping_ = msm.mapping_
lumper.countsmat_ = msm.countsmat_
lumper.n_states_ = msm.n_states_

if n_macrostates is not None:
    lumper._do_lumping()

if get_linkage:
    p = pdist(msm.transmat_, metric=metric)
    l = scipy.cluster.hierarchy.linkage(p, 'ward')
    lumper.pairwise_dists = p
    lumper.linkage = l
    lumper.elbow_data = l[:, 2][::-1]
else:
    lumper.pairwise_dists = None
    lumper.linkage = None
    lumper.elbow_data = None

return lumper
def from_msm(cls, msm, n_macrostates, metric=js_metric_array, n_landmarks=None, landmark_strategy='stride', random_state=None, get_linkage=False, fit_only=False)
Create and fit lumped model from pre-existing MSM. Parameters ---------- msm : MarkovStateModel The input microstate msm to use. n_macrostates : int The number of macrostates get_linkage : boolean, default=False Whether to return linkage and elbow data objects. Warning: This will compute n choose 2 pairwise distances Returns ------- lumper : cls The fit MVCA object. pairwise_dists : if get_linkage is True, np.array, [number of microstates choose 2] linkage : if get_linkage is True, scipy linkage object elbow_data : if get_linkage is True, np.array, [number of microstates - 1]. Change in updated Ward objective function, indexed by n_macrostates - 1 Example ------- plt.figure() scipy.cluster.hierarchy.dendrogram(mvca.linkage) scatter(arange(1,n_microstates), mvca.elbow_data)
2.491796
2.340312
1.064728
''' given a vector x, leave its top-k absolute-value entries alone,
and set the rest to 0 '''
not_F = np.argsort(np.abs(x))[:-k]
x[not_F] = 0
return x
def _truncate(self, x, k)
given a vector x, leave its top-k absolute-value entries alone, and set the rest to 0
8.112921
3.435658
2.361387
''' given a matrix A, an initial guess x0, and a maximum cardinality k,
find the best k-sparse approximation to its dominant eigenvector

References
----------
[1] Yuan, X-T. and Zhang, T. "Truncated Power Method for Sparse
    Eigenvalue Problems." Journal of Machine Learning Research. Vol. 14.
    2013. http://www.jmlr.org/papers/volume14/yuan13a/yuan13a.pdf
'''
xts = [x0]
for t in range(max_iter):
    xts.append(self._normalize(self._truncate(np.dot(A, xts[-1]), k)))
    if np.linalg.norm(xts[-1] - xts[-2]) < thresh:
        break
return xts[-1]
def _truncated_power_method(self, A, x0, k, max_iter=10000, thresh=1e-8)
given a matrix A, an initial guess x0, and a maximum cardinality k, find the best k-sparse approximation to its dominant eigenvector References ---------- [1] Yuan, X-T. and Zhang, T. "Truncated Power Method for Sparse Eigenvalue Problems." Journal of Machine Learning Research. Vol. 14. 2013. http://www.jmlr.org/papers/volume14/yuan13a/yuan13a.pdf
3.806417
1.665096
2.286004
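Outside of this class, the same iteration is easy to reproduce with plain numpy. A standalone sketch of the truncated power method on a small symmetric matrix, assuming the `_normalize` helper referenced above is ordinary L2 normalization:

import numpy as np

def truncated_power_method(A, x0, k, max_iter=10000, thresh=1e-8):
    """Best k-sparse approximation to the dominant eigenvector of A."""
    x = x0 / np.linalg.norm(x0)
    for _ in range(max_iter):
        y = A @ x
        # keep only the k largest-magnitude entries, zero out the rest
        y[np.argsort(np.abs(y))[:-k]] = 0
        y /= np.linalg.norm(y)
        if np.linalg.norm(y - x) < thresh:
            return y
        x = y
    return x

A = np.diag([5.0, 4.0, 1.0]) + 0.01 * np.ones((3, 3))
print(truncated_power_method(A, np.ones(3), k=2))  # sparse, dominated by the first two coordinates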
nonzeros = np.sum(np.abs(self.eigenvectors_) > 0, axis=0)
active = '[%s]' % ', '.join(['%d/%d' % (n, self.n_features)
                             for n in nonzeros[:n_timescales_to_report]])
# NOTE: the summary template string that .format() is called on appears to
# have been dropped from this snippet; only the keyword arguments survive.
return .format(n_components=self.n_components,
               shrinkage=self.shrinkage_,
               lag_time=self.lag_time,
               kinetic_mapping=self.kinetic_mapping,
               timescales=self.timescales_[:n_timescales_to_report],
               eigenvalues=self.eigenvalues_[:n_timescales_to_report],
               n_features=self.n_features,
               active=active,
               n_timescales_to_report=n_timescales_to_report)
def summarize(self, n_timescales_to_report=5)
Some summary information.
3.435649
3.374477
1.018128
labels, inertia = libdistance.assign_nearest(
    X, self.cluster_centers_, metric=self.metric)
return labels
def predict(self, X)
Predict the closest cluster each sample in X belongs to. In the vector quantization literature, `cluster_centers_` is called the code book and each value returned by `predict` is the index of the closest code in the code book. Parameters ---------- X : array-like, shape = [n_samples, n_features] New data to predict. Returns ------- Y : array, shape [n_samples,] Index of the closest center each sample belongs to.
14.765899
18.474138
0.799274
MultiSequenceClusterMixin.fit(self, sequences)
self.cluster_center_indices_ = self._split_indices(self.cluster_center_indices_)
return self
def fit(self, sequences, y=None)
Fit the kcenters clustering on the data Parameters ---------- sequences : list of array-like, each of shape [sequence_length, n_features] A list of multivariate timeseries, or ``md.Trajectory``. Each sequence may have a different length, but they all must have the same number of features, or the same number of atoms if they are ``md.Trajectory``s. Returns ------- self
6.911815
10.889477
0.634724
# X = check_array(X)
t0 = time.time()
self.X = X
self._run()
t1 = time.time()
# print("APM clustering Time Cost:", t1 - t0)
return self
def fit(self, X, y=None)
Perform clustering. Parameters ----------- X : array-like, shape=[n_samples, n_features] Samples to cluster.
5.942219
6.567494
0.904792
# print("Doing APM Clustering...")
# Start looping for maxIter times
n_macrostates = 1  # initialized as 1 because no macrostate exist in loop 0
metaQ = -1.0
prevQ = -1.0
global_maxQ = -1.0
local_maxQ = -1.0
for iter in range(self.max_iter):
    self.__max_state = -1
    self.__micro_stack = []
    for k in range(n_macrostates):
        self._do_split(micro_state=k, sub_clus=self.sub_clus)
        self._do_time_clustering(macro_state=k)

    # do Lumping
    n_micro_states = np.amax(self.__temp_labels_) + 1
    if n_micro_states > self.n_macrostates:
        # print("PCCA Lumping...", n_micro_states, "microstates")
        self.__temp_MacroAssignments_ = self._do_lumping(
            n_macrostates=n_macrostates)
        #self.__temp_labels_ = [copy.copy(element) for element in self.__temp_MacroAssignments_]

        #Calculate Metastabilty
        prevQ = metaQ
        metaQ = self.__temp_transmat_.diagonal().sum()
        metaQ /= len(self.__temp_transmat_)
    else:
        self.__temp_MacroAssignments_ = [
            copy.copy(element) for element in self.__temp_labels_
        ]

    # Optimization / Monte-Carlo
    acceptedMove = False
    MCacc = np.exp(metaQ * metaQ - prevQ * prevQ)
    if MCacc > 1.0:
        MCacc = 1.0
    optLim = 0.95
    if MCacc > optLim:
        acceptedMove = True

    if acceptedMove:
        local_maxQ = metaQ
        if metaQ > global_maxQ:
            global_maxQ = metaQ
            self.MacroAssignments_ = [
                copy.copy(element) for element in self.__temp_MacroAssignments_
            ]
            self.labels_ = [copy.copy(element) for element in self.__temp_labels_]
            self.transmat_ = self.__temp_transmat_

    # print("Loop:", iter, "AcceptedMove?", acceptedMove, "metaQ:",
    #       metaQ, "prevQ:", prevQ, "global_maxQ:", global_maxQ,
    #       "local_maxQ:", local_maxQ, "macroCount:", n_macrostates)

    #set n_macrostates
    n_macrostates = self.n_macrostates
    self.__temp_labels_ = [copy.copy(element)
                           for element in self.__temp_MacroAssignments_]
def _run(self)
Do the APM lumping.
4.241953
4.036024
1.051023
if pbar.currval == 0:
    return 'ETA: --:--:--'
elif pbar.finished:
    return 'Time: %s' % self.format_time(pbar.seconds_elapsed)
else:
    elapsed = pbar.seconds_elapsed
    currval1, elapsed1 = self._update_samples(pbar.currval, elapsed)
    eta = self._eta(pbar.maxval, pbar.currval, elapsed)
    if pbar.currval > currval1:
        etasamp = self._eta(pbar.maxval - currval1,
                            pbar.currval - currval1,
                            elapsed - elapsed1)
        weight = (pbar.currval / float(pbar.maxval)) ** 0.5
        eta = (1 - weight) * eta + weight * etasamp
    return 'ETA: %s' % self.format_time(eta)
def update(self, pbar)
Updates the widget to show the ETA or total time when finished.
3.250688
3.062123
1.06158
if pbar.seconds_elapsed < 2e-6 or pbar.currval < 2e-6:  # =~ 0
    scaled = power = 0
else:
    speed = pbar.currval / pbar.seconds_elapsed
    power = int(math.log(speed, 1000))
    scaled = speed / 1000.**power
return self.FORMAT % (scaled, self.PREFIXES[power], self.unit)
def update(self, pbar)
Updates the widget with the current SI prefixed speed.
6.134021
5.130699
1.195553
if keep_atoms is None:
    keep_atoms = ATOM_NAMES

top, bonds = reference_traj.top.to_dataframe()

if keep_atoms is not None:
    atom_indices = top[top.name.isin(keep_atoms) == True].index.values

if exclude_atoms is not None:
    atom_indices = top[top.name.isin(exclude_atoms) == False].index.values

pair_indices = np.array(list(itertools.combinations(atom_indices, 2)))

if reject_bonded:
    a_list = bonds.min(1)
    b_list = bonds.max(1)
    n = atom_indices.max() + 1
    bond_hashes = a_list + b_list * n
    pair_hashes = pair_indices[:, 0] + pair_indices[:, 1] * n
    not_bonds = ~np.in1d(pair_hashes, bond_hashes)
    pair_indices = np.array([(a, b) for k, (a, b) in enumerate(pair_indices)
                             if not_bonds[k]])

return atom_indices, pair_indices
def get_atompair_indices(reference_traj, keep_atoms=None, exclude_atoms=None, reject_bonded=True)
Get a list of acceptable atom pairs. Parameters ---------- reference_traj : mdtraj.Trajectory Trajectory to grab atom pairs from keep_atoms : np.ndarray, dtype=string, optional Select only these atom names. Defaults to N, CA, CB, C, O, H exclude_atoms : np.ndarray, dtype=string, optional Exclude these atom names reject_bonded : bool, default=True If True, exclude bonded atompairs. Returns ------- atom_indices : np.ndarray, dtype=int The atom indices that pass your criteria pair_indices : np.ndarray, dtype=int, shape=(N, 2) Pairs of atom indices that pass your criteria. Notes ----- This function has been optimized for speed. A naive implementation can be slow (~minutes) for large proteins.
2.646785
2.549942
1.037979
fixed_indices = list(trajs.keys())
trajs = [trajs[k][:, [dimension]] for k in fixed_indices]
txx = np.concatenate([traj[:, 0] for traj in trajs])

if scheme == "linear":
    spaced_points = np.linspace(np.min(txx), np.max(txx), n_frames)
    spaced_points = spaced_points[:, np.newaxis]
elif scheme == "random":
    spaced_points = np.sort(np.random.choice(txx, n_frames))
    spaced_points = spaced_points[:, np.newaxis]
elif scheme == "edge":
    _cut_point = n_frames // 2
    txx = np.sort(txx)
    spaced_points = np.hstack((txx[:_cut_point], txx[-_cut_point:]))
    spaced_points = np.reshape(spaced_points, newshape=(len(spaced_points), 1))
else:
    raise ValueError("Scheme has to be one of linear, random or edge")

tree = KDTree(trajs)
dists, inds = tree.query(spaced_points)
return [(fixed_indices[i], j) for i, j in inds]
def sample_dimension(trajs, dimension, n_frames, scheme="linear")
Sample a dimension of the data. This method uses one of three schemes. All other dimensions are ignored, so this might result in a really "jumpy" sampled trajectory. Parameters ---------- trajs : dictionary of np.ndarray Dictionary of tica-transformed trajectories, keyed by arbitrary keys. The resulting trajectory indices will use these keys. dimension : int dimension to sample on n_frames : int Number of frames requested scheme : {'linear', 'random', 'edge'} 'linear' samples the tic linearly, 'random' samples randomly (thereby taking approximate free energies into account), and 'edge' samples the edges of the tic only. Returns ------- inds : list of tuples Tuples of (trajectory_index, frame_index), where trajectory_index is in the domain of the keys of the input dictionary.
2.554786
2.624496
0.973439
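A short usage sketch with synthetic data, assuming the `sample_dimension` function above is importable in your session (the keys and array shapes are made up for illustration):

import numpy as np

trajs = {'run-0': np.random.RandomState(0).randn(100, 3),
         'run-1': np.random.RandomState(1).randn(80, 3)}

# five frames spread evenly along the first projected coordinate
inds = sample_dimension(trajs, dimension=0, n_frames=5, scheme="linear")
# e.g. [('run-1', 17), ('run-0', 4), ...]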
X = array2d(X)
self.n_features = X.shape[1]
self.n_bins = self.n_bins_per_feature ** self.n_features

if self.min is None:
    min = np.min(X, axis=0)
elif isinstance(self.min, numbers.Number):
    min = self.min * np.ones(self.n_features)
else:
    min = np.asarray(self.min)
    if not min.shape == (self.n_features,):
        raise ValueError('min shape error')

if self.max is None:
    max = np.max(X, axis=0)
elif isinstance(self.max, numbers.Number):
    max = self.max * np.ones(self.n_features)
else:
    max = np.asarray(self.max)
    if not max.shape == (self.n_features,):
        raise ValueError('max shape error')

self.grid = np.array(
    [np.linspace(min[i] - EPS, max[i] + EPS, self.n_bins_per_feature + 1)
     for i in range(self.n_features)])

return self
def fit(self, X, y=None)
Fit the grid Parameters ---------- X : array-like, shape = [n_samples, n_features] Data points Returns ------- self
1.777171
1.884676
0.942959
if np.any(X < self.grid[:, 0]) or np.any(X > self.grid[:, -1]):
    raise ValueError('data out of min/max bounds')

binassign = np.zeros((self.n_features, len(X)), dtype=int)
for i in range(self.n_features):
    binassign[i] = np.digitize(X[:, i], self.grid[i]) - 1

labels = np.dot(self.n_bins_per_feature ** np.arange(self.n_features), binassign)
assert np.max(labels) < self.n_bins
return labels
def predict(self, X)
Get the index of the grid cell containing each sample in X Parameters ---------- X : array-like, shape = [n_samples, n_features] New data Returns ------- y : array, shape = [n_samples,] Index of the grid cell containing each sample
3.155688
3.526485
0.894854
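The labeling above combines one bin index per feature in a mixed-radix fashion. A minimal standalone sketch of the same scheme, independent of the class these methods belong to (the grid edges and samples here are made up):

import numpy as np

n_bins_per_feature = 3
# grid edges per feature, analogous to what fit() builds
grid = np.array([np.linspace(0.0, 1.0, n_bins_per_feature + 1),
                 np.linspace(0.0, 1.0, n_bins_per_feature + 1)])

X = np.array([[0.1, 0.9], [0.5, 0.5]])
binassign = np.vstack([np.digitize(X[:, i], grid[i]) - 1 for i in range(2)])
labels = np.dot(n_bins_per_feature ** np.arange(2), binassign)
print(labels)  # [6 4]: flattened cell index of each sample in the 3x3 grid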
return np.concatenate([self._dim_match(traj) / norm for traj, norm in zip(traj_zip, self._norms)], axis=1)
def partial_transform(self, traj_zip)
Featurize an MD trajectory into a vector space. Parameters ---------- traj : mdtraj.Trajectory A molecular dynamics trajectory to featurize. Returns ------- features : np.ndarray, dtype=float, shape=(n_samples, n_features) A featurized trajectory is a 2D array of shape `(length_of_trajectory x n_features)` where each `features[i]` vector is computed by applying the featurization function to the `i`th snapshot of the input trajectory. See Also -------- transform : simultaneously featurize a collection of MD trajectories
7.646486
12.839734
0.595533
lens = [len(trajs) for trajs in trajs_tuple]
if len(set(lens)) > 1:
    err = "Each dataset must be the same length. You gave: {}"
    err = err.format(lens)
    raise ValueError(err)
def _check_same_length(self, trajs_tuple)
Check that the datasets are the same length
3.168447
2.774047
1.142175
return [self.partial_transform(traj_zip) for traj_zip in zip(*trajs_tuple)]
def transform(self, trajs_tuple, y=None)
Featurize several trajectories. Parameters ---------- traj_list : list(mdtraj.Trajectory) Trajectories to be featurized. Returns ------- features : list(np.ndarray), length = len(traj_list) The featurized trajectories. features[i] is the featurized version of traj_list[i] and has shape (n_samples_i, n_features)
7.48658
9.87602
0.758056
n_states = np.shape(populations)[0]

if sinks is None:
    # Use Thm 11.16 in [1]
    limiting_matrix = np.vstack([populations] * n_states)
    # Fundamental matrix
    fund_matrix = scipy.linalg.inv(np.eye(n_states) - tprob + limiting_matrix)

    # mfpt[i,j] = (fund_matrix[j,j] - fund_matrix[i,j]) / populations[j]
    mfpts = fund_matrix * -1
    # xrange is Python 2 only; range behaves the same here
    for j in range(n_states):
        mfpts[:, j] += fund_matrix[j, j]
        mfpts[:, j] /= populations[j]

    mfpts *= lag_time

else:
    # See section 11.5, and use Thm 11.5
    # Turn our ergodic MSM into an absorbing one (all sink
    # states are absorbing). Then calculate the mean time
    # to absorption.
    # Note: we are slightly modifying the description in
    # 11.5 so that we also get the mfpts[sink] = 0.0
    sinks = np.array(sinks, dtype=int).reshape((-1,))

    absorb_tprob = copy.copy(tprob)
    for state in sinks:
        absorb_tprob[state, :] = 0.0
        # note it has to be 2 because we subtract the identity below
        absorb_tprob[state, state] = 2.0

    lhs = np.eye(n_states) - absorb_tprob

    rhs = np.ones(n_states)
    for state in sinks:
        rhs[state] = 0.0

    mfpts = lag_time * np.linalg.solve(lhs, rhs)

return mfpts
def _mfpts(tprob, populations, sinks, lag_time)
Gets the Mean First Passage Time (MFPT) for all states to a *set* of sinks. Parameters ---------- tprob : np.ndarray Transition matrix populations : np.ndarray, (n_states,) MSM populations sinks : array_like, int, optional Indices of the sink states. There are two use-cases: - None [default] : All MFPTs will be calculated, and the result is a matrix of the MFPT from state i to state j. This uses the fundamental matrix formalism. - list of ints or int : Only the MFPTs into these sink states will be computed. The result is a vector, with entry i corresponding to the average time it takes to first get to *any* sink state from state i lag_time : float, optional Lag time for the model. The MFPT will be reported in whatever units are given here. Default is (1) which is in units of the lag time of the MSM. Returns ------- mfpts : np.ndarray, float MFPT in time units of lag_time, which depends on the input value of sinks: - If sinks is None, then mfpts's shape is (n_states, n_states). Where mfpts[i, j] is the mean first passage time to state j from state i. - If sinks contains one or more states, then mfpts's shape is (n_states,). Where mfpts[i] is the mean first passage time from state i to any state in sinks. References ---------- .. [1] Grinstead, C. M. and Snell, J. L. Introduction to Probability. American Mathematical Soc., 1998. As of November 2014, this chapter was available for free online: http://www.dartmouth.edu/~chance/teaching_aids/books_articles/probability_book/Chapter11.pdf
4.642923
4.168859
1.113716
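The sinks=None branch is Theorem 11.16 applied to the fundamental matrix. A tiny standalone check on a two-state chain (written directly in numpy/scipy, not calling the helper above) illustrates the formula:

import numpy as np
import scipy.linalg

tprob = np.array([[0.9, 0.1],
                  [0.2, 0.8]])
populations = np.array([2.0, 1.0]) / 3.0   # stationary distribution of tprob

n = 2
limiting = np.vstack([populations] * n)
Z = scipy.linalg.inv(np.eye(n) - tprob + limiting)   # fundamental matrix

mfpts = -Z.copy()
for j in range(n):
    mfpts[:, j] += Z[j, j]
    mfpts[:, j] /= populations[j]

print(mfpts)   # mfpts[0, 1] = 1 / 0.1 = 10 steps, mfpts[1, 0] = 1 / 0.2 = 5 steps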
def inner(s):
    if s == '':
        return s
    first, last = os.path.splitext(s)
    return first + suffix
return inner
def exttype(suffix)
Type for use with argument(... type=) that will force a specific suffix. This is especially useful for output files, so that we can enforce the use of appropriate file-type specific suffixes.
5.289034
6.477236
0.816557
if hasattr(klass, '_init_argspec'):
    return _shim_argspec(klass._init_argspec())
elif PY2:
    return _shim_argspec(inspect.getargspec(klass.__init__))
else:
    return inspect.signature(klass.__init__)
def get_init_argspec(klass)
Wrapper around inspect.getargspec(klass.__init__) which, for cython classes uses an auxiliary '_init_argspec' method, since they don't play nice with the inspect module. By convention, a cython class should define the classmethod _init_argspec that, when called, returns what ``inspect.getargspec`` would be expected to return when called on that class's __init__ method.
3.100283
3.368975
0.920245
assert cls.klass is not None

sig = get_init_argspec(cls.klass)
doc = numpydoc.docscrape.ClassDoc(cls.klass)

# mapping from the name of the argument to the helptext
helptext = {d[0]: ' '.join(d[2]) for d in doc['Parameters']}
# mapping from the name of the argument to the type
typemap = {d[0]: d[1].replace(',', ' ').split() for d in doc['Parameters']}

# put all of these arguments into an argument group, to separate them
# from other arguments on the subcommand
group = argument_group('Parameters')

for i, arg in enumerate(sig.parameters):
    if i == 0 and arg == 'self':
        continue

    # get default value
    kwargs = {}
    if sig.parameters[arg].default != Parameter.empty:
        kwargs['default'] = sig.parameters[arg].default
    else:
        kwargs['required'] = True

    if arg in helptext:
        # try to get some helptext
        kwargs['help'] = helptext[arg]

    # obviously this isn't an exhaustive list, but try to make
    # reasonable argparse decisions based on the docstring.
    if arg in typemap:
        if 'list' in typemap[arg]:
            kwargs['nargs'] = '+'
        if 'bool' in typemap[arg]:
            kwargs['action'] = FlagAction
        if hasattr(cls, '_{}_type'.format(arg)):
            # If the docstring *contains* the word float or int,
            # parsing will fail for things not of that type
            # even if a custom loader will eventually be used.
            # Let's check for custom loaders here and set the type
            # to str.
            kwargs['type'] = str
        else:
            basic_types = {'str': str, 'float': float, 'int': int}
            for basic_type in basic_types:
                if basic_type in typemap[arg]:
                    kwargs['type'] = basic_types[basic_type]
                    break

    group.add_argument('--{}'.format(arg), **kwargs)

group.register(subparser)
def _register_arguments(cls, subparser)
this is a special method that gets called to construct the argparse parser. it uses the python inspect module to introspect the __init__ method of `klass`, and add an argument for each parameter. it also uses numpydoc to read the class docstring of klass (which is supposed to be in numpydoc format) to get the help-text and type for each argument, as well as a description of the class.
4.271119
4.133193
1.03337
if not os.path.exists(fn):
    return

backnum = 1
backfmt = "{fn}.bak.{backnum}"
trial_fn = backfmt.format(fn=fn, backnum=backnum)
while os.path.exists(trial_fn):
    backnum += 1
    trial_fn = backfmt.format(fn=fn, backnum=backnum)

warnings.warn("{fn} exists. Moving it to {newfn}"
              .format(fn=fn, newfn=trial_fn), BackupWarning)
shutil.move(fn, trial_fn)
def backup(fn)
If ``fn`` exists, rename it and issue a warning This function will rename an existing filename {fn}.bak.{i} where i is the smallest integer that gives a filename that doesn't exist. This naively uses a while loop to find such a filename, so there shouldn't be too many existing backups or performance will degrade. Parameters ---------- fn : str The filename to check.
2.632516
2.458804
1.070649
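Usage is straightforward; a short sketch showing the backup naming in action, assuming `backup` above is in scope (the filename is arbitrary):

with open('results.txt', 'w') as f:
    f.write('first run\n')

backup('results.txt')    # warns and moves the file to results.txt.bak.1

with open('results.txt', 'w') as f:
    f.write('second run\n')

backup('results.txt')    # moves the new file to results.txt.bak.2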
if isinstance(key, tuple):
    paths = [dfmt.format(k) for k in key[:-1]]
    paths += [ffmt.format(key[-1])]
    return os.path.join(*paths)
else:
    return ffmt.format(key)
def default_key_to_path(key, dfmt="{}", ffmt="{}.npy")
Turn an arbitrary python object into a filename This uses string formatting, so make sure your keys map to unique strings. If the key is a tuple, it will join each element of the tuple with '/', resulting in a filesystem hierarchy of files.
2.017959
2.179191
0.926013
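For example, tuple keys become nested directories while scalar keys become flat filenames (the key values here are made up):

default_key_to_path(('run-3', 'clone-7'))   # -> 'run-3/clone-7.npy' (platform path separator)
default_key_to_path('run-3')                # -> 'run-3.npy'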
top_fns = set(meta['top_fn'])
tops = {}
for tfn in top_fns:
    tops[tfn] = md.load_topology(tfn)
return tops
def preload_tops(meta)
Load all topology files into memory. This might save some performance compared to re-parsing the topology file for each trajectory you try to load in. Typically, you have far fewer (possibly 1) topologies than trajectories Parameters ---------- meta : pd.DataFrame The DataFrame of metadata with a column named 'top_fn' Returns ------- tops : dict Dictionary of ``md.Topology`` objects, keyed by "top_fn" values.
3.983028
2.841919
1.401527
top_fns = set(meta['top_fn'])
if len(top_fns) != 1:
    raise ValueError("More than one topology is used in this project!")
return md.load_topology(top_fns.pop())
def preload_top(meta)
Load one topology file into memory. This function checks to make sure there's only one topology file in play. When sampling frames, you have to have all the same topology to concatenate. Parameters ---------- meta : pd.DataFrame The DataFrame of metadata with a column named 'top_fn' Returns ------- top : md.Topology The one topology file that can be used for all trajectories.
5.92475
3.842821
1.541771
tops = preload_tops(meta)
for i, row in meta.iterrows():
    yield i, md.join(md.iterload(row['traj_fn'],
                                 top=tops[row['top_fn']],
                                 stride=stride),
                     discard_overlapping_frames=False,
                     check_topology=False)
def itertrajs(meta, stride=1)
Load one mdtraj trajectory at a time and yield it. MDTraj does striding badly. It reads in the whole trajectory and then performs a stride. We join(iterload) to conserve memory.
8.97506
6.679388
1.343695
if pandas_kwargs is None:
    pandas_kwargs = {}

kwargs_with_defaults = {
    'classes': ('table', 'table-condensed', 'table-hover'),
}
kwargs_with_defaults.update(**pandas_kwargs)

env = Environment(loader=PackageLoader('msmbuilder', 'io_templates'))
templ = env.get_template("twitter-bootstrap.html")
rendered = templ.render(
    title=title,
    content=meta.to_html(**kwargs_with_defaults)
)

# Ugh, pandas hardcodes border="1"
rendered = re.sub(r' border="1"', '', rendered)

backup(fn)
with open(fn, 'w') as f:
    f.write(rendered)
def render_meta(meta, fn="meta.pandas.html", title="Project Metadata - MSMBuilder", pandas_kwargs=None)
Render a metadata dataframe as an html webpage for inspection. Parameters ---------- meta : pd.Dataframe The DataFrame of metadata fn : str Output filename (should end in html) title : str Page title pandas_kwargs : dict Arguments to be passed to pandas
3.354024
3.5604
0.942036
backup(fn)
with open(fn, 'wb') as f:
    pickle.dump(obj, f)
def save_generic(obj, fn)
Save Python objects, including msmbuilder Estimators. This is a convenience wrapper around Python's ``pickle`` serialization scheme. This protocol is backwards-compatible among Python versions, but may not be "forwards-compatible". A file saved with Python 3 won't be able to be opened under Python 2. Please read the pickle docs (specifically related to the ``protocol`` parameter) to specify broader compatibility. If a file already exists at the given filename, it will be backed up. Parameters ---------- obj : object A Python object to serialize (save to disk) fn : str Filename to save the object. We recommend using the '.pickl' extension, but don't do anything to enforce that convention.
3.152029
6.550192
0.481212
if key_to_path is None:
    key_to_path = default_key_to_path
validate_keys(meta.index, key_to_path)
backup(fn)
os.mkdir(fn)
for k in meta.index:
    v = trajs[k]
    npy_fn = os.path.join(fn, key_to_path(k))
    os.makedirs(os.path.dirname(npy_fn), exist_ok=True)
    np.save(npy_fn, v)
def save_trajs(trajs, fn, meta, key_to_path=None)
Save trajectory-like data Data is stored in individual numpy binary files in the directory given by ``fn``. This method will automatically back up existing files named ``fn``. Parameters ---------- trajs : dict of (key, np.ndarray) Dictionary of trajectory-like ndarray's keyed on ``meta.index`` values. fn : str Where to save the data. This will be a directory containing one file per trajectory meta : pd.DataFrame The DataFrame of metadata
2.530778
2.486887
1.017649
if key_to_path is None:
    key_to_path = default_key_to_path
if isinstance(meta, str):
    meta = load_meta(meta_fn=meta)
trajs = {}
for k in meta.index:
    trajs[k] = np.load(os.path.join(fn, key_to_path(k)))
return meta, trajs
def load_trajs(fn, meta='meta.pandas.pickl', key_to_path=None)
Load trajectory-like data Data is expected to be stored as if saved by ``save_trajs``. This method finds trajectories based on the ``meta`` dataframe. If you remove a file (trajectory) from disk, be sure to remove its row from the dataframe. If you remove a row from the dataframe, be aware that that trajectory (file) will not be loaded, even if it exists on disk. Parameters ---------- fn : str Where the data is saved. This should be a directory containing one file per trajectory. meta : pd.DataFrame or str The DataFrame of metadata. If this is a string, it is interpreted as a filename and the dataframe is loaded from disk. Returns ------- meta : pd.DataFrame The DataFrame of metadata. If you passed in a string (filename) to the ``meta`` input, this will be the loaded DataFrame. If you gave a DataFrame object, this will just be a reference back to that object trajs : dict Dictionary of trajectory-like np.ndarray's keyed on the values of ``meta.index``.
2.234631
2.434287
0.917982
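A round-trip sketch with a toy metadata frame, assuming `save_trajs` and `load_trajs` above are importable; only ``meta.index`` matters here, the column contents are placeholders:

import numpy as np
import pandas as pd

meta = pd.DataFrame({'traj_fn': ['a.xtc', 'b.xtc']}, index=['run-0', 'run-1'])
trajs = {'run-0': np.random.randn(100, 2),
         'run-1': np.random.randn(50, 2)}

save_trajs(trajs, 'ttrajs', meta)                 # writes ttrajs/run-0.npy, ttrajs/run-1.npy
meta2, trajs2 = load_trajs('ttrajs', meta)        # pass the DataFrame directly
assert np.allclose(trajs2['run-0'], trajs['run-0'])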
cdists, cinds = self._kdtree.query(x, k, p, distance_upper_bound)
return cdists, self._split_indices(cinds)
def query(self, x, k=1, p=2, distance_upper_bound=np.inf)
Query the kd-tree for nearest neighbors Parameters ---------- x : array_like, last dimension self.m An array of points to query. k : int, optional The number of nearest neighbors to return. eps : nonnegative float, optional Return approximate nearest neighbors; the kth returned value is guaranteed to be no further than (1+eps) times the distance to the real kth nearest neighbor. p : float, 1<=p<=infinity, optional Which Minkowski p-norm to use. 1 is the sum-of-absolute-values "Manhattan" distance 2 is the usual Euclidean distance infinity is the maximum-coordinate-difference distance distance_upper_bound : nonnegative float, optional Return only neighbors within this distance. This is used to prune tree searches, so if you are doing a series of nearest-neighbor queries, it may help to supply the distance to the nearest neighbor of the most recent point. Returns ------- d : float or array of floats The distances to the nearest neighbors. If x has shape tuple+(self.m,), then d has shape tuple if k is one, or tuple+(k,) if k is larger than one. Missing neighbors (e.g. when k > n or distance_upper_bound is given) are indicated with infinite distances. If k is None, then d is an object array of shape tuple, containing lists of distances. In either case the hits are sorted by distance (nearest first). i : tuple(int, int) or array of tuple(int, int) The locations of the neighbors in self.data. Locations are given by tuples of (traj_i, frame_i) Examples -------- >>> from msmbuilder.utils import KDTree >>> X1 = 0.3 * np.random.RandomState(0).randn(500, 2) >>> X2 = 0.3 * np.random.RandomState(1).randn(1000, 2) + 10 >>> tree = KDTree([X1, X2]) >>> pts = np.array([[0, 0], [10, 10]]) >>> tree.query(pts) (array([ 0.0034, 0.0102]), array([[ 0, 410], [ 1, 670]])) >>> tree.query(pts[0]) (0.0034, array([ 0, 410]))
4.434827
7.900272
0.561351
clengths = np.append([0], np.cumsum(self.__lengths))
mapping = np.zeros((clengths[-1], 2), dtype=int)
for traj_i, (start, end) in enumerate(zip(clengths[:-1], clengths[1:])):
    mapping[start:end, 0] = traj_i
    mapping[start:end, 1] = np.arange(end - start)
return mapping[concat_inds]
def _split_indices(self, concat_inds)
Take indices in 'concatenated space' and return as pairs of (traj_i, frame_i)
2.782436
2.362219
1.177891
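The bookkeeping here is just a cumulative-length lookup. A standalone sketch of the same mapping (outside the class, with made-up lengths) makes the (traj_i, frame_i) convention concrete:

import numpy as np

lengths = [3, 5]                        # two trajectories of 3 and 5 frames
clengths = np.append([0], np.cumsum(lengths))

mapping = np.zeros((clengths[-1], 2), dtype=int)
for traj_i, (start, end) in enumerate(zip(clengths[:-1], clengths[1:])):
    mapping[start:end, 0] = traj_i
    mapping[start:end, 1] = np.arange(end - start)

print(mapping[[0, 4, 7]])               # -> [[0 0], [1 1], [1 4]]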
check_iter_of_sequences(sequences)
transforms = []
for X in sequences:
    transforms.append(self.partial_transform(X))
return transforms
def transform(self, sequences)
Apply dimensionality reduction to sequences Parameters ---------- sequences: list of array-like, each of shape (n_samples_i, n_features) Sequence data to transform, where n_samples_i is the number of samples in sequence i and n_features is the number of features. Returns ------- sequence_new : list of array-like, each of shape (n_samples_i, n_components)
6.775969
12.116138
0.559251
self.fit(sequences)
transforms = self.transform(sequences)
return transforms
def fit_transform(self, sequences, y=None)
Fit the model and apply dimensionality reduction Parameters ---------- sequences: list of array-like, each of shape (n_samples_i, n_features) Training data, where n_samples_i is the number of samples in sequence i and n_features is the number of features. y : None Ignored Returns ------- sequence_new : list of array-like, each of shape (n_samples_i, n_components)
5.872624
10.201693
0.575652
check_iter_of_sequences(sequences, allow_trajectory=self._allow_trajectory)
super(MultiSequenceClusterMixin, self).fit(self._concat(sequences))

if hasattr(self, 'labels_'):
    self.labels_ = self._split(self.labels_)

return self
def fit(self, sequences, y=None)
Fit the clustering on the data Parameters ---------- sequences : list of array-like, each of shape [sequence_length, n_features] A list of multivariate timeseries. Each sequence may have a different length, but they all must have the same number of features. Returns ------- self
7.917976
10.099009
0.784035
predictions = []
check_iter_of_sequences(sequences, allow_trajectory=self._allow_trajectory)
for X in sequences:
    predictions.append(self.partial_predict(X))
return predictions
def predict(self, sequences, y=None)
Predict the closest cluster each sample in each sequence in sequences belongs to. In the vector quantization literature, `cluster_centers_` is called the code book and each value returned by `predict` is the index of the closest code in the code book. Parameters ---------- sequences : list of array-like, each of shape [sequence_length, n_features] A list of multivariate timeseries. Each sequence may have a different length, but they all must have the same number of features. Returns ------- Y : list of arrays, each of shape [sequence_length,] Index of the closest center each sample belongs to.
7.541344
13.767659
0.547758
if isinstance(X, md.Trajectory):
    X.center_coordinates()
return super(MultiSequenceClusterMixin, self).predict(X)
def partial_predict(self, X, y=None)
Predict the closest cluster each sample in X belongs to. In the vector quantization literature, `cluster_centers_` is called the code book and each value returned by `predict` is the index of the closest code in the code book. Parameters ---------- X : array-like shape=(n_samples, n_features) A single timeseries. Returns ------- Y : array, shape=(n_samples,) Index of the cluster that each sample belongs to
14.953556
27.606785
0.541662
if hasattr(super(MultiSequenceClusterMixin, self), 'fit_predict'):
    check_iter_of_sequences(sequences,
                            allow_trajectory=self._allow_trajectory)
    labels = super(MultiSequenceClusterMixin, self).fit_predict(sequences)
else:
    self.fit(sequences)
    labels = self.predict(sequences)

if not isinstance(labels, list):
    labels = self._split(labels)
return labels
def fit_predict(self, sequences, y=None)
Performs clustering on X and returns cluster labels. Parameters ---------- sequences : list of array-like, each of shape [sequence_length, n_features] A list of multivariate timeseries. Each sequence may have a different length, but they all must have the same number of features. Returns ------- Y : list of ndarray, each of shape [sequence_length, ] Cluster labels
4.636979
5.20788
0.890377