roboskin.calibration.utils package¶
Submodules¶
roboskin.calibration.utils.io module¶
roboskin.calibration.utils.io.add_noise(data, data_types, sigma=1)[source]¶
- Parameters
data_types (List[str]) –
roboskin.calibration.utils.io.add_outlier(data, data_types, sigma=3, outlier_ratio=0.25)[source]¶
- Parameters
data_types (List[str]) –
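The docstrings above do not spell out the noise model; given the sigma and outlier_ratio arguments, a plausible reading is zero-mean Gaussian noise, plus larger perturbations injected into a random fraction of samples. A minimal NumPy sketch of that reading (illustrative only, not the packaged implementation):

    import numpy as np

    rng = np.random.default_rng(0)

    def add_noise_sketch(data, sigma=1.0):
        """Add zero-mean Gaussian noise with standard deviation sigma."""
        return data + rng.normal(0.0, sigma, size=data.shape)

    def add_outlier_sketch(data, sigma=3.0, outlier_ratio=0.25):
        """Perturb a random subset of rows with larger-sigma noise."""
        noisy = np.array(data, copy=True)
        mask = rng.random(len(noisy)) < outlier_ratio
        noisy[mask] += rng.normal(0.0, sigma, size=noisy[mask].shape)
        return noisy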
roboskin.calibration.utils.io.initialize_logging(log_level, filename=None)[source]¶
Initialize the logging module with the given log_level.
- Parameters
log_level (str) –
filename (str) –
roboskin.calibration.utils.io.load_data(robot, directory)[source]¶
Collects acceleration data together with the corresponding poses.
- Returns
data – Includes both static and dynamic acceleration data
- Return type
Data
roboskin.calibration.utils.io.load_robot_configs(configdir, robot)[source]¶
Loads the robot's DH parameters, the SUs' DH parameters, and their poses.
- Parameters
configdir (str) – Path to the config directory where the robot YAML files exist
robot (str) – Name of the robot
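A minimal usage sketch (the configdir path and robot name below are placeholders for illustration only):

    from roboskin.calibration.utils.io import load_robot_configs

    # 'configs/' and 'panda' are hypothetical values
    robot_configs = load_robot_configs(configdir='configs/', robot='panda')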
roboskin.calibration.utils.quaternion module¶
roboskin.calibration.utils.quaternion.angle_between_quaternions(q_1, q_2, output_in_degrees=False)[source]¶
Computes the angle between quaternions q_1 and q_2. Note that the input quaternions should be of the form np.ndarray([x, y, z, w]). The formula for the angle between quaternions is:
\[\theta = \cos^{-1}\bigl(2\langle q_1,q_2\rangle^2 - 1\bigr)\]
where ⟨q_1, q_2⟩ denotes the inner product of the two quaternions:
\[\langle a_1 + b_1 \textbf{i} + c_1 \textbf{j} + d_1 \textbf{k},\ a_2 + b_2 \textbf{i} + c_2 \textbf{j} + d_2 \textbf{k}\rangle = a_1a_2 + b_1b_2 + c_1c_2 + d_1d_2.\]
Reference: https://math.stackexchange.com/questions/90081/quaternion-distance
- Parameters
q_1 (np.ndarray) – First quaternion
q_2 (np.ndarray) – Second quaternion
output_in_degrees (bool) – If True, return the angle in degrees
- Returns
float – Angle between the quaternions, in degrees if output_in_degrees is True (otherwise in radians)
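A minimal NumPy sketch of this formula (for illustration; not the packaged implementation):

    import numpy as np

    def quaternion_angle(q_1, q_2, output_in_degrees=False):
        """Angle between two unit quaternions given as [x, y, z, w] arrays."""
        inner = float(np.dot(q_1, q_2))
        # Clip to guard against round-off pushing the argument outside [-1, 1]
        theta = np.arccos(np.clip(2.0 * inner ** 2 - 1.0, -1.0, 1.0))
        return np.degrees(theta) if output_in_degrees else theta

    q_identity = np.array([0.0, 0.0, 0.0, 1.0])
    q_z90 = np.array([0.0, 0.0, np.sin(np.pi / 4), np.cos(np.pi / 4)])
    print(quaternion_angle(q_identity, q_z90, output_in_degrees=True))  # ~90.0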
roboskin.calibration.utils.quaternion.np_to_pyqt(q)[source]¶
Convert a numpy quaternion to a pyqt array. Order of a numpy quaternion: q = (x, y, z, w).
Order of a pyqt quaternion: q = (w, x, y, z).
roboskin.calibration.utils.quaternion.pyqt_to_np(q)[source]¶
Convert a pyqt quaternion to a numpy array. Order of a numpy quaternion: q = (x, y, z, w).
Order of a pyqt quaternion: q = (w, x, y, z).
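The conversion is just a reordering of the scalar component. A sketch, assuming pyquaternion's Quaternion(w, x, y, z) constructor is the "pyqt" type in question:

    import numpy as np
    from pyquaternion import Quaternion

    q_np = np.array([0.0, 0.0, 0.7071, 0.7071])              # numpy order: (x, y, z, w)
    q_pyqt = Quaternion(q_np[3], q_np[0], q_np[1], q_np[2])  # pyqt order: (w, x, y, z)
    q_back = np.array([q_pyqt.x, q_pyqt.y, q_pyqt.z, q_pyqt.w])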
roboskin.calibration.utils.quaternion.pyqt_to_tf(q)[source]¶
Convert a pyqt quaternion to a tf array. Order of a tf quaternion: q = (x, y, z, w).
Order of a pyqt quaternion: q = (w, x, y, z).
roboskin.calibration.utils.quaternion.quaternion_from_two_vectors(source, target)[source]¶
Computes the quaternion that rotates vector source onto vector target.
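One standard construction (a sketch under the usual shortest-arc convention; not necessarily the packaged implementation) builds the quaternion from the cross and dot products of the two vectors:

    import numpy as np

    def quat_from_two_vectors(source, target):
        """Return an [x, y, z, w] quaternion rotating `source` onto `target`."""
        s = source / np.linalg.norm(source)
        t = target / np.linalg.norm(target)
        axis = np.cross(s, t)
        w = 1.0 + float(np.dot(s, t))
        if w < 1e-8:                       # opposite vectors: pick any orthogonal axis
            axis = np.cross(s, [1.0, 0.0, 0.0])
            if np.linalg.norm(axis) < 1e-8:
                axis = np.cross(s, [0.0, 1.0, 0.0])
            w = 0.0
        q = np.array([axis[0], axis[1], axis[2], w])
        return q / np.linalg.norm(q)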
roboskin.calibration.utils.quaternion.quaternion_l2_distance(q1, q2)[source]¶
A metric for computing the distance between two quaternions.
Sources:
- https://fgiesen.wordpress.com/2013/01/07/small-note-on-quaternion-distance-metrics/
- http://kieranwynn.github.io/pyquaternion/#accessing-individual-elements
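The docstring does not give the exact formula; following the referenced note, for unit quaternions the squared L2 distance reduces to 2(1 − ⟨q1, q2⟩), often computed with |⟨q1, q2⟩| so that q and −q (the same rotation) have zero distance. A sketch of that variant:

    import numpy as np

    def quaternion_l2_distance_sketch(q1, q2):
        """Sign-invariant squared L2 distance between unit quaternions."""
        return 2.0 * (1.0 - abs(float(np.dot(q1, q2))))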
roboskin.calibration.utils.rotational_acceleration module¶
roboskin.calibration.utils.rotational_acceleration.centripetal_acceleration(r, w)[source]¶
\[a = \omega \times (\omega \times r)\]
- Parameters
r (float) – Position vector r of the body in the inertial coordinate frame
w (float) – Angular velocity about an axis in the inertial coordinate frame
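A minimal NumPy sketch of this formula, with vector-valued r and ω for illustration:

    import numpy as np

    def centripetal_acceleration_sketch(r, w):
        """a = w x (w x r) for 3-vectors r and w."""
        return np.cross(w, np.cross(w, r))

    r = np.array([1.0, 0.0, 0.0])   # 1 m from the rotation axis
    w = np.array([0.0, 0.0, 2.0])   # 2 rad/s about z
    print(centripetal_acceleration_sketch(r, w))  # [-4.  0.  0.], i.e. |a| = w^2 * r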
roboskin.calibration.utils.rotational_acceleration.compute_2nd_order_derivative(x_func, t=0, dt=0.001)[source]¶
Takes the 2nd-order derivative of x_func. This computation comes from the Taylor expansion:
\[\begin{aligned}
x(t+dt) &= x(t) + x'(t)\,dt + \tfrac{1}{2}x''(t)\,dt^2\\
x(t-dt) &= x(t) - x'(t)\,dt + \tfrac{1}{2}x''(t)\,dt^2\\
x(t+dt) + x(t-dt) &= 2x(t) + x''(t)\,dt^2\\
x''(t) &= \frac{x(t+dt) + x(t-dt) - 2x(t)}{dt^2}
\end{aligned}\]
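A minimal sketch of this central-difference formula, checked against a function whose second derivative is known:

    def second_derivative(x_func, t=0.0, dt=0.001):
        """Central-difference estimate of x''(t)."""
        return (x_func(t + dt) + x_func(t - dt) - 2.0 * x_func(t)) / dt ** 2

    # x(t) = t^3  =>  x''(t) = 6t
    print(second_derivative(lambda t: t ** 3, t=2.0))  # ~12.0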
roboskin.calibration.utils.rotational_acceleration.compute_acceleration_analytically(inert_w_body, inert_r_body, inert_alpha_body, body_R_inert=None, body_R_world=None, inert_R_world=None, coordinate='body')[source]¶
There are three coordinate frames to remember:
1. World Frame (fixed to the world)
2. Inertial Frame (each rotating joint coordinate)
3. Body Frame (SU coordinate)
We use the notation coord1_variable_coord2, which denotes a variable of coord2 defined in the coord1 frame.
For example, the SU's linear acceleration can be written as rs_A_su: the acceleration A of the SU defined in the RS (world) coordinate. This is what we can measure from the real IMU as well.
We use the same notation for a rotation matrix coord1_R_coord2, which represents how to rotate some vector from coord2 to coord1.
For example, if you want gravity in the SU coordinate, su_g = su_R_rs * rs_g. As a result, the gravity defined in the RS frame is converted to the SU coordinate.
The inverse of a rotation matrix is its transpose, so a vector can be rotated back to its original frame via rs_g = rs_R_su * su_g = su_R_rs.T * su_g.
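A small NumPy sketch of this notation and the transpose relationship (the 30-degree rotation is an arbitrary example):

    import numpy as np

    theta = np.deg2rad(30.0)
    # su_R_rs rotates vectors expressed in the RS frame into the SU frame (rotation about z)
    su_R_rs = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                        [np.sin(theta),  np.cos(theta), 0.0],
                        [0.0,            0.0,           1.0]])

    rs_g = np.array([0.0, 0.0, -9.81])   # gravity defined in the RS frame
    su_g = su_R_rs @ rs_g                # gravity expressed in the SU frame
    rs_g_back = su_R_rs.T @ su_g         # rs_R_su = su_R_rs.T rotates it back
    assert np.allclose(rs_g, rs_g_back)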
roboskin.calibration.utils.rotational_acceleration.compute_acceleration_numerically(kinematic_chain, i_rotate_joint, i_su, current_time, angle_func, method)[source]¶
Returns the tangential acceleration in the RS coordinate. The acceleration is computed by taking the 2nd derivative of the position. This small change in position in the rotating coordinate is only in the tangential direction; thus, only the tangential acceleration can be computed with this method.
- Parameters
kinematic_chain (roboskin.calibration.kinematic_chain.KinematicChain) – Robot’s Kinematic Chain
i_rotate_joint (int) – dof d
i_su (int) – i-th SU (IMU)
current_time (float) – Current time
angle_func (function) – A function to compute the current angle at time t
roboskin.calibration.utils.rotational_acceleration.estimate_acceleration(kinematic_chain, i_rotate_joint, i_su, method, joint_angular_velocity, joint_angular_acceleration=0, current_time=0, angle_func=None)[source]¶
Compute an acceleration value from positions:
\[a = \frac{f(\Delta t) + f(-\Delta t) - 2 f(0)}{h^2}\]
This equation comes from the Taylor expansion used to obtain the second derivative of f(t):
\[f(t+\Delta t) = f(t) + hf'(t) + \frac{h^2}{2}f''(t)\]
\[f(t-\Delta t) = f(t) - hf'(t) + \frac{h^2}{2}f''(t)\]
Add both equations and plug in t = 0 to get the equation above.
- Parameters
kinematic_chain (roboskin.calibration.kinematic_chain.KinematicChain) – Robot’s Kinematic Chain
i_rotate_joint (int) – dof d
i_su (int) – i-th SU
joint_angular_velocity (float) – Angular velocity
max_angular_velocity (float) – Maximum angular velocity
current_time (float) – Current time
method (str) – Determines whether the analytical, our, mittendorfer, or modified_mittendorfer method is used (the last modified due to some possibly missing terms).
roboskin.calibration.utils.rotational_acceleration_torch module¶
roboskin.calibration.utils.rotational_acceleration_torch.centripetal_acceleration_torch(r, w)[source]¶
\[a = \omega \times (\omega \times r)\]
- Parameters
r (float) – Position vector r of the body in the inertial coordinate frame
w (float) – Angular velocity about an axis in the inertial coordinate frame
roboskin.calibration.utils.rotational_acceleration_torch.compute_2nd_order_derivative_torch(x_func, t=0, dt=0.001)[source]¶
Takes the 2nd-order derivative of x_func. This computation comes from the Taylor expansion:
\[\begin{aligned}
x(t+dt) &= x(t) + x'(t)\,dt + \tfrac{1}{2}x''(t)\,dt^2\\
x(t-dt) &= x(t) - x'(t)\,dt + \tfrac{1}{2}x''(t)\,dt^2\\
x(t+dt) + x(t-dt) &= 2x(t) + x''(t)\,dt^2\\
x''(t) &= \frac{x(t+dt) + x(t-dt) - 2x(t)}{dt^2}
\end{aligned}\]
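A minimal PyTorch sketch of the same central-difference formula:

    import torch

    def second_derivative_torch(x_func, t=0.0, dt=0.001):
        """Central-difference estimate of x''(t) using torch tensors."""
        t = torch.as_tensor(t, dtype=torch.float64)
        dt = torch.as_tensor(dt, dtype=torch.float64)
        return (x_func(t + dt) + x_func(t - dt) - 2.0 * x_func(t)) / dt ** 2

    # x(t) = t^3  =>  x''(t) = 6t
    print(second_derivative_torch(lambda t: t ** 3, t=2.0))  # ~tensor(12., dtype=torch.float64)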
roboskin.calibration.utils.rotational_acceleration_torch.compute_acceleration_analytically_torch(inert_w_body, inert_r_body, inert_alpha_body, body_R_inert=None, body_R_world=None, inert_R_world=None, coordinate='body')[source]¶
There are three coordinate frames to remember:
1. World Frame (fixed to the world)
2. Inertial Frame (each rotating joint coordinate)
3. Body Frame (SU coordinate)
We use the notation coord1_variable_coord2, which denotes a variable of coord2 defined in the coord1 frame.
For example, the SU's linear acceleration can be written as rs_A_su: the acceleration A of the SU defined in the RS (world) coordinate. This is what we can measure from the real IMU as well.
We use the same notation for a rotation matrix coord1_R_coord2, which represents how to rotate some vector from coord2 to coord1.
For example, if you want gravity in the SU coordinate, su_g = su_R_rs * rs_g. As a result, the gravity defined in the RS frame is converted to the SU coordinate.
The inverse of a rotation matrix is its transpose, so a vector can be rotated back to its original frame via rs_g = rs_R_su * su_g = su_R_rs.T * su_g.
roboskin.calibration.utils.rotational_acceleration_torch.compute_acceleration_numerically_torch(kinematic_chain, i_rotate_joint, i_su, current_time, angle_func, method)[source]¶
Returns the tangential acceleration in the RS coordinate. The acceleration is computed by taking the 2nd derivative of the position. This small change in position in the rotating coordinate is only in the tangential direction; thus, only the tangential acceleration can be computed with this method.
- Parameters
kinematic_chain (roboskin.calibration.kinematic_chain.KinematicChain) – Robot’s Kinematic Chain
i_rotate_joint (int) – dof d
i_su (int) – i-th SU (IMU)
current_time (float) – Current time
angle_func (function) – A function to compute the current angle at time t
roboskin.calibration.utils.rotational_acceleration_torch.estimate_acceleration_torch(kinematic_chain, i_rotate_joint, i_su, method, joint_angular_velocity, joint_angular_acceleration=0, current_time=0, angle_func=None)[source]¶
Compute an acceleration value from positions:
\[a = \frac{f(\Delta t) + f(-\Delta t) - 2 f(0)}{h^2}\]
This equation comes from the Taylor expansion used to obtain the second derivative of f(t):
\[f(t+\Delta t) = f(t) + hf'(t) + \frac{h^2}{2}f''(t)\]
\[f(t-\Delta t) = f(t) - hf'(t) + \frac{h^2}{2}f''(t)\]
Add both equations and plug in t = 0 to get the equation above.
- Parameters
kinematic_chain (roboskin.calibration.kinematic_chain.KinematicChain) – Robot’s Kinematic Chain
i_rotate_joint (int) – dof d
i_su (int) – i-th SU
joint_angular_velocity (float) – Angular velocity
max_angular_velocity (float) – Maximum angular velocity
current_time (float) – Current time
method (str) – Determines whether the analytical, our, mittendorfer, or modified_mittendorfer method is used (the last modified due to some possibly missing terms).
Module contents¶
roboskin.calibration.utils.add_noise(data, data_types, sigma=1)[source]¶
- Parameters
data_types (List[str]) –
roboskin.calibration.utils.add_outlier(data, data_types, sigma=3, outlier_ratio=0.25)[source]¶
- Parameters
data_types (List[str]) –
roboskin.calibration.utils.angle_between_quaternions(q_1, q_2, output_in_degrees=False)[source]¶
Computes the angle between quaternions q_1 and q_2. Note that the input quaternions should be of the form np.ndarray([x, y, z, w]). The formula for the angle between quaternions is:
\[\theta = \cos^{-1}\bigl(2\langle q_1,q_2\rangle^2 - 1\bigr)\]
where ⟨q_1, q_2⟩ denotes the inner product of the two quaternions:
\[\langle a_1 + b_1 \textbf{i} + c_1 \textbf{j} + d_1 \textbf{k},\ a_2 + b_2 \textbf{i} + c_2 \textbf{j} + d_2 \textbf{k}\rangle = a_1a_2 + b_1b_2 + c_1c_2 + d_1d_2.\]
Reference: https://math.stackexchange.com/questions/90081/quaternion-distance
- Parameters
q_1 (np.ndarray) – First quaternion
q_2 (np.ndarray) – Second quaternion
output_in_degrees (bool) – If True, return the angle in degrees
- Returns
float – Angle between the quaternions, in degrees if output_in_degrees is True (otherwise in radians)
roboskin.calibration.utils.initialize_logging(log_level, filename=None)[source]¶
Initialize the logging module with the given log_level.
- Parameters
log_level (str) –
filename (str) –
roboskin.calibration.utils.load_data(robot, directory)[source]¶
Collects acceleration data together with the corresponding poses.
- Returns
data – Includes both static and dynamic acceleration data
- Return type
Data
roboskin.calibration.utils.load_robot_configs(configdir, robot)[source]¶
Loads the robot's DH parameters, the SUs' DH parameters, and their poses.
- Parameters
configdir (str) – Path to the config directory where the robot YAML files exist
robot (str) – Name of the robot
roboskin.calibration.utils.n2s(x, precision=3)[source]¶
Converts a numpy array to a string.
- Parameters
x (np.array) – The numpy array to convert to a string.
precision (int) – The precision desired on each entry in the array.
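A minimal sketch of such a helper (np.array2string is one way to do it; the packaged implementation may differ):

    import numpy as np

    def n2s_sketch(x, precision=3):
        """Format a numpy array as a compact string with the given precision."""
        return np.array2string(np.asarray(x), precision=precision, separator=', ')

    print(n2s_sketch(np.array([1.23456, 7.891011])))  # [1.235, 7.891]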
roboskin.calibration.utils.np_to_pyqt(q)[source]¶
Convert a numpy quaternion to a pyqt array. Order of a numpy quaternion: q = (x, y, z, w).
Order of a pyqt quaternion: q = (w, x, y, z).
roboskin.calibration.utils.pyqt_to_np(q)[source]¶
Convert a pyqt quaternion to a numpy array. Order of a numpy quaternion: q = (x, y, z, w).
Order of a pyqt quaternion: q = (w, x, y, z).
roboskin.calibration.utils.pyqt_to_tf(q)[source]¶
Convert a pyqt quaternion to a tf array. Order of a tf quaternion: q = (x, y, z, w).
Order of a pyqt quaternion: q = (w, x, y, z).
roboskin.calibration.utils.quaternion_from_two_vectors(source, target)[source]¶
Computes the quaternion that rotates vector source onto vector target.
roboskin.calibration.utils.quaternion_l2_distance(q1, q2)[source]¶
A metric for computing the distance between two quaternions.
Sources:
- https://fgiesen.wordpress.com/2013/01/07/small-note-on-quaternion-distance-metrics/
- http://kieranwynn.github.io/pyquaternion/#accessing-individual-elements