```python
def jws_to_csv(input_file, output_file, fields_of_interest=None):
    """
    Convert a file of JWS tokens (one per line) to CSV.

    fields_of_interest: list of claim names to extract
    (e.g., ['sub', 'exp', 'role']).
    """
    tokens = Path(input_file).read_text().splitlines()
    rows = []
```
In this post, I’ll walk through why you’d want a JWS-to-CSV converter, the structure of a JWS, and a simple Python script to get the job done. A JSON Web Signature (JWS) is a way to securely transmit JSON data between parties with a signature; it’s the technical backbone of JWT (when signed). A compact JWS has three parts, each base64url-encoded and separated by dots:

1. Header: the JOSE header (algorithm and token metadata)
2. Payload: the data being signed (for a JWT, the claims)
3. Signature: computed over the header and payload
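To make that structure concrete, here’s a minimal sketch that assembles the compact JWS shape by hand. The header, claims, and signature here are invented for illustration (the signature is a dummy placeholder, not a real HMAC):

```python
import base64
import json

def b64url(raw: bytes) -> str:
    # base64url without the trailing '=' padding, per RFC 7515
    return base64.urlsafe_b64encode(raw).rstrip(b"=").decode()

header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
payload = b64url(json.dumps({"sub": "alice", "exp": 1735689600}).encode())
signature = b64url(b"\x00" * 32)  # dummy placeholder, not a real signature

token = ".".join([header, payload, signature])  # header.payload.signature
```

Splitting `token` on `"."` gives you back the three parts; the middle one is what the converter below decodes.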
```python
from pandas import json_normalize

normalized = json_normalize(payload)
rows.append(normalized.iloc[0].to_dict())
```

## What About Invalid or Expired Signatures?

A pure converter doesn’t need to verify the signature – it just decodes the payload. However, you may want to add a signature_valid column using a cryptographic library (e.g., cryptography, or PyJWT with verification disabled for decoding and enabled for a separate validity check).
```python
    for token in tokens:
        if not token.strip():
            continue
        payload = decode_jws_payload(token)
        # If no fields specified, take all top-level keys
        if fields_of_interest is None:
            rows.append(payload)
        else:
            filtered = {field: payload.get(field) for field in fields_of_interest}
            rows.append(filtered)

    # Write the collected rows out as CSV
    pd.DataFrame(rows).to_csv(output_file, index=False)
```
Once you have the CSV, the world opens up – pivot tables, duplicate detection, expiration audits, and even machine learning on claim patterns.
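For instance, with the claims loaded into a DataFrame (a hypothetical hand-built one below; in practice you’d `pd.read_csv("tokens.csv")`), an expiration audit and a duplicate check are each one line:

```python
import time

import pandas as pd

# Hypothetical claims table; in practice: df = pd.read_csv("tokens.csv")
df = pd.DataFrame({
    "sub": ["alice", "bob", "alice"],
    "exp": [1500000000, 4102444800, 1500000000],
})

expired = df[df["exp"] < time.time()]                         # expiration audit
dupes = df[df.duplicated(subset=["sub", "exp"], keep=False)]  # repeated tokens
tokens_per_user = df.groupby("sub").size()                    # pivot-style count
```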
To handle nested claims, replace the row-building section with the json_normalize version shown above.
Opening a raw .log file full of base64url-encoded strings isn’t practical. But drop that data into a CSV and you can sort, filter, and pivot.
Do not trust the claims from an unverified JWS in a security context. For analysis it’s fine; for access control, always verify the signature.

## Real-World Example

Input (tokens.txt): one compact JWS token per line.
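If you don’t have real tokens handy, here’s a sketch that fabricates a small tokens.txt with valid HS256 signatures. The claims and the example-secret key are made up for illustration:

```python
import base64
import hashlib
import hmac
import json

def b64url(raw: bytes) -> str:
    # base64url without trailing '=' padding
    return base64.urlsafe_b64encode(raw).rstrip(b"=").decode()

def make_token(claims: dict, secret: bytes) -> str:
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = b64url(json.dumps(claims).encode())
    sig = hmac.new(secret, f"{header}.{payload}".encode(), hashlib.sha256).digest()
    return f"{header}.{payload}.{b64url(sig)}"

with open("tokens.txt", "w") as f:
    for sub in ("alice", "bob", "carol"):
        f.write(make_token({"sub": sub, "exp": 1735689600}, b"example-secret") + "\n")
```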
Extend the script to handle JWE (encrypted tokens) or add signature validation columns. Happy data wrangling. Have you built a similar converter for a different token format? Let me know in the comments.
```
pip install PyJWT pandas
```

```python
import base64
import json
import csv
import sys

import pandas as pd
from pathlib import Path


def decode_jws_payload(jws_token):
    """Decode the payload (second part) of a compact JWS."""
    try:
        parts = jws_token.split('.')
        if len(parts) != 3:
            raise ValueError("Invalid compact JWS: expected 3 parts")
        # Decode base64url (add padding if needed)
        payload_b64 = parts[1]
        padding = '=' * (-len(payload_b64) % 4)
        payload_bytes = base64.urlsafe_b64decode(payload_b64 + padding)
        return json.loads(payload_bytes)
    except Exception as e:
        return {"error": str(e), "raw_token": jws_token[:50]}
```
If you work with JWT (JSON Web Tokens) or JWS (JSON Web Signatures) in logging, analytics, or batch processing, you’ve likely run into the same headache: how do you analyze hundreds or thousands of these tokens in a human-readable way?
To flatten these into CSV columns (e.g., user.id, permissions.0), you can use pandas.json_normalize() instead of the direct DataFrame constructor.
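As a quick illustration with an invented payload: json_normalize turns nested objects into dot-separated columns, while list-valued claims such as permissions stay whole unless you expand them in a separate step.

```python
import pandas as pd

payload = {"sub": "alice", "user": {"id": 42, "org": "acme"}, "permissions": ["read", "write"]}
df = pd.json_normalize(payload)
print(sorted(df.columns))  # nested 'user' becomes 'user.id' and 'user.org'
```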